[jira] [Updated] (FLINK-3231) Handle Kinesis-side resharding in Kinesis streaming consumer

2016-06-17 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-3231:
---
Component/s: Kinesis Connector

> Handle Kinesis-side resharding in Kinesis streaming consumer
> 
>
> Key: FLINK-3231
> URL: https://issues.apache.org/jira/browse/FLINK-3231
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kinesis Connector, Streaming Connectors
>Affects Versions: 1.1.0
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
> Fix For: 1.1.0
>
>
> A big difference between Kinesis shards and Kafka partitions is that Kinesis 
> users can choose to "merge" and "split" shards at any time for adjustable 
> stream throughput capacity. This article explains this quite clearly: 
> https://brandur.org/kinesis-by-example.
> This will break the static shard-to-task mapping implemented in the basic 
> version of the Kinesis consumer 
> (https://issues.apache.org/jira/browse/FLINK-3229). The static shard-to-task 
> mapping is done in a simple round-robin-like distribution which can be 
> locally determined at each Flink consumer task (Flink Kafka consumer does 
> this too).
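
For illustration only, a minimal sketch of the locally determined round-robin-like
assignment described above (class and method names are hypothetical, not part of the
actual consumer):

{code:java}
import java.util.ArrayList;
import java.util.List;

public class StaticShardAssigner {

    /**
     * Each consumer subtask calls this with the full, identically ordered list of
     * shard ids and keeps only the shards whose index maps to its own subtask index.
     */
    public static List<String> shardsForSubtask(
            List<String> allShardIds, int subtaskIndex, int parallelism) {
        List<String> assigned = new ArrayList<>();
        for (int i = 0; i < allShardIds.size(); i++) {
            if (i % parallelism == subtaskIndex) {
                assigned.add(allShardIds.get(i));
            }
        }
        return assigned;
    }
}
{code}

Because every subtask computes the same assignment deterministically, no coordination
is needed as long as the shard list never changes; resharding is exactly what breaks
this assumption.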
> To handle Kinesis resharding, we will need some way to let the Flink consumer 
> tasks coordinate which shards they are currently handling, and allow a task to 
> ask the coordinator for a shard reassignment when it encounters a closed shard 
> at runtime (shards are closed by Kinesis when they are merged or split).
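
A rough sketch of what a per-shard consumer loop with such a reassignment hook could
look like, assuming the AWS Java SDK Kinesis client; the ShardCoordinator interface is
hypothetical and only stands in for whatever coordination mechanism is chosen:

{code:java}
import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.model.GetRecordsRequest;
import com.amazonaws.services.kinesis.model.GetRecordsResult;

public class ShardConsumerLoop {

    /** Hypothetical coordination hook; not part of the AWS SDK or Flink. */
    public interface ShardCoordinator {
        void requestReassignment(String closedShardId, int subtaskIndex);
    }

    public static void consumeShard(AmazonKinesis kinesis, String shardId,
            String shardIterator, ShardCoordinator coordinator, int subtaskIndex) {
        while (shardIterator != null) {
            GetRecordsResult result = kinesis.getRecords(
                    new GetRecordsRequest()
                            .withShardIterator(shardIterator)
                            .withLimit(1000));
            // ... emit result.getRecords() downstream here ...
            shardIterator = result.getNextShardIterator();
        }
        // A null next iterator means Kinesis has closed this shard (it was merged
        // or split), so ask the coordinator for a new assignment.
        coordinator.requestReassignment(shardId, subtaskIndex);
    }
}
{code}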
> We need a centralized coordinator state store that is visible to all Flink 
> consumer tasks. Tasks can use this state store to locally determine which 
> shards they can be reassigned. Amazon KCL uses a DynamoDB table for this 
> coordination, but as described in 
> https://issues.apache.org/jira/browse/FLINK-3211, we unfortunately can't use 
> KCL for the implementation of the consumer if we want to leverage Flink's 
> checkpointing mechanics. For our own implementation, ZooKeeper can be used as 
> this state store, but that would require users to set up ZooKeeper for the 
> consumer to work.
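
As a sketch of the ZooKeeper option (the znode layout and class name are hypothetical,
and Apache Curator is assumed as the ZooKeeper client), the store could simply record
which subtask currently owns which shard:

{code:java}
import java.nio.charset.StandardCharsets;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ZkShardStateStore {

    private final CuratorFramework client;
    private final String root; // e.g. "/flink-kinesis/<stream-name>/shards"

    public ZkShardStateStore(String zkConnect, String root) {
        this.client = CuratorFrameworkFactory.newClient(
                zkConnect, new ExponentialBackoffRetry(1000, 3));
        this.client.start();
        this.root = root;
    }

    /** Record (or update) the subtask that currently owns a shard. */
    public void assignShard(String shardId, int subtaskIndex) throws Exception {
        String path = root + "/" + shardId;
        byte[] owner = String.valueOf(subtaskIndex).getBytes(StandardCharsets.UTF_8);
        if (client.checkExists().forPath(path) == null) {
            client.create().creatingParentsIfNeeded().forPath(path, owner);
        } else {
            client.setData().forPath(path, owner);
        }
    }

    /** Read the subtask that owns a shard, or -1 if it is unassigned. */
    public int ownerOf(String shardId) throws Exception {
        String path = root + "/" + shardId;
        if (client.checkExists().forPath(path) == null) {
            return -1;
        }
        return Integer.parseInt(
                new String(client.getData().forPath(path), StandardCharsets.UTF_8));
    }
}
{code}

The same interface could in principle be backed by DynamoDB instead, which is what KCL
does; ZooKeeper is just the option named above for our own implementation.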
> Since this feature involves extensive work, it is opened as a separate 
> sub-task of the basic implementation, 
> https://issues.apache.org/jira/browse/FLINK-3229.





[jira] [Updated] (FLINK-3231) Handle Kinesis-side resharding in Kinesis streaming consumer

2016-06-16 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-3231:
---
Fix Version/s: 1.1.0

> Handle Kinesis-side resharding in Kinesis streaming consumer
> 
>
> Key: FLINK-3231
> URL: https://issues.apache.org/jira/browse/FLINK-3231
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming Connectors
>Affects Versions: 1.1.0
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
> Fix For: 1.1.0
>
>
> A big difference between Kinesis shards and Kafka partitions is that Kinesis 
> users can choose to "merge" and "split" shards at any time for adjustable 
> stream throughput capacity. This article explains this quite clearly: 
> https://brandur.org/kinesis-by-example.
> This will break the static shard-to-task mapping implemented in the basic 
> version of the Kinesis consumer 
> (https://issues.apache.org/jira/browse/FLINK-3229). The static shard-to-task 
> mapping is done in a simple round-robin-like distribution which can be 
> locally determined at each Flink consumer task (Flink Kafka consumer does 
> this too).
> To handle Kinesis resharding, we will need some way to let the Flink consumer 
> tasks coordinate which shards they are currently handling, and allow a task to 
> ask the coordinator for a shard reassignment when it encounters a closed shard 
> at runtime (shards are closed by Kinesis when they are merged or split).
> We need a centralized coordinator state store that is visible to all Flink 
> consumer tasks. Tasks can use this state store to locally determine which 
> shards they can be reassigned. Amazon KCL uses a DynamoDB table for this 
> coordination, but as described in 
> https://issues.apache.org/jira/browse/FLINK-3211, we unfortunately can't use 
> KCL for the implementation of the consumer if we want to leverage Flink's 
> checkpointing mechanics. For our own implementation, ZooKeeper can be used as 
> this state store, but that would require users to set up ZooKeeper for the 
> consumer to work.
> Since this feature involves extensive work, it is opened as a separate 
> sub-task of the basic implementation, 
> https://issues.apache.org/jira/browse/FLINK-3229.





[jira] [Updated] (FLINK-3231) Handle Kinesis-side resharding in Kinesis streaming consumer

2016-06-04 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-3231:
---
Affects Version/s: (was: 1.0.0)
   1.1.0

> Handle Kinesis-side resharding in Kinesis streaming consumer
> 
>
> Key: FLINK-3231
> URL: https://issues.apache.org/jira/browse/FLINK-3231
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming Connectors
>Affects Versions: 1.1.0
>Reporter: Tzu-Li (Gordon) Tai
>
> A big difference between Kinesis shards and Kafka partitions is that Kinesis 
> users can choose to "merge" and "split" shards at any time for adjustable 
> stream throughput capacity. This article explains this quite clearly: 
> https://brandur.org/kinesis-by-example.
> This will break the static shard-to-task mapping implemented in the basic 
> version of the Kinesis consumer 
> (https://issues.apache.org/jira/browse/FLINK-3229). The static shard-to-task 
> mapping is done in a simple round-robin-like distribution which can be 
> locally determined at each Flink consumer task (Flink Kafka consumer does 
> this too).
> To handle Kinesis resharding, we will need some way to let the Flink consumer 
> tasks coordinate which shards they are currently handling, and allow a task to 
> ask the coordinator for a shard reassignment when it encounters a closed shard 
> at runtime (shards are closed by Kinesis when they are merged or split).
> We need a centralized coordinator state store that is visible to all Flink 
> consumer tasks. Tasks can use this state store to locally determine which 
> shards they can be reassigned. Amazon KCL uses a DynamoDB table for this 
> coordination, but as described in 
> https://issues.apache.org/jira/browse/FLINK-3211, we unfortunately can't use 
> KCL for the implementation of the consumer if we want to leverage Flink's 
> checkpointing mechanics. For our own implementation, ZooKeeper can be used as 
> this state store, but that would require users to set up ZooKeeper for the 
> consumer to work.
> Since this feature involves extensive work, it is opened as a separate 
> sub-task of the basic implementation, 
> https://issues.apache.org/jira/browse/FLINK-3229.





[jira] [Updated] (FLINK-3231) Handle Kinesis-side resharding in Kinesis streaming consumer

2016-01-13 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-3231:
---
Description: 
A big difference between Kinesis shards and Kafka partitions is that Kinesis 
users can choose to "merge" and "split" shards at any time for adjustable 
stream throughput capacity. This article explains this quite clearly: 
https://brandur.org/kinesis-by-example.

This will break the static shard-to-task mapping implemented in the basic 
version of the Kinesis consumer 
(https://issues.apache.org/jira/browse/FLINK-3229). The static shard-to-task 
mapping is done in a simple round-robin-like distribution which can be locally 
determined at each Flink consumer task (Flink Kafka consumer does this too).

To handle Kinesis resharding, we will need some way to let the Flink consumer 
tasks coordinate which shards they are currently handling, and allow a task to 
ask the coordinator for a shard reassignment when it encounters a closed shard 
at runtime (shards are closed by Kinesis when they are merged or split).

We need a centralized coordinator state store that is visible to all Flink 
consumer tasks. Tasks can use this state store to locally determine which shards 
they can be reassigned. Amazon KCL uses a DynamoDB table for this coordination, 
but as described in https://issues.apache.org/jira/browse/FLINK-3211, we 
unfortunately can't use KCL for the implementation of the consumer if we want 
to leverage Flink's checkpointing mechanics. For our own implementation, 
ZooKeeper can be used as this state store, but that would require users to set 
up ZooKeeper for the consumer to work.

Since this feature involves extensive work, it is opened as a separate 
sub-task of the basic implementation, 
https://issues.apache.org/jira/browse/FLINK-3229.

  was:
A big difference between Kinesis shards and Kafka partitions is that Kinesis 
users can choose to "merge" and "split" shards at any time for adjustable 
stream throughput capacity. This article explains this quite clearly: 
https://brandur.org/kinesis-by-example.

This will break the static shard-to-task mapping implemented in the basic 
version of the Kinesis consumer 
(https://issues.apache.org/jira/browse/FLINK-3211). The static shard-to-task 
mapping is done in a simple round-robin-like distribution which can be locally 
determined at each Flink consumer task (Flink Kafka consumer does this too).

To handle Kinesis resharding, we will need some way to let the Flink consumer 
tasks coordinate which shards they are currently handling, and allow a task to 
ask the coordinator for a shard reassignment when it encounters a closed shard 
at runtime (shards are closed by Kinesis when they are merged or split).

A possible approach is a centralized coordinator state store that is visible to 
all Flink consumer tasks. Tasks can use this state store to locally determine 
which shards they can be reassigned. ZooKeeper can be used as this state store, 
but that would require users to set up ZooKeeper for the consumer to work.

Since this feature involves extensive work, it is opened as a separate 
sub-task of the basic implementation, 
https://issues.apache.org/jira/browse/FLINK-3211.


> Handle Kinesis-side resharding in Kinesis streaming consumer
> 
>
> Key: FLINK-3231
> URL: https://issues.apache.org/jira/browse/FLINK-3231
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming Connectors
>Affects Versions: 1.0.0
>Reporter: Tzu-Li (Gordon) Tai
>
> A big difference between Kinesis shards and Kafka partitions is that Kinesis 
> users can choose to "merge" and "split" shards at any time for adjustable 
> stream throughput capacity. This article explains this quite clearly: 
> https://brandur.org/kinesis-by-example.
> This will break the static shard-to-task mapping implemented in the basic 
> version of the Kinesis consumer 
> (https://issues.apache.org/jira/browse/FLINK-3229). The static shard-to-task 
> mapping is done in a simple round-robin-like distribution which can be 
> locally determined at each Flink consumer task (Flink Kafka consumer does 
> this too).
> To handle Kinesis resharding, we will need some way to let the Flink consumer 
> tasks coordinate which shards they are currently handling, and allow a task to 
> ask the coordinator for a shard reassignment when it encounters a closed shard 
> at runtime (shards are closed by Kinesis when they are merged or split).
> We need a centralized coordinator state store that is visible to all Flink 
> consumer tasks. Tasks can use this state store to locally determine which 
> shards they can be reassigned. Amazon KCL uses a DynamoDB table for this 
> coordination, but as described in 
> 

[jira] [Updated] (FLINK-3231) Handle Kinesis-side resharding in Kinesis streaming consumer

2016-01-13 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-3231:
---
Description: 
A big difference between Kinesis shards and Kafka partitions is that Kinesis 
users can choose to "merge" and "split" shards at any time for adjustable 
stream throughput capacity. This article explains this quite clearly: 
https://brandur.org/kinesis-by-example.

This will break the static shard-to-task mapping implemented in the basic 
version of the Kinesis consumer 
(https://issues.apache.org/jira/browse/FLINK-3229). The static shard-to-task 
mapping is done in a simple round-robin-like distribution which can be locally 
determined at each Flink consumer task (Flink Kafka consumer does this too).

To handle Kinesis resharding, we will need some way to let the Flink consumer 
tasks coordinate which shards they are currently handling, and allow a task to 
ask the coordinator for a shard reassignment when it encounters a closed shard 
at runtime (shards are closed by Kinesis when they are merged or split).

We need a centralized coordinator state store that is visible to all Flink 
consumer tasks. Tasks can use this state store to locally determine which shards 
they can be reassigned. Amazon KCL uses a DynamoDB table for this coordination, 
but as described in https://issues.apache.org/jira/browse/FLINK-3211, we 
unfortunately can't use KCL for the implementation of the consumer if we want 
to leverage Flink's checkpointing mechanics. For our own implementation, 
ZooKeeper can be used as this state store, but that would require users to set 
up ZooKeeper for the consumer to work.

Since this feature involves extensive work, it is opened as a separate 
sub-task of the basic implementation, 
https://issues.apache.org/jira/browse/FLINK-3229.

  was:
A big difference between Kinesis shards and Kafka partitions is that Kinesis 
users can choose to "merge" and "split" shards at any time for adjustable 
stream throughput capacity. This article explains this quite clearly: 
https://brandur.org/kinesis-by-example.

This will break the static shard-to-task mapping implemented in the basic 
version of the Kinesis consumer 
(https://issues.apache.org/jira/browse/FLINK-3229). The static shard-to-task 
mapping is done in a simple round-robin-like distribution which can be locally 
determined at each Flink consumer task (Flink Kafka consumer does this too).

To handle Kinesis resharding, we will need some way to let the Flink consumer 
tasks coordinate which shards they are currently handling, and allow a task to 
ask the coordinator for a shard reassignment when it encounters a closed shard 
at runtime (shards are closed by Kinesis when they are merged or split).

We need a centralized coordinator state store that is visible to all Flink 
consumer tasks. Tasks can use this state store to locally determine which shards 
they can be reassigned. Amazon KCL uses a DynamoDB table for this coordination, 
but as described in https://issues.apache.org/jira/browse/FLINK-3229, we 
unfortunately can't use KCL for the implementation of the consumer if we want 
to leverage Flink's checkpointing mechanics. For our own implementation, 
ZooKeeper can be used as this state store, but that would require users to set 
up ZooKeeper for the consumer to work.

Since this feature involves extensive work, it is opened as a separate 
sub-task of the basic implementation, 
https://issues.apache.org/jira/browse/FLINK-3229.


> Handle Kinesis-side resharding in Kinesis streaming consumer
> 
>
> Key: FLINK-3231
> URL: https://issues.apache.org/jira/browse/FLINK-3231
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming Connectors
>Affects Versions: 1.0.0
>Reporter: Tzu-Li (Gordon) Tai
>
> A big difference between Kinesis shards and Kafka partitions is that Kinesis 
> users can choose to "merge" and "split" shards at any time for adjustable 
> stream throughput capacity. This article explains this quite clearly: 
> https://brandur.org/kinesis-by-example.
> This will break the static shard-to-task mapping implemented in the basic 
> version of the Kinesis consumer 
> (https://issues.apache.org/jira/browse/FLINK-3229). The static shard-to-task 
> mapping is done in a simple round-robin-like distribution which can be 
> locally determined at each Flink consumer task (Flink Kafka consumer does 
> this too).
> To handle Kinesis resharding, we will need some way to let the Flink consumer 
> tasks coordinate which shards they are currently handling, and allow a task to 
> ask the coordinator for a shard reassignment when it encounters a closed shard 
> at runtime (shards are closed by Kinesis when they are merged or split).
> We