[jira] [Commented] (FLINK-4280) New Flink-specific option to set starting position of Kafka consumer without respecting external offsets in ZK / Broker

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15601014#comment-15601014
 ] 

ASF GitHub Bot commented on FLINK-4280:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/2509
  
Rebased on recent Kafka consumer changes, fixed failing Kafka 0.10 
exactly-once tests, and added integration tests 
(`testStartFromEarliestOffsets`, `testStartFromLatestOffsets`, and 
`testStartFromGroupOffsets`) for the new explicit startup modes.

However, I'm running into Kafka consumer config errors when running 
`testStartFromEarliestOffsets` against versions 0.9 and 0.10. I'm still 
investigating the issue; for now, `testStartFromEarliestOffsets` is 
deliberately commented out in the 0.9 and 0.10 IT tests to allow some early 
reviews.


> New Flink-specific option to set starting position of Kafka consumer without 
> respecting external offsets in ZK / Broker
> ---
>
> Key: FLINK-4280
> URL: https://issues.apache.org/jira/browse/FLINK-4280
> Project: Flink
>  Issue Type: New Feature
>  Components: Kafka Connector
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
> Fix For: 1.2.0
>
>
> Currently, to start reading from the "earliest" and "latest" position in 
> topics for the Flink Kafka consumer, users set the Kafka config 
> {{auto.offset.reset}} in the provided properties configuration.
> However, the way this config actually works might be a bit misleading if 
> users were trying to find a way to "read topics from a starting position". 
> The way the {{auto.offset.reset}} config works in the Flink Kafka consumer 
> resembles Kafka's original intent for the setting: first, existing external 
> offsets committed to ZK / the brokers will be checked; only if none exist 
> is {{auto.offset.reset}} respected.
> I propose to add Flink-specific ways to define the starting position, without 
> taking into account the external offsets. The original behaviour (reference 
> external offsets first) can become a user option, so that it can be 
> retained for Kafka users who need to interoperate with existing non-Flink 
> Kafka consumer applications.
> How users will interact with the Flink Kafka consumer after this is added, 
> with a newly introduced {{flink.starting-position}} config:
> {code}
> Properties props = new Properties();
> props.setProperty("flink.starting-position", "earliest/latest");
> props.setProperty("auto.offset.reset", "..."); // this will be ignored (log a 
> warning)
> props.setProperty("group.id", "...") // this won't have effect on the 
> starting position anymore (may still be used in external offset committing)
> ...
> {code}
> Or, reference external offsets in ZK / broker:
> {code}
> Properties props = new Properties();
> props.setProperty("flink.starting-position", "external-offsets");
> props.setProperty("auto.offset.reset", "earliest/latest"); // default will be 
> latest
> props.setProperty("group.id", "..."); // will be used to lookup external 
> offsets in ZK / broker on startup
> ...
> {code}
> One thing we would need to decide is the default value for 
> {{flink.starting-position}}.
> Two merits I see in adding this:
> 1. This matches the way users generally interpret "read from a starting 
> position". As the Flink Kafka connector is essentially a "high-level" Kafka 
> consumer for Flink users, I think it is reasonable to add Flink-specific 
> functionality that users will find useful, even though it wasn't supported 
> in Kafka's original consumer design.
> 2. By adding this, the idea that "the Kafka offset store (ZK / brokers) is 
> used only to expose progress to the outside world, and not used to manipulate 
> how Kafka topics are read in Flink (unless users opt to do so)" is even more 
> definite and solid. There was some discussion in this PR 
> (https://github.com/apache/flink/pull/1690, FLINK-3398) on this aspect. I 
> think adding this further decouples Flink's internal offset checkpointing 
> from Kafka's external offset store.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2509: [FLINK-4280][kafka-connector] Explicit start position con...

2016-10-23 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/2509
  
Rebased on recent Kafka consumer changes, fixed failing Kafka 0.10 
exactly-once tests, and added integration tests 
(`testStartFromEarliestOffsets`, `testStartFromLatestOffsets`, and 
`testStartFromGroupOffsets`) for the new explicit startup modes.

However, I'm running into Kafka consumer config errors when running 
`testStartFromEarliestOffsets` against versions 0.9 and 0.10. I'm still 
investigating the issue; for now, `testStartFromEarliestOffsets` is 
deliberately commented out in the 0.9 and 0.10 IT tests to allow some early 
reviews.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4883) Prevent UDFs implementations through Scala singleton objects

2016-10-23 Thread Renkai Ge (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15600886#comment-15600886
 ] 

Renkai Ge commented on FLINK-4883:
--

I want to work on this issue.

> Prevent UDFs implementations through Scala singleton objects
> 
>
> Key: FLINK-4883
> URL: https://issues.apache.org/jira/browse/FLINK-4883
> Project: Flink
>  Issue Type: Bug
>Reporter: Stefan Richter
>
> Currently, users can create and use UDFs in Scala like this:
> {code}
> object FlatMapper extends RichCoFlatMapFunction[Long, String, (Long, String)] 
> {
> ...
> }
> {code}
> However, this causes problems: the UDF is now a singleton that Flink may 
> share across several operator instances, which leads to job failures. We 
> should detect and prevent the use of singleton UDFs.
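One possible detection strategy (a sketch, not the actual Flink check) relies on the fact that a Scala `object` compiles to a class with a public static final `MODULE$` field of its own type; `FlatMapper$` below is a hand-written stand-in for what scalac emits:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

// Sketch: detect a Scala singleton object by its MODULE$ field convention.
class ScalaSingletonCheck {
    static boolean isScalaSingleton(Class<?> clazz) {
        try {
            Field module = clazz.getDeclaredField("MODULE$");
            // A Scala object holds a static MODULE$ field typed as itself.
            return Modifier.isStatic(module.getModifiers())
                    && module.getType() == clazz;
        } catch (NoSuchFieldException e) {
            return false;
        }
    }
}

// Stand-in mimicking the bytecode scalac emits for `object FlatMapper`.
class FlatMapper$ {
    public static final FlatMapper$ MODULE$ = new FlatMapper$();
}
```

A check like this could run when the UDF is registered and reject singleton instances up front.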





[jira] [Commented] (FLINK-4469) Add support for user defined table function in Table API & SQL

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15600782#comment-15600782
 ] 

ASF GitHub Bot commented on FLINK-4469:
---

Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2653
  
Hi @fhueske @twalthr, do you have any thoughts on this pull request?


> Add support for user defined table function in Table API & SQL
> --
>
> Key: FLINK-4469
> URL: https://issues.apache.org/jira/browse/FLINK-4469
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
>
> Normal user-defined functions, such as concat(), take in a single input row 
> and output a single output row. In contrast, table-generating functions 
> transform a single input row into multiple output rows. They are very 
> useful in some cases, such as looking up HBase by rowkey and returning one 
> or more rows.
> A user-defined table function should:
> 1. inherit from the UDTF class with a specific generic type T
> 2. define one or more eval methods.
> NOTE: 
> 1. The eval method must be public and non-static.
> 2. eval should always return java.lang.Iterable or scala.collection.Iterable 
> with the generic type T.
> 3. The generic type T is the row type returned by the table function. Because 
> of Java type erasure, we can’t extract T from the Iterable.
> 4. The eval method can be overloaded. Flink will choose the best-matching 
> eval method to call according to parameter types and count.
> {code}
> public class Word {
>   public String word;
>   public Integer length;
>
>   public Word(String word, Integer length) {
>     this.word = word;
>     this.length = length;
>   }
> }
>
> public class SplitStringUDTF extends UDTF<Word> {
>   public Iterable<Word> eval(String str) {
>     if (str == null) {
>       return new ArrayList<>();
>     } else {
>       List<Word> list = new ArrayList<>();
>       for (String s : str.split(",")) {
>         list.add(new Word(s, s.length()));
>       }
>       return list;
>     }
>   }
> }
> // in SQL
> tableEnv.registerFunction("split", new SplitStringUDTF())
> tableEnv.sql("SELECT a, b, t.* FROM MyTable CROSS APPLY split(c) AS t(w,l)")
> // in Java Table API
> tableEnv.registerFunction("split", new SplitStringUDTF())
> // rename split table columns to “w” and “l”
> table.crossApply("split(c)", "w, l")  
>  .select("a, b, w, l")
> // without renaming, we will use the original field names in the POJO/case/...
> table.crossApply("split(c)")
>  .select("a, b, word, length")
> // in Scala Table API
> val split = new SplitStringUDTF()
> table.crossApply(split('c), 'w, 'l)
>  .select('a, 'b, 'w, 'l)
> // outerApply for outer join to a UDTF
> table.outerApply(split('c))
>  .select('a, 'b, 'word, 'length)
> {code}
> Here we introduce the CROSS/OUTER APPLY keywords, as used in SQL Server, to 
> join table functions. We can discuss the API in the comments. 
> Maybe the {{UDTF}} class should be replaced by {{TableFunction}} or something 
> else; since we have introduced {{ScalarFunction}} for custom scalar 
> functions, we need to stay consistent. However, I prefer {{UDTF}} over 
> {{TableFunction}}, as the former is more SQL-like and the latter may be 
> confused with DataStream functions. 
> **This issue is blocked by CALCITE-1309, so we need to wait for Calcite to 
> fix this and make a release.**
> See [1] for more information about UDTF design.
> [1] 
> https://docs.google.com/document/d/15iVc1781dxYWm3loVQlESYvMAxEzbbuVFPZWBYuY1Ek/edit#





[GitHub] flink issue #2653: [FLINK-4469] [table] Add support for user defined table f...

2016-10-23 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2653
  
Hi @fhueske @twalthr, do you have any thoughts on this pull request?




[jira] [Commented] (FLINK-4879) class KafkaTableSource should be public just like KafkaTableSink

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15600715#comment-15600715
 ] 

ASF GitHub Bot commented on FLINK-4879:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/2678
  
+1 this looks fine to me, no problem with making it public.


> class KafkaTableSource should be public just like KafkaTableSink
> 
>
> Key: FLINK-4879
> URL: https://issues.apache.org/jira/browse/FLINK-4879
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector, Table API & SQL
>Affects Versions: 1.1.1, 1.1.3
>Reporter: yuemeng
>Priority: Minor
> Fix For: 1.1.4
>
> Attachments: 0001-class-KafkaTableSource-should-be-public.patch
>
>
> *class KafkaTableSource should be public just like KafkaTableSink; by 
> default, its access modifier is package-private, and we can't access it from 
> outside its package*, for example:
>  {code}
> def createKafkaTableSource(
>     topic: String,
>     properties: Properties,
>     deserializationSchema: DeserializationSchema[Row],
>     fieldsNames: Array[String],
>     typeInfo: Array[TypeInformation[_]]): KafkaTableSource = {
>   if (deserializationSchema != null) {
>     new Kafka09TableSource(topic, properties, deserializationSchema,
>       fieldsNames, typeInfo)
>   } else {
>     new Kafka09JsonTableSource(topic, properties, fieldsNames, typeInfo)
>   }
> }
> {code}
> Because the KafkaTableSource class is package-private, we can't declare 
> this function's result type as KafkaTableSource; we must give the specific 
> subtype.
> If some other Kafka source extends KafkaTableSource and we are not sure 
> which subclass of KafkaTableSource should be used, how can we specify the 
> type?





[GitHub] flink issue #2678: [FLINK-4879] [Kafka-Connector] class KafkaTableSource sho...

2016-10-23 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/2678
  
+1 this looks fine to me, no problem with making it public.




[jira] [Commented] (FLINK-4879) class KafkaTableSource should be public just like KafkaTableSink

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15600680#comment-15600680
 ] 

ASF GitHub Bot commented on FLINK-4879:
---

Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2678
  
+1 to merge this. I think it's fine to make `KafkaTableSource` public, just 
like `FlinkKafkaConsumerBase`.







[GitHub] flink issue #2678: [FLINK-4879] [Kafka-Connector] class KafkaTableSource sho...

2016-10-23 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2678
  
+1 to merge this. I think it's fine to make `KafkaTableSource` public, just 
like `FlinkKafkaConsumerBase`.




[jira] [Commented] (FLINK-4879) class KafkaTableSource should be public just like KafkaTableSink

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15600634#comment-15600634
 ] 

ASF GitHub Bot commented on FLINK-4879:
---

Github user hzyuemeng1 commented on the issue:

https://github.com/apache/flink/pull/2678
  
@wuchong, can you help me find someone to merge this PR? Thanks.







[GitHub] flink issue #2678: [FLINK-4879] [Kafka-Connector] class KafkaTableSource sho...

2016-10-23 Thread hzyuemeng1
Github user hzyuemeng1 commented on the issue:

https://github.com/apache/flink/pull/2678
  
@wuchong, can you help me find someone to merge this PR? Thanks.




[jira] [Commented] (FLINK-4879) class KafkaTableSource should be public just like KafkaTableSink

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15600615#comment-15600615
 ] 

ASF GitHub Bot commented on FLINK-4879:
---

Github user hzyuemeng1 commented on the issue:

https://github.com/apache/flink/pull/2678
  
@fhueske @rmetzger, thanks all for reviewing this PR. I am very interested in 
Table SQL; can anyone merge this PR?







[GitHub] flink issue #2678: [FLINK-4879] [Kafka-Connector] class KafkaTableSource sho...

2016-10-23 Thread hzyuemeng1
Github user hzyuemeng1 commented on the issue:

https://github.com/apache/flink/pull/2678
  
@fhueske @rmetzger, thanks all for reviewing this PR. I am very interested in 
Table SQL; can anyone merge this PR?




[jira] [Commented] (FLINK-4800) Introduce the TimestampedFileInputSplit for Continuous File Processing

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15600258#comment-15600258
 ] 

ASF GitHub Bot commented on FLINK-4800:
---

Github user kl0u commented on the issue:

https://github.com/apache/flink/pull/2618
  
Hi @mxm. I already have a rebased version of this PR. I am just not 
committing it because all the commits are squashed and it will mess up the 
review. Let me know when you want me to push it.


> Introduce the TimestampedFileInputSplit for Continuous File Processing
> --
>
> Key: FLINK-4800
> URL: https://issues.apache.org/jira/browse/FLINK-4800
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming Connectors
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Minor
>
> Introduce the TimestampedFileInputSplit, which extends the class 
> FileInputSplit and also contains:
> i) the modification time of the file the split belongs to, and
> ii) when checkpointing, the position the reader is currently reading from.
> The latter will be useful for rescaling. With this addition, the 
> ContinuousFileMonitoringFunction sends TimestampedFileInputSplits 
> to the Readers, and the Readers' state now contains only 
> TimestampedFileInputSplits.
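In outline, the described split type could look like the following. This is a simplified sketch; the field and method names are assumptions for illustration, not Flink's actual API:

```java
// Sketch: a file split enriched with the file's modification time and the
// reader's current position (the latter captured when checkpointing).
class TimestampedFileInputSplit implements Comparable<TimestampedFileInputSplit> {
    final String path;
    final long modificationTime; // mod time of the file this split belongs to
    long readPosition;           // where the reader currently is, for rescaling

    TimestampedFileInputSplit(String path, long modificationTime) {
        this.path = path;
        this.modificationTime = modificationTime;
        this.readPosition = 0L;
    }

    // Ordering by modification time lets splits of older files be read first.
    @Override
    public int compareTo(TimestampedFileInputSplit other) {
        return Long.compare(this.modificationTime, other.modificationTime);
    }
}
```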





[jira] [Updated] (FLINK-4800) Introduce the TimestampedFileInputSplit for Continuous File Processing

2016-10-23 Thread Kostas Kloudas (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-4800:
--
Summary: Introduce the TimestampedFileInputSplit for Continuous File 
Processing  (was: Introduce the TimestampedFileInputSplit in the Continuous 
File Processing)

> Introduce the TimestampedFileInputSplit for Continuous File Processing
> --
>
> Key: FLINK-4800
> URL: https://issues.apache.org/jira/browse/FLINK-4800
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming Connectors
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Minor
>





[jira] [Updated] (FLINK-4800) Introduce the TimestampedFileInputSplit in the Continuous File Processing

2016-10-23 Thread Kostas Kloudas (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-4800:
--
Summary: Introduce the TimestampedFileInputSplit in the Continuous File 
Processing  (was: Refactor the ContinuousFileMonitoringFunction code and the 
related tests.)

> Introduce the TimestampedFileInputSplit in the Continuous File Processing
> -
>
> Key: FLINK-4800
> URL: https://issues.apache.org/jira/browse/FLINK-4800
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming Connectors
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Minor
>





[jira] [Updated] (FLINK-4800) Refactor the ContinuousFileMonitoringFunction code and the related tests.

2016-10-23 Thread Kostas Kloudas (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-4800:
--
Description: 
Introduce the TimestampedFileInputSplit, which extends the class FileInputSplit 
and also contains:
i) the modification time of the file the split belongs to, and
ii) when checkpointing, the position the reader is currently reading from.

The latter will be useful for rescaling. With this addition, the 
ContinuousFileMonitoringFunction sends TimestampedFileInputSplits 
to the Readers, and the Readers' state now contains only 
TimestampedFileInputSplits.



  was:
Currently the code in the FileMonitoringFunction can be simplified.
The same holds for the test code. 
This is the goal of this issue.


> Refactor the ContinuousFileMonitoringFunction code and the related tests.
> -
>
> Key: FLINK-4800
> URL: https://issues.apache.org/jira/browse/FLINK-4800
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming Connectors
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Minor
>





[jira] [Commented] (FLINK-4866) Make Trigger.clear() Abstract to Enforce Implementation

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15600203#comment-15600203
 ] 

ASF GitHub Bot commented on FLINK-4866:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/2672
  
Thanks for your work. 👍 I'll be merging this tomorrow.


> Make Trigger.clear() Abstract to Enforce Implementation
> ---
>
> Key: FLINK-4866
> URL: https://issues.apache.org/jira/browse/FLINK-4866
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Jark Wu
>
> If the method is not abstract, implementors of custom triggers will not 
> realise that overriding it could be necessary, and they will likely not 
> clean up their state/timers properly.
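The effect of making the method abstract can be illustrated with a simplified stand-in. The names below loosely mirror Flink's Trigger but are not the real API; the point is that an abstract `clear()` forces every implementor to state cleanup logic explicitly instead of silently inheriting a no-op:

```java
// Simplified stand-in for a trigger base class (hypothetical names).
abstract class Trigger<W> {
    abstract void onElement(W window);

    // Abstract rather than a no-op default: every custom trigger must decide
    // how to clean up its state and timers when the window is purged.
    abstract void clear(W window);
}

class CountingTrigger extends Trigger<String> {
    int count = 0;

    @Override
    void onElement(String window) { count++; }

    @Override
    void clear(String window) { count = 0; } // cleanup is now explicit
}
```

With a no-op default, `CountingTrigger` would compile without `clear()` and leak its count; the abstract version turns the omission into a compile error.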





[GitHub] flink issue #2672: [FLINK-4866] [streaming] Make Trigger.clear() Abstract to...

2016-10-23 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/2672
  
Thanks for your work. 👍  I'll be merging this tomorrow.




[jira] [Commented] (FLINK-4789) Avoid Kafka partition discovery on restore and share consumer instance for discovery and data consumption

2016-10-23 Thread Robert Metzger (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15599874#comment-15599874
 ] 

Robert Metzger commented on FLINK-4789:
---

This comment is wrong and needs to be updated: 
https://github.com/apache/flink/blob/master/flink-streaming-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumer09.java#L63

> Avoid Kafka partition discovery on restore and share consumer instance for 
> discovery and data consumption
> -
>
> Key: FLINK-4789
> URL: https://issues.apache.org/jira/browse/FLINK-4789
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>
> As part of FLINK-4379, the Kafka partition discovery was moved from the 
> Constructor to the open() method. This is in general a good change, as 
> outlined in FLINK-4155, as it allows us to detect new partitions and topics 
> based on regex on the fly.
> However, currently the partitions are discovered on restore as well. 
> Also, {{FlinkKafkaConsumer09.getKafkaPartitions()}} creates a separate 
> {{KafkaConsumer}} just for partition discovery.
> Since the partition discovery happens on the task managers now, we can use 
> the regular {{KafkaConsumer}} instance, which is used for data retrieval as 
> well.
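The two proposed changes (skip discovery on restore; reuse the data-retrieval consumer for discovery) can be sketched in plain Java. The interface below is a hypothetical stand-in for the consumer, not the Kafka client API:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for the partition-listing part of a Kafka consumer.
interface PartitionDiscoverer {
    List<Integer> partitionsFor(String topic);
}

class FetcherSetup {
    static List<Integer> partitionsToSubscribe(
            PartitionDiscoverer consumer, String topic, List<Integer> restoredPartitions) {
        // On restore, trust the checkpointed partitions; no re-discovery.
        if (restoredPartitions != null) {
            return restoredPartitions;
        }
        // Otherwise discover using the same consumer instance that will later
        // be used for data retrieval, instead of creating a second one.
        return consumer.partitionsFor(topic);
    }
}
```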





[GitHub] flink pull request #2684: Add EvaluateDataSet Operation for LabeledVector - ...

2016-10-23 Thread tfournier314
GitHub user tfournier314 opened a pull request:

https://github.com/apache/flink/pull/2684

Add EvaluateDataSet Operation for LabeledVector - This closes #4865

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tfournier314/flink AddEvaluateDataSetOperation

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2684.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2684


commit 5dfa434c083c4cdc7ba0e9a2ae2aaff121ef18a7
Author: Thomas FOURNIER 
Date:   2016-10-23T14:04:14Z

Add EvaluateDataSet Operation for LabeledVector - This closes #4865






[GitHub] flink pull request #2668: Add EvaluateDataSetOperation for LabeledVector. Th...

2016-10-23 Thread tfournier314
Github user tfournier314 closed the pull request at:

https://github.com/apache/flink/pull/2668




[GitHub] flink issue #2668: Add EvaluateDataSetOperation for LabeledVector. This clos...

2016-10-23 Thread tfournier314
Github user tfournier314 commented on the issue:

https://github.com/apache/flink/pull/2668
  
I'm using IntelliJ and I can't remove the Scala doc and use Javadoc instead.




[jira] [Closed] (FLINK-4847) Let RpcEndpoint.start/shutDown throw exceptions

2016-10-23 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-4847.

Resolution: Fixed

Fixed via f63f972932e152822603d7a8bb8bf74bf3f05441

> Let RpcEndpoint.start/shutDown throw exceptions
> ---
>
> Key: FLINK-4847
> URL: https://issues.apache.org/jira/browse/FLINK-4847
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>
> The {{RpcEndpoint.start}} and {{RpcEndpoint.shutDown}} methods should be 
> allowed to throw exceptions if things go wrong. Otherwise, exceptions will be 
> given to a callback which handles them later, even though we know that we can 
> fail the components right away (as is the case for the {{TaskExecutor}}, 
> for example).
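
The change described above could look like the following minimal sketch. This is hypothetical illustration only, not actual Flink code; the `Endpoint` and `FailingEndpoint` names are invented for the example. It contrasts a lifecycle method that declares a checked exception, letting the caller fail the component immediately, with the callback-based handling the issue wants to avoid.

```java
// Hypothetical sketch: lifecycle methods that may throw, so callers
// can fail fast instead of deferring errors to an async callback.
interface Endpoint {
    void start() throws Exception;    // caller reacts immediately on failure
    void shutDown() throws Exception;
}

class FailingEndpoint implements Endpoint {
    @Override
    public void start() throws Exception {
        throw new Exception("startup failed");
    }

    @Override
    public void shutDown() {}
}

public class Demo {
    public static void main(String[] args) {
        Endpoint endpoint = new FailingEndpoint();
        try {
            endpoint.start();
        } catch (Exception e) {
            // Fail the component right away instead of handing the
            // exception to a callback that runs later.
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```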



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4851) Add FatalErrorHandler and MetricRegistry to ResourceManager

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15599397#comment-15599397
 ] 

ASF GitHub Bot commented on FLINK-4851:
---

Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2655


> Add FatalErrorHandler and MetricRegistry to ResourceManager
> ---
>
> Key: FLINK-4851
> URL: https://issues.apache.org/jira/browse/FLINK-4851
> Project: Flink
>  Issue Type: Sub-task
>  Components: ResourceManager
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>
> The {{ResourceManager}} currently does not contain a {{FatalErrorHandler}}. 
> In order to harmonize the fatal error handling across all components, we 
> should introduce a {{FatalErrorHandler}}, which handles fatal errors. 
> Additionally, we should also give a {{MetricRegistry}} to the 
> {{ResourceManager}} so that it can report metrics.
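
As a rough illustration of the harmonized error handling the issue asks for, the sketch below shows a component that receives a shared error-handler interface at construction time. All names (`FatalErrorHandler`, `ResourceManagerLike`) are hypothetical stand-ins, not the actual Flink classes.

```java
// Hypothetical sketch: one FatalErrorHandler interface shared by all
// components, so fatal errors flow through a single, uniform path.
interface FatalErrorHandler {
    void onFatalError(Throwable cause);
}

class ResourceManagerLike {
    private final FatalErrorHandler errorHandler;

    ResourceManagerLike(FatalErrorHandler errorHandler) {
        this.errorHandler = errorHandler;
    }

    void doWork() {
        try {
            throw new IllegalStateException("lost leadership");
        } catch (Throwable t) {
            // The component does not decide how to react; the shared
            // handler does (e.g. log and shut down the process).
            errorHandler.onFatalError(t);
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        ResourceManagerLike rm = new ResourceManagerLike(
            cause -> System.out.println("fatal: " + cause.getMessage()));
        rm.doWork();
    }
}
```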





[GitHub] flink issue #2651: [FLINK-4847] Let RpcEndpoint.start/shutDown throw excepti...

2016-10-23 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2651
  
Merged




[jira] [Commented] (FLINK-4847) Let RpcEndpoint.start/shutDown throw exceptions

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15599400#comment-15599400
 ] 

ASF GitHub Bot commented on FLINK-4847:
---

Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2651


> Let RpcEndpoint.start/shutDown throw exceptions
> ---
>
> Key: FLINK-4847
> URL: https://issues.apache.org/jira/browse/FLINK-4847
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>
> The {{RpcEndpoint.start}} and {{RpcEndpoint.shutDown}} methods should be 
> allowed to throw exceptions if things go wrong. Otherwise, exceptions will be 
> given to a callback which handles them later, even though we know that we can 
> fail the components right away (as is the case for the {{TaskExecutor}}, 
> for example).





[jira] [Commented] (FLINK-4851) Add FatalErrorHandler and MetricRegistry to ResourceManager

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15599396#comment-15599396
 ] 

ASF GitHub Bot commented on FLINK-4851:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2655
  
Merged


> Add FatalErrorHandler and MetricRegistry to ResourceManager
> ---
>
> Key: FLINK-4851
> URL: https://issues.apache.org/jira/browse/FLINK-4851
> Project: Flink
>  Issue Type: Sub-task
>  Components: ResourceManager
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>
> The {{ResourceManager}} currently does not contain a {{FatalErrorHandler}}. 
> In order to harmonize the fatal error handling across all components, we 
> should introduce a {{FatalErrorHandler}}, which handles fatal errors. 
> Additionally, we should also give a {{MetricRegistry}} to the 
> {{ResourceManager}} so that it can report metrics.





[jira] [Commented] (FLINK-4847) Let RpcEndpoint.start/shutDown throw exceptions

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15599399#comment-15599399
 ] 

ASF GitHub Bot commented on FLINK-4847:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2651
  
Merged


> Let RpcEndpoint.start/shutDown throw exceptions
> ---
>
> Key: FLINK-4847
> URL: https://issues.apache.org/jira/browse/FLINK-4847
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>
> The {{RpcEndpoint.start}} and {{RpcEndpoint.shutDown}} methods should be 
> allowed to throw exceptions if things go wrong. Otherwise, exceptions will be 
> given to a callback which handles them later, even though we know that we can 
> fail the components right away (as is the case for the {{TaskExecutor}}, 
> for example).





[jira] [Closed] (FLINK-4851) Add FatalErrorHandler and MetricRegistry to ResourceManager

2016-10-23 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-4851.

Resolution: Fixed

Fixed via f38bf4484df3e022f389357059ca984c5e3f76a6

> Add FatalErrorHandler and MetricRegistry to ResourceManager
> ---
>
> Key: FLINK-4851
> URL: https://issues.apache.org/jira/browse/FLINK-4851
> Project: Flink
>  Issue Type: Sub-task
>  Components: ResourceManager
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>
> The {{ResourceManager}} currently does not contain a {{FatalErrorHandler}}. 
> In order to harmonize the fatal error handling across all components, we 
> should introduce a {{FatalErrorHandler}}, which handles fatal errors. 
> Additionally, we should also give a {{MetricRegistry}} to the 
> {{ResourceManager}} so that it can report metrics.





[GitHub] flink pull request #2651: [FLINK-4847] Let RpcEndpoint.start/shutDown throw ...

2016-10-23 Thread tillrohrmann
Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2651




[GitHub] flink pull request #2655: [FLINK-4851] [rm] Introduce FatalErrorHandler and ...

2016-10-23 Thread tillrohrmann
Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2655




[jira] [Closed] (FLINK-4853) Clean up JobManager registration at the ResourceManager

2016-10-23 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-4853.

Resolution: Fixed

Fixed via 0e965ae3e00454575a8920fce7b97842fdb9e3a9

> Clean up JobManager registration at the ResourceManager
> ---
>
> Key: FLINK-4853
> URL: https://issues.apache.org/jira/browse/FLINK-4853
> Project: Flink
>  Issue Type: Sub-task
>  Components: ResourceManager
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>
> The current {{JobManager}} registration at the {{ResourceManager}} blocks 
> threads in the {{RpcService.execute}} pool. This is not ideal and can be 
> avoided by not waiting on a {{Future}} in this call.
> I propose to encapsulate the leader id retrieval operation in a distinct 
> service so that it can be separated from the {{ResourceManager}}. This will 
> reduce the complexity of the {{ResourceManager}} and make the individual 
> components easier to test.
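
The "not waiting on a {{Future}}" idea above can be sketched with plain `CompletableFuture` composition: instead of calling a blocking `get()` on a pool thread, a continuation is registered that runs when the result arrives. This is an illustrative sketch of the pattern only, not the actual Flink registration code.

```java
import java.util.concurrent.CompletableFuture;

public class Demo {
    public static void main(String[] args) throws Exception {
        // Stand-in for the leader-id retrieval; resolves asynchronously.
        CompletableFuture<String> leaderId =
            CompletableFuture.supplyAsync(() -> "leader-1");

        // Non-blocking: the registration step is chained as a
        // continuation, so no pool thread sits idle waiting on get().
        CompletableFuture<String> registration =
            leaderId.thenApply(id -> "registered under " + id);

        // Only the demo's main thread blocks here, to print the result.
        System.out.println(registration.get());
    }
}
```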





[GitHub] flink issue #2655: [FLINK-4851] [rm] Introduce FatalErrorHandler and MetricR...

2016-10-23 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2655
  
Merged




[jira] [Commented] (FLINK-4853) Clean up JobManager registration at the ResourceManager

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15599393#comment-15599393
 ] 

ASF GitHub Bot commented on FLINK-4853:
---

Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2657


> Clean up JobManager registration at the ResourceManager
> ---
>
> Key: FLINK-4853
> URL: https://issues.apache.org/jira/browse/FLINK-4853
> Project: Flink
>  Issue Type: Sub-task
>  Components: ResourceManager
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>
> The current {{JobManager}} registration at the {{ResourceManager}} blocks 
> threads in the {{RpcService.execute}} pool. This is not ideal and can be 
> avoided by not waiting on a {{Future}} in this call.
> I propose to encapsulate the leader id retrieval operation in a distinct 
> service so that it can be separated from the {{ResourceManager}}. This will 
> reduce the complexity of the {{ResourceManager}} and make the individual 
> components easier to test.





[jira] [Commented] (FLINK-4853) Clean up JobManager registration at the ResourceManager

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15599392#comment-15599392
 ] 

ASF GitHub Bot commented on FLINK-4853:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2657
  
Merged.


> Clean up JobManager registration at the ResourceManager
> ---
>
> Key: FLINK-4853
> URL: https://issues.apache.org/jira/browse/FLINK-4853
> Project: Flink
>  Issue Type: Sub-task
>  Components: ResourceManager
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>
> The current {{JobManager}} registration at the {{ResourceManager}} blocks 
> threads in the {{RpcService.execute}} pool. This is not ideal and can be 
> avoided by not waiting on a {{Future}} in this call.
> I propose to encapsulate the leader id retrieval operation in a distinct 
> service so that it can be separated from the {{ResourceManager}}. This will 
> reduce the complexity of the {{ResourceManager}} and make the individual 
> components easier to test.





[jira] [Commented] (FLINK-4871) Add memory calculation for TaskManagers and forward MetricRegistry

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15599388#comment-15599388
 ] 

ASF GitHub Bot commented on FLINK-4871:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2669
  
Merged


> Add memory calculation for TaskManagers and forward MetricRegistry
> --
>
> Key: FLINK-4871
> URL: https://issues.apache.org/jira/browse/FLINK-4871
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
>
> Add automatic memory calculation for {{TaskManagers}} executed by the 
> {{MiniCluster}}. 
> Additionally, change the {{TaskManagerRunner}} to accept a given 
> {{MetricRegistry}} so that the one instantiated by the {{MiniCluster}} is 
> used.
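
An automatic memory calculation of the kind described could, in the simplest form, derive managed memory from a fraction of the available heap. The sketch below is purely illustrative; the fraction, sizes, and method name are assumptions for the example, not Flink's actual formula.

```java
// Hypothetical sketch: compute managed memory for a task manager as a
// fixed fraction of the free heap (all numbers are illustrative).
public class Demo {
    static long managedMemory(long freeHeapBytes, double fraction) {
        return (long) (freeHeapBytes * fraction);
    }

    public static void main(String[] args) {
        long heap = 1024L * 1024 * 1024;  // pretend 1 GiB is available
        System.out.println(managedMemory(heap, 0.7));
    }
}
```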





[jira] [Closed] (FLINK-4871) Add memory calculation for TaskManagers and forward MetricRegistry

2016-10-23 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-4871.

Resolution: Fixed

Fixed via 80b6c2a015fa12bd73dfe8ccdb3930de0396e623

> Add memory calculation for TaskManagers and forward MetricRegistry
> --
>
> Key: FLINK-4871
> URL: https://issues.apache.org/jira/browse/FLINK-4871
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
>
> Add automatic memory calculation for {{TaskManagers}} executed by the 
> {{MiniCluster}}. 
> Additionally, change the {{TaskManagerRunner}} to accept a given 
> {{MetricRegistry}} so that the one instantiated by the {{MiniCluster}} is 
> used.





[GitHub] flink pull request #2657: [FLINK-4853] [rm] Harden job manager registration ...

2016-10-23 Thread tillrohrmann
Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2657




[GitHub] flink issue #2657: [FLINK-4853] [rm] Harden job manager registration at the ...

2016-10-23 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2657
  
Merged.




[jira] [Commented] (FLINK-4871) Add memory calculation for TaskManagers and forward MetricRegistry

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15599389#comment-15599389
 ] 

ASF GitHub Bot commented on FLINK-4871:
---

Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2669


> Add memory calculation for TaskManagers and forward MetricRegistry
> --
>
> Key: FLINK-4871
> URL: https://issues.apache.org/jira/browse/FLINK-4871
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
>
> Add automatic memory calculation for {{TaskManagers}} executed by the 
> {{MiniCluster}}. 
> Additionally, change the {{TaskManagerRunner}} to accept a given 
> {{MetricRegistry}} so that the one instantiated by the {{MiniCluster}} is 
> used.





[GitHub] flink pull request #2669: [FLINK-4871] [mini cluster] Add memory calculation...

2016-10-23 Thread tillrohrmann
Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2669




[GitHub] flink issue #2669: [FLINK-4871] [mini cluster] Add memory calculation for Ta...

2016-10-23 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2669
  
Merged




[jira] [Closed] (FLINK-4882) Cleanup throws exception clause in HighAvailabilityServices

2016-10-23 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-4882.

Resolution: Fixed

Fixed via 0de5689632bd1f8eac6e436959d80d31df9e5ef9

> Cleanup throws exception clause in HighAvailabilityServices
> ---
>
> Key: FLINK-4882
> URL: https://issues.apache.org/jira/browse/FLINK-4882
> Project: Flink
>  Issue Type: Improvement
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
>  Labels: flip-6
>
> The {{HighAvailabilityServices}} interface defines methods with throws 
> exception clauses which are not really needed. We should remove them to 
> correct the interface.
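
A before/after sketch of the cleanup: a method that never actually throws should not force every caller to handle a checked exception. The interface names below are hypothetical, not the real Flink declarations.

```java
// Hypothetical sketch: removing a needless "throws Exception" clause.
interface BeforeCleanup {
    String getLeaderPath() throws Exception;  // callers forced to catch
}

interface AfterCleanup {
    String getLeaderPath();  // no checked exception that never occurs
}

public class Demo implements AfterCleanup {
    @Override
    public String getLeaderPath() {
        return "/leader";
    }

    public static void main(String[] args) {
        // No try/catch needed at the call site anymore.
        System.out.println(new Demo().getLeaderPath());
    }
}
```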





[GitHub] flink issue #2679: [FLINK-4882] [flip-6] Remove exceptions from HighAvailabi...

2016-10-23 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2679
  
Merged




[jira] [Commented] (FLINK-4882) Cleanup throws exception clause in HighAvailabilityServices

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15599376#comment-15599376
 ] 

ASF GitHub Bot commented on FLINK-4882:
---

Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2679


> Cleanup throws exception clause in HighAvailabilityServices
> ---
>
> Key: FLINK-4882
> URL: https://issues.apache.org/jira/browse/FLINK-4882
> Project: Flink
>  Issue Type: Improvement
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
>  Labels: flip-6
>
> The {{HighAvailabilityServices}} interface defines methods with throws 
> exception clauses which are not really needed. We should remove them to 
> correct the interface.





[jira] [Commented] (FLINK-4882) Cleanup throws exception clause in HighAvailabilityServices

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15599375#comment-15599375
 ] 

ASF GitHub Bot commented on FLINK-4882:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2679
  
Merged


> Cleanup throws exception clause in HighAvailabilityServices
> ---
>
> Key: FLINK-4882
> URL: https://issues.apache.org/jira/browse/FLINK-4882
> Project: Flink
>  Issue Type: Improvement
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
>  Labels: flip-6
>
> The {{HighAvailabilityServices}} interface defines methods with throws 
> exception clauses which are not really needed. We should remove them to 
> correct the interface.





[GitHub] flink pull request #2679: [FLINK-4882] [flip-6] Remove exceptions from HighA...

2016-10-23 Thread tillrohrmann
Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2679




[jira] [Commented] (FLINK-3035) Redis as State Backend

2016-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15599239#comment-15599239
 ] 

ASF GitHub Bot commented on FLINK-3035:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/1617
  
@fhueske I'm in favour of closing both, yes.


> Redis as State Backend
> --
>
> Key: FLINK-3035
> URL: https://issues.apache.org/jira/browse/FLINK-3035
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming
>Reporter: Matthias J. Sax
>Assignee: Subhobrata Dey
>Priority: Minor
>
> Add Redis as a state backend for distributed snapshots.





[GitHub] flink issue #1617: [FLINK-3035] Redis as State Backend

2016-10-23 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/1617
  
@fhueske I'm in favour of closing both, yes.

