[jira] [Commented] (FLINK-4152) TaskManager registration exponential backoff doesn't work

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390557#comment-15390557
 ] 

ASF GitHub Bot commented on FLINK-4152:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2257
  
The problem that maven downloads the flink-runtime_2.10-tests.jar from the 
snapshot repository instead of using the one from the local repository still 
remains.
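
A possible local workaround (untested here) is to run Maven with snapshot updates suppressed, so that it prefers the artifact already installed in the local repository:

```
mvn clean verify --no-snapshot-updates
```

(`-nsu` for short; this only helps if the tests jar is actually installed locally.)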


> TaskManager registration exponential backoff doesn't work
> -
>
> Key: FLINK-4152
> URL: https://issues.apache.org/jira/browse/FLINK-4152
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination, TaskManager, YARN Client
>Reporter: Robert Metzger
>Assignee: Till Rohrmann
> Attachments: logs.tgz
>
>
> While testing Flink 1.1 I've found that the TaskManagers are logging many 
> messages when registering at the JobManager.
> This is the log file: 
> https://gist.github.com/rmetzger/0cebe0419cdef4507b1e8a42e33ef294
> It's logging more than 3000 messages in less than a minute. I don't think that 
> this is the expected behavior.





[GitHub] flink issue #2257: [FLINK-4152] Allow re-registration of TMs at resource man...

2016-07-22 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2257
  
The problem that maven downloads the flink-runtime_2.10-tests.jar from the 
snapshot repository instead of using the one from the local repository still 
remains.




[jira] [Commented] (FLINK-4035) Bump Kafka producer in Kafka sink to Kafka 0.10.0.0

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390505#comment-15390505
 ] 

ASF GitHub Bot commented on FLINK-4035:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/2231
  
@radekg Thank you for the quick fix. I hope to find time over the weekend 
to test + review this; if not, then early next week :)


> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---
>
> Key: FLINK-4035
> URL: https://issues.apache.org/jira/browse/FLINK-4035
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.0.3
>Reporter: Elias Levy
>Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer. 
> Published messages now include timestamps, and compressed messages now include 
> relative offsets. As it is now, brokers must decompress publisher-compressed 
> messages, assign offsets to them, and recompress them, which is wasteful and 
> makes it less likely that compression will be used at all.





[GitHub] flink issue #2231: [FLINK-4035] Bump Kafka producer in Kafka sink to Kafka 0...

2016-07-22 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/2231
  
@radekg Thank you for the quick fix. I hope to find time over the weekend 
to test + review this; if not, then early next week :)




[jira] [Commented] (FLINK-4228) RocksDB semi-async snapshot to S3AFileSystem fails

2016-07-22 Thread Cliff Resnick (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390493#comment-15390493
 ] 

Cliff Resnick commented on FLINK-4228:
--

I added a pull request for this. I included the flink-yarn recursive staging 
upload.

https://github.com/apache/flink/pull/2288

> RocksDB semi-async snapshot to S3AFileSystem fails
> --
>
> Key: FLINK-4228
> URL: https://issues.apache.org/jira/browse/FLINK-4228
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>
> Using the {{RocksDBStateBackend}} with semi-async snapshots (current default) 
> leads to an Exception when uploading the snapshot to S3 when using the 
> {{S3AFileSystem}}.
> {code}
> AsynchronousException{com.amazonaws.AmazonClientException: Unable to 
> calculate MD5 hash: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)}
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointThread.run(StreamTask.java:870)
> Caused by: com.amazonaws.AmazonClientException: Unable to calculate MD5 hash: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1298)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:108)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:100)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.upload(UploadMonitor.java:192)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:150)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:50)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.(FileInputStream.java:138)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1294)
>   ... 9 more
> {code}
> Running with S3NFileSystem, the error does not occur. The problem might be 
> due to {{HDFSCopyToLocal}} assuming that sub-folders are going to be created 
> automatically. We might need to manually create folders and copy only actual 
> files for {{S3AFileSystem}}. More investigation is required.
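
A rough sketch of the manual approach described above, using Hadoop's {{FileSystem}} API (the integration point in {{HDFSCopyToLocal}} and the method shape are assumptions, not taken from the patch):

{code}
// Sketch: copy a checkpoint directory file-by-file, creating target
// directories explicitly instead of relying on implicit sub-folder creation.
public static void copyRecursively(FileSystem srcFs, Path src,
    FileSystem dstFs, Path dst) throws IOException {

  if (srcFs.getFileStatus(src).isDirectory()) {
    dstFs.mkdirs(dst); // create the "folder" explicitly for S3A
    for (FileStatus child : srcFs.listStatus(src)) {
      copyRecursively(srcFs, child.getPath(), dstFs,
          new Path(dst, child.getPath().getName()));
    }
  } else {
    try (InputStream in = srcFs.open(src);
         OutputStream out = dstFs.create(dst, false)) {
      // Hadoop's IOUtils; buffer size 4096, streams closed by try-with-resources
      org.apache.hadoop.io.IOUtils.copyBytes(in, out, 4096, false);
    }
  }
}
{code}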





[GitHub] flink pull request #2288: Feature/s3 a fix

2016-07-22 Thread cresny
GitHub user cresny opened a pull request:

https://github.com/apache/flink/pull/2288

Feature/s3 a fix

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cresny/flink feature/s3A-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2288.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2288


commit 830cda4fe5684585fa8c9b32990ea3e1ae0bf8f3
Author: Clifford Resnick 
Date:   2016-07-23T02:17:02Z

fix for FLINK-4228

commit 7b1ae8aedd1d9f9c3da2a046e688d561eb9e3920
Author: Clifford Resnick 
Date:   2016-07-23T02:38:13Z

add recursive staging upload to support S3A on YARN

commit 283c1e564a4b90eb7470e00210dede9764502ab7
Author: Clifford Resnick 
Date:   2016-07-23T02:53:46Z

Revert "fix for FLINK-4228"

This reverts commit 830cda4fe5684585fa8c9b32990ea3e1ae0bf8f3.

commit c268b433ff0e53374039c2f20fdda0f808bde5d9
Author: Clifford Resnick 
Date:   2016-07-23T02:54:32Z

Revert "add recursive staging upload to support S3A on YARN"

This reverts commit 7b1ae8aedd1d9f9c3da2a046e688d561eb9e3920.

commit 26df83f1e876ab32c8a03bd37027ad512d704809
Author: Clifford Resnick 
Date:   2016-07-23T03:04:38Z

fix for FLINK-4228

commit 081526cfee4981eb033218c730927c0b352096e0
Author: Clifford Resnick 
Date:   2016-07-23T03:05:07Z

add recursive staging upload to support S3A on YARN






[jira] [Issue Comment Deleted] (FLINK-3480) Add hash-based strategy for ReduceFunction

2016-07-22 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan updated FLINK-3480:
--
Comment: was deleted

(was: Today I bumped into a performance discrepancy where a forwarding ship 
strategy can hurt performance, since we can only do a sorted reduce, whereas with 
a hash partitioning we can use the new hash-combiner.

What would be the spilling strategy for a hash-reducer, and would it look much 
different from using the hash-combiner followed by the sort-reducer?)

> Add hash-based strategy for ReduceFunction
> --
>
> Key: FLINK-3480
> URL: https://issues.apache.org/jira/browse/FLINK-3480
> Project: Flink
>  Issue Type: Sub-task
>  Components: Local Runtime
>Reporter: Fabian Hueske
>
> This issue is related to FLINK-3477. 
> While FLINK-3477 proposes to add hash-based combine strategy for 
> ReduceFunction, this issue aims to add a hash-based strategy for the final 
> aggregation.
> This will again need a special hash-table aggregation which allows for 
> in-place updates and append updates. However, it also needs to support 
> spilling to disk in case of too-tight memory budgets.





[GitHub] flink issue #2244: [FLINK-3874] Add a Kafka TableSink with JSON serializatio...

2016-07-22 Thread mushketyk
Github user mushketyk commented on the issue:

https://github.com/apache/flink/pull/2244
  
Fixed build.




[jira] [Commented] (FLINK-3874) Add a Kafka TableSink with JSON serialization

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390291#comment-15390291
 ] 

ASF GitHub Bot commented on FLINK-3874:
---

Github user mushketyk commented on the issue:

https://github.com/apache/flink/pull/2244
  
Fixed build.


> Add a Kafka TableSink with JSON serialization
> -
>
> Key: FLINK-3874
> URL: https://issues.apache.org/jira/browse/FLINK-3874
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Ivan Mushketyk
>Priority: Minor
>
> Add a TableSink that writes JSON serialized data to Kafka.





[jira] [Commented] (FLINK-4035) Bump Kafka producer in Kafka sink to Kafka 0.10.0.0

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390225#comment-15390225
 ] 

ASF GitHub Bot commented on FLINK-4035:
---

Github user radekg commented on the issue:

https://github.com/apache/flink/pull/2231
  
Travis is going to run.


> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---
>
> Key: FLINK-4035
> URL: https://issues.apache.org/jira/browse/FLINK-4035
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.0.3
>Reporter: Elias Levy
>Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer. 
> Published messages now include timestamps, and compressed messages now include 
> relative offsets. As it is now, brokers must decompress publisher-compressed 
> messages, assign offsets to them, and recompress them, which is wasteful and 
> makes it less likely that compression will be used at all.





[GitHub] flink issue #2231: [FLINK-4035] Bump Kafka producer in Kafka sink to Kafka 0...

2016-07-22 Thread radekg
Github user radekg commented on the issue:

https://github.com/apache/flink/pull/2231
  
Travis is going to run.




[jira] [Commented] (FLINK-4035) Bump Kafka producer in Kafka sink to Kafka 0.10.0.0

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390177#comment-15390177
 ] 

ASF GitHub Bot commented on FLINK-4035:
---

Github user radekg commented on the issue:

https://github.com/apache/flink/pull/2231
  
Thanks, running `verify` again.


> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---
>
> Key: FLINK-4035
> URL: https://issues.apache.org/jira/browse/FLINK-4035
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.0.3
>Reporter: Elias Levy
>Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer. 
> Published messages now include timestamps, and compressed messages now include 
> relative offsets. As it is now, brokers must decompress publisher-compressed 
> messages, assign offsets to them, and recompress them, which is wasteful and 
> makes it less likely that compression will be used at all.





[GitHub] flink issue #2231: [FLINK-4035] Bump Kafka producer in Kafka sink to Kafka 0...

2016-07-22 Thread radekg
Github user radekg commented on the issue:

https://github.com/apache/flink/pull/2231
  
Thanks, running `verify` again.




[jira] [Commented] (FLINK-3480) Add hash-based strategy for ReduceFunction

2016-07-22 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390164#comment-15390164
 ] 

Greg Hogan commented on FLINK-3480:
---

Today I bumped into a performance discrepancy where a forwarding ship 
strategy can hurt performance, since we can only do a sorted reduce, whereas with 
a hash partitioning we can use the new hash-combiner.

What would be the spilling strategy for a hash-reducer, and would it look much 
different from using the hash-combiner followed by the sort-reducer?

> Add hash-based strategy for ReduceFunction
> --
>
> Key: FLINK-3480
> URL: https://issues.apache.org/jira/browse/FLINK-3480
> Project: Flink
>  Issue Type: Sub-task
>  Components: Local Runtime
>Reporter: Fabian Hueske
>
> This issue is related to FLINK-3477. 
> While FLINK-3477 proposes to add hash-based combine strategy for 
> ReduceFunction, this issue aims to add a hash-based strategy for the final 
> aggregation.
> This will again need a special hash-table aggregation which allows for 
> in-place updates and append updates. However, it also needs to support 
> spilling to disk in case of too-tight memory budgets.
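
A rough sketch of the update path described above (all helper names are hypothetical; this is not Flink's implementation):

{code}
// Keep one accumulated value per key; update it in place (or append a
// new entry), and spill when the memory budget is exhausted.
Map<String, Long> table = new HashMap<>();
for (Tuple2<String, Long> record : input) {
  Long previous = table.get(record.f0);
  if (previous == null) {
    table.put(record.f0, record.f1);             // append update
  } else {
    table.put(record.f0, previous + record.f1);  // in-place update
  }
  if (memoryBudgetExhausted()) {                 // hypothetical check
    spillPartitionToDisk(table);                 // hypothetical spill hook
  }
}
{code}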





[jira] [Commented] (FLINK-3904) GlobalConfiguration doesn't ensure config has been loaded

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390158#comment-15390158
 ] 

ASF GitHub Bot commented on FLINK-3904:
---

Github user mxm commented on the issue:

https://github.com/apache/flink/pull/2123
  
Brief summary of changes:

- fail if config couldn't be loaded
- make GlobalConfiguration non-global and remove the static SINGLETON
- remove duplicate api methods
- remove undocumented XML loading feature
- generate yaml conf in tests instead of xml conf
- only load one config file instead of all xml or yaml files 
(flink-conf.yaml)
- fix test cases
- add test cases


> GlobalConfiguration doesn't ensure config has been loaded
> -
>
> Key: FLINK-3904
> URL: https://issues.apache.org/jira/browse/FLINK-3904
> Project: Flink
>  Issue Type: Improvement
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Minor
> Fix For: 1.1.0
>
>
> By default, {{GlobalConfiguration}} returns an empty Configuration. Instead, 
> a call to {{get()}} should fail if the config hasn't been loaded explicitly.





[GitHub] flink issue #2123: [FLINK-3904] GlobalConfiguration doesn't ensure config ha...

2016-07-22 Thread mxm
Github user mxm commented on the issue:

https://github.com/apache/flink/pull/2123
  
Brief summary of changes:

- fail if config couldn't be loaded
- make GlobalConfiguration non-global and remove the static SINGLETON
- remove duplicate api methods
- remove undocumented XML loading feature
- generate yaml conf in tests instead of xml conf
- only load one config file instead of all xml or yaml files 
(flink-conf.yaml)
- fix test cases
- add test cases




[jira] [Commented] (FLINK-3904) GlobalConfiguration doesn't ensure config has been loaded

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390153#comment-15390153
 ] 

ASF GitHub Bot commented on FLINK-3904:
---

Github user mxm commented on the issue:

https://github.com/apache/flink/pull/2123
  
That's odd. I was working on exactly these changes and have just pushed 
them (without seeing your comment before).


> GlobalConfiguration doesn't ensure config has been loaded
> -
>
> Key: FLINK-3904
> URL: https://issues.apache.org/jira/browse/FLINK-3904
> Project: Flink
>  Issue Type: Improvement
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Minor
> Fix For: 1.1.0
>
>
> By default, {{GlobalConfiguration}} returns an empty Configuration. Instead, 
> a call to {{get()}} should fail if the config hasn't been loaded explicitly.





[GitHub] flink issue #2123: [FLINK-3904] GlobalConfiguration doesn't ensure config ha...

2016-07-22 Thread mxm
Github user mxm commented on the issue:

https://github.com/apache/flink/pull/2123
  
That's odd. I was working on exactly these changes and have just pushed 
them (without seeing your comment before).




[jira] [Commented] (FLINK-4251) Add possibility for the RMQ Streaming Sink to customize the queue

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390142#comment-15390142
 ] 

ASF GitHub Bot commented on FLINK-4251:
---

Github user PhilippGrulich commented on the issue:

https://github.com/apache/flink/pull/2281
  
@kkamkou Thank you for your review.
I also changed the access levels for the connection and schema fields, so 
the RMQSink is more similar to the RMQSource.


> Add possibility for the RMQ Streaming Sink to customize the queue
> 
>
> Key: FLINK-4251
> URL: https://issues.apache.org/jira/browse/FLINK-4251
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming Connectors
>Reporter: Philipp Grulich
>Priority: Minor
>
> This patch adds the possibility for the user of the RabbitMQ
> Streaming Sink to customize the queue which is used. 
> This adopts the behavior of [FLINK-4025] for the sink.
> The commit doesn't change the actual behaviour but makes it
> possible for users to override the `setupQueue`
> method and customize their implementation. This was only possible for the 
> RMQSource before. The Sink and the Source now both offer the same 
> functionality, so this should increase usability. 
> [FLINK-4025] = https://issues.apache.org/jira/browse/FLINK-4025
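
For illustration, a user override might then look like this sketch (the durable-queue arguments are just an example; access to the {{channel}} and {{queueName}} fields assumes the relaxed access levels discussed in the review):

{code}
public class DurableRMQSink extends RMQSink<String> {

  public DurableRMQSink(RMQConnectionConfig config, String queueName,
      SerializationSchema<String> schema) {
    super(config, queueName, schema);
  }

  @Override
  protected void setupQueue() throws IOException {
    // declare the queue as durable instead of using the default declaration
    channel.queueDeclare(queueName, true, false, false, null);
  }
}
{code}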





[GitHub] flink issue #2281: RMQ Sink: Possibility to customize queue config [FLINK-42...

2016-07-22 Thread PhilippGrulich
Github user PhilippGrulich commented on the issue:

https://github.com/apache/flink/pull/2281
  
@kkamkou Thank you for your review.
I also changed the access levels for the connection and schema fields, so 
the RMQSink is more similar to the RMQSource.




[jira] [Created] (FLINK-4259) Unclosed FSDataOutputStream in FileCache#copy()

2016-07-22 Thread Ted Yu (JIRA)
Ted Yu created FLINK-4259:
-

 Summary: Unclosed FSDataOutputStream in FileCache#copy()
 Key: FLINK-4259
 URL: https://issues.apache.org/jira/browse/FLINK-4259
 Project: Flink
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


{code}
try {
  FSDataOutputStream lfsOutput = tFS.create(targetPath, false);
  FSDataInputStream fsInput = sFS.open(sourcePath);
  IOUtils.copyBytes(fsInput, lfsOutput);
{code}
The FSDataOutputStream lfsOutput should be closed upon exit.
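
A sketch of the fix using try-with-resources, so both streams are closed even when the copy fails:

{code}
try (FSDataOutputStream lfsOutput = tFS.create(targetPath, false);
     FSDataInputStream fsInput = sFS.open(sourcePath)) {
  // both streams are released on exit, including on exceptions
  IOUtils.copyBytes(fsInput, lfsOutput);
}
{code}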





[jira] [Commented] (FLINK-4035) Bump Kafka producer in Kafka sink to Kafka 0.10.0.0

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390130#comment-15390130
 ] 

ASF GitHub Bot commented on FLINK-4035:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/2231
  
The errors are due to some of the changes to `AbstractFetcher` in 
https://github.com/apache/flink/commit/41f58182289226850b23c61a32f01223485d4775.
 Some of the Kafka 0.9 connector code has changed accordingly, so you'll 
probably need to reflect those changes in the Kafka 0.10 code too.


> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---
>
> Key: FLINK-4035
> URL: https://issues.apache.org/jira/browse/FLINK-4035
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.0.3
>Reporter: Elias Levy
>Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer. 
> Published messages now include timestamps, and compressed messages now include 
> relative offsets. As it is now, brokers must decompress publisher-compressed 
> messages, assign offsets to them, and recompress them, which is wasteful and 
> makes it less likely that compression will be used at all.





[GitHub] flink issue #2231: [FLINK-4035] Bump Kafka producer in Kafka sink to Kafka 0...

2016-07-22 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/2231
  
The errors are due to some of the changes to `AbstractFetcher` in 
https://github.com/apache/flink/commit/41f58182289226850b23c61a32f01223485d4775.
 Some of the Kafka 0.9 connector code has changed accordingly, so you'll 
probably need to reflect those changes in the Kafka 0.10 code too.




[jira] [Created] (FLINK-4258) Potential null pointer dereference in SavepointCoordinator#onFullyAcknowledgedCheckpoint

2016-07-22 Thread Ted Yu (JIRA)
Ted Yu created FLINK-4258:
-

 Summary: Potential null pointer dereference in 
SavepointCoordinator#onFullyAcknowledgedCheckpoint
 Key: FLINK-4258
 URL: https://issues.apache.org/jira/browse/FLINK-4258
 Project: Flink
  Issue Type: Bug
Reporter: Ted Yu


{code}
if (promise == null) {
  LOG.info("Pending savepoint with ID " + checkpoint.getCheckpointID() + "  
has been " +
  "removed before receiving acknowledgment.");
}

// Sanity check
if (promise.isCompleted()) {
  throw new IllegalStateException("Savepoint promise completed");
{code}
Looks like a return statement is missing in the first if block above.
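
A sketch of the presumed fix:

{code}
if (promise == null) {
  LOG.info("Pending savepoint with ID " + checkpoint.getCheckpointID() +
      " has been removed before receiving acknowledgment.");
  return; // without this, the sanity check below dereferences null
}

// Sanity check
if (promise.isCompleted()) {
  throw new IllegalStateException("Savepoint promise completed");
}
{code}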





[GitHub] flink issue #2231: [FLINK-4035] Bump Kafka producer in Kafka sink to Kafka 0...

2016-07-22 Thread radekg
Github user radekg commented on the issue:

https://github.com/apache/flink/pull/2231
  
Merged with `upstream/master` and I'm getting this when running `mvn clean 
verify`:

```
[INFO] -
[ERROR] COMPILATION ERROR :
[INFO] -
[ERROR] 
/Users/rad/dev/twc/flink/flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010Fetcher.java:[30,69]
 cannot find symbol
  symbol:   class DefaultKafkaMetricAccumulator
  location: package 
org.apache.flink.streaming.connectors.kafka.internals.metrics
[ERROR] 
/Users/rad/dev/twc/flink/flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010Fetcher.java:[105,17]
 constructor AbstractFetcher in class 
org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher 
cannot be applied to given types;
  required: 
org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext,java.util.List,org.apache.flink.util.SerializedValue,org.apache.flink.util.SerializedValue,org.apache.flink.streaming.api.operators.StreamingRuntimeContext,boolean
  found: 
org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext,java.util.List,org.apache.flink.util.SerializedValue,org.apache.flink.util.SerializedValue,org.apache.flink.streaming.api.operators.StreamingRuntimeContext
  reason: actual and formal argument lists differ in length
[ERROR] 
/Users/rad/dev/twc/flink/flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010Fetcher.java:[192,49]
 cannot find symbol
  symbol:   class DefaultKafkaMetricAccumulator
  location: class 
org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher
[ERROR] 
/Users/rad/dev/twc/flink/flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010Fetcher.java:[193,65]
 cannot find symbol
  symbol:   variable DefaultKafkaMetricAccumulator
  location: class 
org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher
[INFO] 4 errors
[INFO] -
[INFO] 

[INFO] Reactor Summary:
[INFO]
[INFO] force-shading .. SUCCESS [  
1.210 s]
[INFO] flink .. SUCCESS [  
4.416 s]
[INFO] flink-annotations .. SUCCESS [  
1.551 s]
[INFO] flink-shaded-hadoop  SUCCESS [  
0.162 s]
[INFO] flink-shaded-hadoop2 ... SUCCESS [  
6.451 s]
[INFO] flink-shaded-include-yarn-tests  SUCCESS [  
7.929 s]
[INFO] flink-shaded-curator ... SUCCESS [  
0.110 s]
[INFO] flink-shaded-curator-recipes ... SUCCESS [  
0.986 s]
[INFO] flink-shaded-curator-test .. SUCCESS [  
0.200 s]
[INFO] flink-test-utils-parent  SUCCESS [  
0.111 s]
[INFO] flink-test-utils-junit . SUCCESS [  
2.417 s]
[INFO] flink-core . SUCCESS [ 
37.825 s]
[INFO] flink-java . SUCCESS [ 
23.620 s]
[INFO] flink-runtime .. SUCCESS [06:25 
min]
[INFO] flink-optimizer  SUCCESS [ 
12.698 s]
[INFO] flink-clients .. SUCCESS [  
9.795 s]
[INFO] flink-streaming-java ... SUCCESS [ 
43.709 s]
[INFO] flink-test-utils ... SUCCESS [  
9.363 s]
[INFO] flink-scala  SUCCESS [ 
37.639 s]
[INFO] flink-runtime-web .. SUCCESS [ 
19.749 s]
[INFO] flink-examples . SUCCESS [  
1.006 s]
[INFO] flink-examples-batch ... SUCCESS [ 
14.276 s]
[INFO] flink-contrib .. SUCCESS [  
0.104 s]
[INFO] flink-statebackend-rocksdb . SUCCESS [ 
10.938 s]
[INFO] flink-tests  SUCCESS [07:34 
min]
[INFO] 

[jira] [Commented] (FLINK-4035) Bump Kafka producer in Kafka sink to Kafka 0.10.0.0

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390117#comment-15390117
 ] 

ASF GitHub Bot commented on FLINK-4035:
---

Github user radekg commented on the issue:

https://github.com/apache/flink/pull/2231
  
Merged with `upstream/master` and I'm getting this when running `mvn clean 
verify`:

```
[INFO] -
[ERROR] COMPILATION ERROR :
[INFO] -
[ERROR] 
/Users/rad/dev/twc/flink/flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010Fetcher.java:[30,69]
 cannot find symbol
  symbol:   class DefaultKafkaMetricAccumulator
  location: package 
org.apache.flink.streaming.connectors.kafka.internals.metrics
[ERROR] 
/Users/rad/dev/twc/flink/flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010Fetcher.java:[105,17]
 constructor AbstractFetcher in class 
org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher 
cannot be applied to given types;
  required: 
org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext,java.util.List,org.apache.flink.util.SerializedValue,org.apache.flink.util.SerializedValue,org.apache.flink.streaming.api.operators.StreamingRuntimeContext,boolean
  found: 
org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext,java.util.List,org.apache.flink.util.SerializedValue,org.apache.flink.util.SerializedValue,org.apache.flink.streaming.api.operators.StreamingRuntimeContext
  reason: actual and formal argument lists differ in length
[ERROR] 
/Users/rad/dev/twc/flink/flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010Fetcher.java:[192,49]
 cannot find symbol
  symbol:   class DefaultKafkaMetricAccumulator
  location: class 
org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher
[ERROR] 
/Users/rad/dev/twc/flink/flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010Fetcher.java:[193,65]
 cannot find symbol
  symbol:   variable DefaultKafkaMetricAccumulator
  location: class 
org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher
[INFO] 4 errors
[INFO] -
[INFO] 

[INFO] Reactor Summary:
[INFO]
[INFO] force-shading .. SUCCESS [  
1.210 s]
[INFO] flink .. SUCCESS [  
4.416 s]
[INFO] flink-annotations .. SUCCESS [  
1.551 s]
[INFO] flink-shaded-hadoop  SUCCESS [  
0.162 s]
[INFO] flink-shaded-hadoop2 ... SUCCESS [  
6.451 s]
[INFO] flink-shaded-include-yarn-tests  SUCCESS [  
7.929 s]
[INFO] flink-shaded-curator ... SUCCESS [  
0.110 s]
[INFO] flink-shaded-curator-recipes ... SUCCESS [  
0.986 s]
[INFO] flink-shaded-curator-test .. SUCCESS [  
0.200 s]
[INFO] flink-test-utils-parent  SUCCESS [  
0.111 s]
[INFO] flink-test-utils-junit . SUCCESS [  
2.417 s]
[INFO] flink-core . SUCCESS [ 
37.825 s]
[INFO] flink-java . SUCCESS [ 
23.620 s]
[INFO] flink-runtime .. SUCCESS [06:25 
min]
[INFO] flink-optimizer  SUCCESS [ 
12.698 s]
[INFO] flink-clients .. SUCCESS [  
9.795 s]
[INFO] flink-streaming-java ... SUCCESS [ 
43.709 s]
[INFO] flink-test-utils ... SUCCESS [  
9.363 s]
[INFO] flink-scala  SUCCESS [ 
37.639 s]
[INFO] flink-runtime-web .. SUCCESS [ 
19.749 s]
[INFO] flink-examples . SUCCESS [  
1.006 s]
[INFO] flink-examples-batch ... SUCCESS [ 
14.276 s]
[INFO] flink-contrib 

[jira] [Closed] (FLINK-2929) Recovery of jobs on cluster restarts

2016-07-22 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi closed FLINK-2929.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

Fixed in FLINK-4166.

> Recovery of jobs on cluster restarts
> 
>
> Key: FLINK-2929
> URL: https://issues.apache.org/jira/browse/FLINK-2929
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 0.10.0
>Reporter: Ufuk Celebi
> Fix For: 1.1.0
>
>
> Recovery information is stored in ZooKeeper under a static root like 
> {{/flink}}. In case of a cluster restart without canceling running jobs, old 
> jobs will be recovered from ZooKeeper.
> This can be confusing or helpful depending on the use case.
> I suspect that the confusing case will be more common.
> We can change the default cluster startup (e.g. a new YARN session or a new 
> ./start-cluster call) to purge all existing data in ZooKeeper and add a flag 
> to not do this if needed.
> [~trohrm...@apache.org], [~aljoscha], [~StephanEwen] what's your opinion?





[jira] [Closed] (FLINK-3411) Failed recovery can lead to removal of HA state

2016-07-22 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi closed FLINK-3411.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

Fixed in FLINK-2733 and FLINK-4201.

> Failed recovery can lead to removal of HA state
> ---
>
> Key: FLINK-3411
> URL: https://issues.apache.org/jira/browse/FLINK-3411
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Reporter: Ufuk Celebi
>Priority: Critical
> Fix For: 1.1.0
>
>
> When a job is recovered by a standby job manager and the recovery of the 
> checkpoint state or job fails, the job might be eventually removed by the job 
> manager after all retries are exhausted. This leads to the removal of the 
> job/checkpoint state in ZooKeeper and the state backend, making it impossible 
> to ever recover the job again.
> We should never exhaust job retries in the HA case.





[jira] [Closed] (FLINK-2227) .yarn-properties file is not cleaned up

2016-07-22 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi closed FLINK-2227.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

Fixed by the recent YARN client refactorings.

> .yarn-properties file is not cleaned up
> ---
>
> Key: FLINK-2227
> URL: https://issues.apache.org/jira/browse/FLINK-2227
> Project: Flink
>  Issue Type: Improvement
>  Components: YARN Client
>Affects Versions: 0.10.0
>Reporter: Ufuk Celebi
>Priority: Minor
> Fix For: 1.1.0
>
>
> The .yarn-properties file is created in ./conf after a YARN session has been 
> started. The client uses this file to submit to the YARN container running 
> the JobManager.
> This file is not cleaned up when the YARN cluster is stopped.
> In the unlikely (?) sequence of 1) start a yarn session, 2) stop the yarn 
> session, 3) start a cluster, 4) try to submit a job to this cluster, the 
> submission does not work.





[jira] [Commented] (FLINK-3904) GlobalConfiguration doesn't ensure config has been loaded

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390051#comment-15390051
 ] 

ASF GitHub Bot commented on FLINK-3904:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2123
  
Looks like a good fix for now.

I would eventually really like to get rid of the `GlobalConfiguration` 
singleton - it causes issues with embedding and testing, and encourages not 
thinking designs through cleanly.

In the end, the `GlobalConfiguration` would only be an XML / YAML loader 
that returns a `Configuration` object.
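
A minimal sketch of that end state (class shape, names, and error message are assumptions):

```java
public final class GlobalConfiguration {

    private GlobalConfiguration() {} // no singleton state, just a loader

    public static Configuration loadConfiguration(String configDir) {
        File yamlFile = new File(configDir, "flink-conf.yaml");
        if (!yamlFile.exists()) {
            // fail fast instead of silently returning an empty Configuration
            throw new IllegalConfigurationException(
                "The Flink config file '" + yamlFile + "' does not exist.");
        }
        return parseYamlFile(yamlFile); // YAML parsing elided
    }
}
```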


> GlobalConfiguration doesn't ensure config has been loaded
> -
>
> Key: FLINK-3904
> URL: https://issues.apache.org/jira/browse/FLINK-3904
> Project: Flink
>  Issue Type: Improvement
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Minor
> Fix For: 1.1.0
>
>
> By default, {{GlobalConfiguration}} returns an empty Configuration. Instead, 
> a call to {{get()}} should fail if the config hasn't been loaded explicitly.





[GitHub] flink issue #2123: [FLINK-3904] GlobalConfiguration doesn't ensure config ha...

2016-07-22 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2123
  
Looks like a good fix for now.

I would eventually really like to get rid of the `GlobalConfiguration` 
singleton - it causes issues with embedding and testing, and encourages not 
thinking designs through cleanly.

In the end, the `GlobalConfiguration` would only be an XML / YAML loader 
that returns a `Configuration` object.




[jira] [Commented] (FLINK-4245) Metric naming improvements

2016-07-22 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390044#comment-15390044
 ] 

Stephan Ewen commented on FLINK-4245:
-

Addendum: We also need to include the operator-id (each operator has one in the 
stream config) in the tag map; that should make it fully unique.

> Metric naming improvements
> --
>
> Key: FLINK-4245
> URL: https://issues.apache.org/jira/browse/FLINK-4245
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Reporter: Stephan Ewen
>
> A metric currently has two parts to it:
>   - The name of that particular metric
>   - The "scope" (or namespace), defined by the group that contains the metric.
> A metric group actually always implicitly has a map of naming "tags", like:
>   - taskmanager_host : 
>   - taskmanager_id : 
>   - task_name : "map() -> filter()"
> We derive the scope from that map, following the defined scope formats.
> For JMX (and some users that use JMX), it would be natural to expose that map 
> of tags. Some users reconstruct that map by parsing the metric scope. In JMX, we 
> can expose a metric like:
>   - domain: "taskmanager.task.operator.io"
>   - name: "numRecordsIn"
>   - tags: { "hostname" -> "localhost", "operator_name" -> "map() at 
> X.java:123", ... }
> For many other reporters, the formatted scope makes a lot of sense, since 
> they think only in terms of (scope, metric-name).
> We may even have the formatted scope in JMX as well (in the domain), if we 
> want to go that route. 
> [~jgrier] and [~Zentol] - what do you think about that?
> [~mdaxini] Does that match your use of the metrics?
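
For concreteness, the JMX side could look roughly like this sketch (domain and tags taken from the example above; the constructor throws MalformedObjectNameException):

{code}
Hashtable<String, String> tags = new Hashtable<>();
tags.put("hostname", "localhost");
// quote() escapes characters that are illegal in ObjectName values
tags.put("operator_name", ObjectName.quote("map() at X.java:123"));

// JMX clients can then query by tag instead of parsing a scope string
ObjectName name = new ObjectName("taskmanager.task.operator.io", tags);
{code}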





[jira] [Issue Comment Deleted] (FLINK-4245) Metric naming improvements

2016-07-22 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen updated FLINK-4245:

Comment: was deleted

(was: 1. As long as we don't have unique operator names you cannot have 
collision free operator metrics. Period. I am getting really tired of 
explaining this.
2. If you only want to change the naming for JMX, I suggest changing the title 
to "JMX naming improvements".
3. Your suggestion regarding the domain goes against JMX best practices. They 
should always start with "org.apache.flink".
4. Please provide a reasoning as to the domain changes.
5. Please provide a comparison as to how a operator and task metric would 
differ, regarding their domain, tags and ObjectName, based on the current 
respective default scope format.
6. In general, using what at one point were called "categories" as keys isn't a 
bad idea. Note however that this becomes inconsistent with user-defined groups, 
which is the reason we currently only use auto-generated keys.
7. Please provide the use-case regarding [~mdaxini]; I am curious as to what 
these changes are supposed to allow.

)

> Metric naming improvements
> --
>
> Key: FLINK-4245
> URL: https://issues.apache.org/jira/browse/FLINK-4245
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Reporter: Stephan Ewen
>
> A metric currently has two parts to it:
>   - The name of that particular metric
>   - The "scope" (or namespace), defined by the group that contains the metric.
> A metric group actually always implicitly has a map of naming "tags", like:
>   - taskmanager_host : 
>   - taskmanager_id : 
>   - task_name : "map() -> filter()"
> We derive the scope from that map, following the defined scope formats.
> For JMX (and some users that use JMX), it would be natural to expose that map 
> of tags. Some users reconstruct that map by parsing the metric scope. In JMX, we 
> can expose a metric like:
>   - domain: "taskmanager.task.operator.io"
>   - name: "numRecordsIn"
>   - tags: { "hostname" -> "localhost", "operator_name" -> "map() at 
> X.java:123", ... }
> For many other reporters, the formatted scope makes a lot of sense, since 
> they think only in terms of (scope, metric-name).
> We may even have the formatted scope in JMX as well (in the domain), if we 
> want to go that route. 
> [~jgrier] and [~Zentol] - what do you think about that?
> [~mdaxini] Does that match your use of the metrics?





[jira] [Commented] (FLINK-3006) TypeExtractor fails on custom type

2016-07-22 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390039#comment-15390039
 ] 

Stephan Ewen commented on FLINK-3006:
-

[~gyfora] Did you manage to reproduce the problem? I think we should otherwise 
close this until you run into the issue again (to prevent accumulating stale 
JIRA issues).

> TypeExtractor fails on custom type
> --
>
> Key: FLINK-3006
> URL: https://issues.apache.org/jira/browse/FLINK-3006
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Affects Versions: 0.10.0, 1.0.0
>Reporter: Gyula Fora
>
> I get a weird error when I try to execute my job on the cluster. Locally this 
> works fine, but running it from the command line fails during type extraction:
> input1.union(input2, input3).map(Either::Left).returns(eventOrLongType);
> The UserEvent type is a subclass of Tuple4 with 
> no extra fields. And the Either type is a regular POJO with 2 public nullable 
> fields and a default constructor.
> This fails when trying to extract the output type from the mapper, but I 
> wouldn't actually care about that because I am providing my custom type 
> implementation for this Either type.
> The error:
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error.
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:512)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:395)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:250)
>   at 
> org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:669)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:320)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:971)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1021)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: -1
>   at java.util.ArrayList.elementData(ArrayList.java:418)
>   at java.util.ArrayList.get(ArrayList.java:431)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoFromInputs(TypeExtractor.java:599)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:493)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.analyzePojo(TypeExtractor.java:1392)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.privateGetForClass(TypeExtractor.java:1273)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:560)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.privateCreateTypeInfo(TypeExtractor.java:389)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.getUnaryOperatorReturnType(TypeExtractor.java:273)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.getMapReturnTypes(TypeExtractor.java:110)
>   at 
> org.apache.flink.streaming.api.datastream.DataStream.map(DataStream.java:550)
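
One workaround sketch, given that the reporter already has a custom {{TypeInformation}}: a mapper implementing {{ResultTypeQueryable}} announces its output type up front, which would avoid analyzing the POJO at all (whether the extractor honors it at this exact call site is untested here; {{Either.Left}} as a static factory is assumed from the report):

{code}
public class ToLeft implements MapFunction<UserEvent, Either>,
    ResultTypeQueryable<Either> {

  private final TypeInformation<Either> typeInfo;

  public ToLeft(TypeInformation<Either> typeInfo) {
    this.typeInfo = typeInfo; // e.g. the eventOrLongType from the report
  }

  @Override
  public Either map(UserEvent value) {
    return Either.Left(value); // Either::Left as in the report
  }

  @Override
  public TypeInformation<Either> getProducedType() {
    return typeInfo;
  }
}
{code}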





[jira] [Closed] (FLINK-4213) Provide CombineHint in Gelly algorithms

2016-07-22 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan closed FLINK-4213.
-
Resolution: Implemented

Implemented in e2ef74ea5a854555f86aefbd8a6b1889ef188ff1

> Provide CombineHint in Gelly algorithms
> ---
>
> Key: FLINK-4213
> URL: https://issues.apache.org/jira/browse/FLINK-4213
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Affects Versions: 1.1.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>
> Many graph algorithms will see better {{reduce}} performance with the 
> hash-combine compared with the still default sort-combine, e.g. HITS and 
> LocalClusteringCoefficient.
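
For reference, selecting the combine strategy on a {{reduce}} looks like this (a sketch; {{SumDegrees}} is a hypothetical {{ReduceFunction}}, and {{CombineHint}} lives in {{org.apache.flink.api.common.operators.base.ReduceOperatorBase}}):

{code}
DataSet<Tuple2<Long, Long>> summed = vertexDegrees
    .groupBy(0)
    .reduce(new SumDegrees())          // a plain ReduceFunction
    .setCombineHint(CombineHint.HASH); // hash-combine instead of the default sort-combine
{code}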





[jira] [Closed] (FLINK-4217) Gelly drivers should read CSV values as strings

2016-07-22 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan closed FLINK-4217.
-
Resolution: Implemented

Implemented in b71ac354d19dccdd8bfa837b92f6bc814c9d29c6

> Gelly drivers should read CSV values as strings
> ---
>
> Key: FLINK-4217
> URL: https://issues.apache.org/jira/browse/FLINK-4217
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Affects Versions: 1.1.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> Gelly drivers ClusteringCoefficient, HITS, JaccardIndex, and TriangleListing 
> parse CSV files as {{LongValue}}. This works for anonymized data sets such as 
> SNAP but should be configurable as {{StringValue}} to handle the general case.





[jira] [Commented] (FLINK-4217) Gelly drivers should read CSV values as strings

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390019#comment-15390019
 ] 

ASF GitHub Bot commented on FLINK-4217:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2250


> Gelly drivers should read CSV values as strings
> ---
>
> Key: FLINK-4217
> URL: https://issues.apache.org/jira/browse/FLINK-4217
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Affects Versions: 1.1.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> Gelly drivers ClusteringCoefficient, HITS, JaccardIndex, and TriangleListing 
> parse CSV files as {{LongValue}}. This works for anonymized data sets such as 
> SNAP but should be configurable as {{StringValue}} to handle the general case.





[jira] [Commented] (FLINK-4213) Provide CombineHint in Gelly algorithms

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390020#comment-15390020
 ] 

ASF GitHub Bot commented on FLINK-4213:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2248


> Provide CombineHint in Gelly algorithms
> ---
>
> Key: FLINK-4213
> URL: https://issues.apache.org/jira/browse/FLINK-4213
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Affects Versions: 1.1.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>
> Many graph algorithms will see better {{reduce}} performance with the 
> hash-combine compared with the still default sort-combine, e.g. HITS and 
> LocalClusteringCoefficient.





[GitHub] flink pull request #2250: [FLINK-4217] [gelly] Gelly drivers should read CSV...

2016-07-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2250




[GitHub] flink pull request #2248: [FLINK-4213] [gelly] Provide CombineHint in Gelly ...

2016-07-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2248




[jira] [Assigned] (FLINK-3298) Streaming connector for ActiveMQ

2016-07-22 Thread Ivan Mushketyk (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mushketyk reassigned FLINK-3298:
-

Assignee: Ivan Mushketyk

> Streaming connector for ActiveMQ
> 
>
> Key: FLINK-3298
> URL: https://issues.apache.org/jira/browse/FLINK-3298
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors
>Reporter: Mohit Sethi
>Assignee: Ivan Mushketyk
>Priority: Minor
>






[jira] [Commented] (FLINK-3298) Streaming connector for ActiveMQ

2016-07-22 Thread Ivan Mushketyk (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389913#comment-15389913
 ] 

Ivan Mushketyk commented on FLINK-3298:
---

I would like to work on this.

> Streaming connector for ActiveMQ
> 
>
> Key: FLINK-3298
> URL: https://issues.apache.org/jira/browse/FLINK-3298
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors
>Reporter: Mohit Sethi
>Priority: Minor
>






[jira] [Commented] (FLINK-3866) StringArraySerializer claims type is immutable; shouldn't

2016-07-22 Thread Ivan Mushketyk (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389902#comment-15389902
 ] 

Ivan Mushketyk commented on FLINK-3866:
---

I'll fix this.

> StringArraySerializer claims type is immutable; shouldn't
> -
>
> Key: FLINK-3866
> URL: https://issues.apache.org/jira/browse/FLINK-3866
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.0.3
>Reporter: Tatu Saloranta
>Assignee: Ivan Mushketyk
>Priority: Minor
>
> Looking at default `TypeSerializer` instances I noticed what looks like a 
> minor flaw, unless I am missing something.
> Whereas all other array serializers indicate that type is not immutable 
> (since in Java, arrays are not immutable), `StringArraySerializer` has:
> ```
>   @Override
>   public boolean isImmutableType() {
>   return true;
>   }
> ```
> and I think it should instead return `false`. I could create a PR, but it seems 
> like a small enough thing that an issue report makes more sense.
> I tried looking for deps to see if there's a test for this, but couldn't find 
> one; otherwise I could submit a test fix.
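
For illustration, the reason this matters: a serializer that reports its type as immutable may skip defensive copies:

```
String[] original = {"a", "b"};
String[] alias = original; // no copy is made if the type counts as immutable
alias[0] = "mutated";
// original[0] is now "mutated" too: Java arrays are mutable,
// so isImmutableType() should return false.
```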





[jira] [Assigned] (FLINK-3866) StringArraySerializer claims type is immutable; shouldn't

2016-07-22 Thread Ivan Mushketyk (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mushketyk reassigned FLINK-3866:
-

Assignee: Ivan Mushketyk

> StringArraySerializer claims type is immutable; shouldn't
> -
>
> Key: FLINK-3866
> URL: https://issues.apache.org/jira/browse/FLINK-3866
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.0.3
>Reporter: Tatu Saloranta
>Assignee: Ivan Mushketyk
>Priority: Minor
>
> Looking at default `TypeSerializer` instances I noticed what looks like a 
> minor flaw, unless I am missing something.
> Whereas all other array serializers indicate that type is not immutable 
> (since in Java, arrays are not immutable), `StringArraySerializer` has:
> ```
>   @Override
>   public boolean isImmutableType() {
>   return true;
>   }
> ```
> and I think it should instead return `false`. I could create a PR, but it seems 
> like a small enough thing that an issue report makes more sense.
> I tried looking for deps to see if there's a test for this, but couldn't find 
> one; otherwise I could submit a test fix.





[jira] [Commented] (FLINK-4152) TaskManager registration exponential backoff doesn't work

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389887#comment-15389887
 ] 

ASF GitHub Bot commented on FLINK-4152:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2257
  
The latter problem apparently only occurred in my local branch because I 
set `akka.remote.log-remote-lifecycle-events=on` for debugging purposes. The 
former problem with the downloading of the flink-runtime test jar did not seem 
to occur here.

Let's see what the next Travis run says.


> TaskManager registration exponential backoff doesn't work
> -
>
> Key: FLINK-4152
> URL: https://issues.apache.org/jira/browse/FLINK-4152
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination, TaskManager, YARN Client
>Reporter: Robert Metzger
>Assignee: Till Rohrmann
> Attachments: logs.tgz
>
>
> While testing Flink 1.1 I've found that the TaskManagers are logging many 
> messages when registering at the JobManager.
> This is the log file: 
> https://gist.github.com/rmetzger/0cebe0419cdef4507b1e8a42e33ef294
> It's logging more than 3000 messages in less than a minute. I don't think that 
> this is the expected behavior.
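For context, a minimal sketch of the intended exponential backoff behavior (illustrative only; this is not the actual TaskManager registration code): each failed registration attempt should double the pause, capped at a maximum, which bounds the rate of registration messages.

```
// Illustrative sketch of exponential backoff with a cap (not Flink code).
import java.time.Duration;

public class BackoffSketch {
    static Duration nextPause(Duration current, Duration max) {
        Duration doubled = current.multipliedBy(2);
        return doubled.compareTo(max) > 0 ? max : doubled;
    }

    public static void main(String[] args) {
        Duration pause = Duration.ofMillis(500);
        Duration max = Duration.ofSeconds(30);
        for (int attempt = 1; attempt <= 8; attempt++) {
            System.out.printf("attempt %d: wait %d ms before retrying%n",
                attempt, pause.toMillis());
            pause = nextPause(pause, max);
        }
    }
}
```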



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2257: [FLINK-4152] Allow re-registration of TMs at resource man...

2016-07-22 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2257
  
The latter problem apparently only occurred in my local branch because I 
set `akka.remote.log-remote-lifecycle-events=on` for debugging purposes. The 
former problem with the downloading of the flink-runtime test jar did not seem 
to occur here.

Let's see what the next Travis run says.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-4257) Handle delegating algorithm change of class

2016-07-22 Thread Greg Hogan (JIRA)
Greg Hogan created FLINK-4257:
-

 Summary: Handle delegating algorithm change of class
 Key: FLINK-4257
 URL: https://issues.apache.org/jira/browse/FLINK-4257
 Project: Flink
  Issue Type: Bug
  Components: Gelly
Affects Versions: 1.1.0
Reporter: Greg Hogan
Assignee: Greg Hogan
 Fix For: 1.1.0


A class created by {{ProxyFactory}} can intercept and reinterpret method calls 
using its {{MethodHandler}}, but is restricted in that
* the type of the proxy class cannot be changed
* method return types must be honored

We have algorithms such as {{VertexDegree}} and {{TriangleListing}} that change 
return type depending on configuration, even between single and dual input 
functions. This can be problematic, e.g. in {{OperatorTranslation}} where we 
test {{dataSet instanceof SingleInputOperator}} or {{dataSet instanceof 
TwoInputOperator}}.

Even simply changing the operator can be problematic, e.g. 
{{MapOperator.translateToDataFlow}} returns {{MapOperatorBase}} whereas 
{{ReduceOperator.translateToDataFlow}} returns {{SingleInputOperator}}.

I see two ways to solve these issues. By adding a simple {{NoOpOperator}} that 
is skipped over during {{OperatorTranslation}} we could wrap all algorithm 
output and always be proxying the same class.

Alternatively, making changes only within Gelly, we can append a "no-op" 
pass-through {{MapFunction}} to any algorithm output which is not a 
{{SingleInputOperator}}. {{Delegate}} can also walk the superclass hierarchy 
such that we are always proxying {{SingleInputOperator}}.

There is one additional issue. When we call {{DataSet.output}} the delegate's 
{{MethodHandler}} must reinterpret this call to add itself to the list of sinks.

As part of this issue I will also add manual tests to Gelly for the library 
algorithms which do not have integration tests.
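For readers unfamiliar with the proxying involved, a rough sketch of the pattern (assuming the javassist {{ProxyFactory}}/{{MethodHandler}} API the class names suggest, and an accessible no-arg constructor on the superclass): the proxy's class is fixed at creation time, which is exactly why an algorithm whose return type switches between {{SingleInputOperator}} and {{TwoInputOperator}} cannot hide behind a single proxy.

```
// Sketch of javassist-based delegation (assumed API, not the Gelly code).
import javassist.util.proxy.MethodHandler;
import javassist.util.proxy.ProxyFactory;

public class DelegateSketch {
    public static Object proxyFor(Class<?> superclass, Object target) throws Exception {
        ProxyFactory factory = new ProxyFactory();
        factory.setSuperclass(superclass);  // the proxy's type is fixed here
        MethodHandler handler = (self, thisMethod, proceed, args) ->
            thisMethod.invoke(target, args);  // forward every call to the real operator
        // requires an accessible no-arg constructor on the superclass
        return factory.create(new Class<?>[0], new Object[0], handler);
    }
}
```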



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4152) TaskManager registration exponential backoff doesn't work

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389874#comment-15389874
 ] 

ASF GitHub Bot commented on FLINK-4152:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2257
  
At the moment I'm a little bit clueless why the test case errors occur. One 
test case error is probably related to concurrency issues in the 
`TestingLeaderRetrievalService`. I've fixed them, but the error still 
occurs in my local builds on Travis. Here the problem is that maven does not 
use the installed flink-runtime_2.10-tests.jar, which contains the updated 
`TestingLeaderRetrievalService`, but instead downloads the flink-runtime 
test jar from the snapshot repository, which contains the code with the 
concurrency issue.

The other test issue I've seen is that the `flink-yarn-tests` sometimes 
fail because the logs now contain a
```
2016-07-22 12:01:31,308 ERROR akka.remote.EndpointWriter
- AssociationError [akka.tcp://flink@172.17.2.124:57922] <- 
[akka.tcp://flink@172.17.2.124:36043]: Error [Shut down address: 
akka.tcp://flink@172.17.2.124:36043] [
akka.remote.ShutDownAssociation: Shut down address: 
akka.tcp://flink@172.17.2.124:36043
Caused by: akka.remote.transport.Transport$InvalidAssociationException: The 
remote system terminated the association because it is shutting down.
]
```

in the logs. Since the test checks for the word "exception", it fails. I 
cannot yet explain why this association error occurs now but apparently didn't 
occur before.



> TaskManager registration exponential backoff doesn't work
> -
>
> Key: FLINK-4152
> URL: https://issues.apache.org/jira/browse/FLINK-4152
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination, TaskManager, YARN Client
>Reporter: Robert Metzger
>Assignee: Till Rohrmann
> Attachments: logs.tgz
>
>
> While testing Flink 1.1 I've found that the TaskManagers are logging many 
> messages when registering at the JobManager.
> This is the log file: 
> https://gist.github.com/rmetzger/0cebe0419cdef4507b1e8a42e33ef294
> It's logging more than 3000 messages in less than a minute. I don't think that 
> this is the expected behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2257: [FLINK-4152] Allow re-registration of TMs at resource man...

2016-07-22 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2257
  
At the moment I'm a little bit clueless why the test case errors occur. One 
test case error is probably related to concurrency issues in the 
`TestingLeaderRetrievalService`. I've fixed them, but the error still 
occurs in my local builds on Travis. Here the problem is that maven does not 
use the installed flink-runtime_2.10-tests.jar, which contains the updated 
`TestingLeaderRetrievalService`, but instead downloads the flink-runtime 
test jar from the snapshot repository, which contains the code with the 
concurrency issue.

The other test issue I've seen is that the `flink-yarn-tests` sometimes 
fail because the logs now contain a
```
2016-07-22 12:01:31,308 ERROR akka.remote.EndpointWriter
- AssociationError [akka.tcp://flink@172.17.2.124:57922] <- 
[akka.tcp://flink@172.17.2.124:36043]: Error [Shut down address: 
akka.tcp://flink@172.17.2.124:36043] [
akka.remote.ShutDownAssociation: Shut down address: 
akka.tcp://flink@172.17.2.124:36043
Caused by: akka.remote.transport.Transport$InvalidAssociationException: The 
remote system terminated the association because it is shutting down.
]
```

in the logs. Since the test checks for the word "exception", it fails. I 
cannot yet explain why this association error occurs now but apparently didn't 
occur before.



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-4256) Fine-grained recovery

2016-07-22 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-4256:
---

 Summary: Fine-grained recovery
 Key: FLINK-4256
 URL: https://issues.apache.org/jira/browse/FLINK-4256
 Project: Flink
  Issue Type: Improvement
  Components: JobManager
Affects Versions: 1.1.0
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.2.0


When a task fails during execution, Flink currently resets the entire execution 
graph and triggers complete re-execution from the last completed checkpoint. 
This is more expensive than just re-executing the failed tasks.

In many cases, more fine-grained recovery is possible.

The full description and design is in the corresponding FLIP.

https://cwiki.apache.org/confluence/display/FLINK/FLIP-1+%3A+Fine+Grained+Recovery+from+Task+Failures



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-3725) Exception in thread "main" scala.MatchError: ... (of class scala.Tuple4)

2016-07-22 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-3725.
---

> Exception in thread "main" scala.MatchError: ... (of class scala.Tuple4)
> 
>
> Key: FLINK-3725
> URL: https://issues.apache.org/jira/browse/FLINK-3725
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.0.1
> Environment: \# java -version
> openjdk version "1.8.0_77"
> OpenJDK Runtime Environment (build 1.8.0_77-b03)
> OpenJDK 64-Bit Server VM (build 25.77-b03, mixed mode)
>Reporter: Maxim Dobryakov
>Assignee: Stephan Ewen
> Fix For: 1.1.0
>
>
> When I start a standalone cluster with the `bin/jobmanager.sh start cluster` 
> command everything works fine, but when I use the same command for an HA cluster 
> the JobManager raises an error and stops:
> *log/flink--jobmanager-0-example-app-1.example.local.out*
> {code}
> Exception in thread "main" scala.MatchError: ({blob.server.port=6130, 
> state.backend.fs.checkpointdir=s3://s3.example.com/example_staging_flink/checkpoints,
>  blob.storage.directory=/flink/data/blob_storage, jobmanager.heap.mb=1024, 
> fs.s3.impl=org.apache.hadoop.fs.s3.S3FileSystem, 
> restart-strategy.fixed-delay.attempts=2, recovery.mode=zookeeper, 
> jobmanager.web.port=8081, taskmanager.memory.preallocate=false, 
> jobmanager.rpc.port=0, flink.base.dir.path=/flink/conf/.., 
> recovery.zookeeper.storageDir=s3://s3.example.com/example_staging_flink/recovery,
>  taskmanager.tmp.dirs=/flink/data/task_manager, 
> restart-strategy.fixed-delay.delay=60s, taskmanager.data.port=6121, 
> recovery.zookeeper.path.root=/example_staging/flink, parallelism.default=4, 
> taskmanager.numberOfTaskSlots=4, 
> recovery.zookeeper.quorum=zookeeper-1.example.local:2181,zookeeper-2.example.local:2181,zookeeper-3.example.local:2181,
>  fs.hdfs.hadoopconf=/flink/conf, state.backend=filesystem, 
> restart-strategy=none, recovery.jobmanager.port=6123, 
> taskmanager.heap.mb=2048},CLUSTER,null,org.apache.flink.shaded.com.google.common.collect.Iterators$5@3bf7ca37)
>  (of class scala.Tuple4)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$.main(JobManager.scala:1605)
> at org.apache.flink.runtime.jobmanager.JobManager.main(JobManager.scala)
> {code}
> *log/flink--jobmanager-0-example-app-1.example.local.log*
> {code}
> 2016-04-11 10:58:31,680 DEBUG 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Rate of 
> successful kerberos logins and latency (milliseconds)])
> 2016-04-11 10:58:31,696 DEBUG 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Rate of failed 
> kerberos logins and latency (milliseconds)])
> 2016-04-11 10:58:31,697 DEBUG 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[GetGroups])
> 2016-04-11 10:58:31,699 DEBUG 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl - UgiMetrics, 
> User and group related metrics
> 2016-04-11 10:58:31,951 DEBUG org.apache.hadoop.util.Shell
>   - Failed to detect a valid hadoop home directory
> java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
> at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:303)
> at org.apache.hadoop.util.Shell.<clinit>(Shell.java:328)
> at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
> at 
> org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
> at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:272)
> at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
> at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:790)
> at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:760)
> at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:633)
> at 
> 

[jira] [Resolved] (FLINK-3725) Exception in thread "main" scala.MatchError: ... (of class scala.Tuple4)

2016-07-22 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-3725.
-
   Resolution: Fixed
 Assignee: Stephan Ewen
Fix Version/s: 1.1.0

Fixed as part of 760a0d9e7fd9fa88e9f7408b137d78df384d764f

> Exception in thread "main" scala.MatchError: ... (of class scala.Tuple4)
> 
>
> Key: FLINK-3725
> URL: https://issues.apache.org/jira/browse/FLINK-3725
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.0.1
> Environment: \# java -version
> openjdk version "1.8.0_77"
> OpenJDK Runtime Environment (build 1.8.0_77-b03)
> OpenJDK 64-Bit Server VM (build 25.77-b03, mixed mode)
>Reporter: Maxim Dobryakov
>Assignee: Stephan Ewen
> Fix For: 1.1.0
>
>
> When I start a standalone cluster with the `bin/jobmanager.sh start cluster` 
> command everything works fine, but when I use the same command for an HA cluster 
> the JobManager raises an error and stops:
> *log/flink--jobmanager-0-example-app-1.example.local.out*
> {code}
> Exception in thread "main" scala.MatchError: ({blob.server.port=6130, 
> state.backend.fs.checkpointdir=s3://s3.example.com/example_staging_flink/checkpoints,
>  blob.storage.directory=/flink/data/blob_storage, jobmanager.heap.mb=1024, 
> fs.s3.impl=org.apache.hadoop.fs.s3.S3FileSystem, 
> restart-strategy.fixed-delay.attempts=2, recovery.mode=zookeeper, 
> jobmanager.web.port=8081, taskmanager.memory.preallocate=false, 
> jobmanager.rpc.port=0, flink.base.dir.path=/flink/conf/.., 
> recovery.zookeeper.storageDir=s3://s3.example.com/example_staging_flink/recovery,
>  taskmanager.tmp.dirs=/flink/data/task_manager, 
> restart-strategy.fixed-delay.delay=60s, taskmanager.data.port=6121, 
> recovery.zookeeper.path.root=/example_staging/flink, parallelism.default=4, 
> taskmanager.numberOfTaskSlots=4, 
> recovery.zookeeper.quorum=zookeeper-1.example.local:2181,zookeeper-2.example.local:2181,zookeeper-3.example.local:2181,
>  fs.hdfs.hadoopconf=/flink/conf, state.backend=filesystem, 
> restart-strategy=none, recovery.jobmanager.port=6123, 
> taskmanager.heap.mb=2048},CLUSTER,null,org.apache.flink.shaded.com.google.common.collect.Iterators$5@3bf7ca37)
>  (of class scala.Tuple4)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$.main(JobManager.scala:1605)
> at org.apache.flink.runtime.jobmanager.JobManager.main(JobManager.scala)
> {code}
> *log/flink--jobmanager-0-example-app-1.example.local.log*
> {code}
> 2016-04-11 10:58:31,680 DEBUG 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Rate of 
> successful kerberos logins and latency (milliseconds)])
> 2016-04-11 10:58:31,696 DEBUG 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Rate of failed 
> kerberos logins and latency (milliseconds)])
> 2016-04-11 10:58:31,697 DEBUG 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[GetGroups])
> 2016-04-11 10:58:31,699 DEBUG 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl - UgiMetrics, 
> User and group related metrics
> 2016-04-11 10:58:31,951 DEBUG org.apache.hadoop.util.Shell
>   - Failed to detect a valid hadoop home directory
> java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
> at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:303)
> at org.apache.hadoop.util.Shell.<clinit>(Shell.java:328)
> at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
> at 
> org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
> at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:272)
> at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
> at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:790)
> at 
> 

[jira] [Commented] (FLINK-4192) Move Metrics API to separate module

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389776#comment-15389776
 ] 

ASF GitHub Bot commented on FLINK-4192:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2226
  
Had a quick offline discussion with @zentol and @aljoscha with the outcome:

  - Make `flink-metrics-core` strictly the metrics API project for Metrics 
and Reporters.
  - Move all implementations of the metric groups and the MetricRegistry to 
`flink-runtime`.
  - `flink-core` is actually metric-free and depends on the metric API 
project only.

Users who want to implement reporters refer only to `flink-metrics-core`. 
For tests, a dependency on `flink-runtime` is needed.

We may move further classes to `flink-metrics-core` in the future to reduce 
test dependencies on `flink-runtime`, but that is an open issue at this point.
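As a rough sketch of what a user-defined reporter against `flink-metrics-core` might look like (the interface location, method names, and the `MetricConfig` parameter are assumed from this discussion, not verified against the final module layout):

```
// Hypothetical reporter sketch; interface and signatures are assumed.
import org.apache.flink.metrics.Metric;
import org.apache.flink.metrics.MetricConfig;
import org.apache.flink.metrics.MetricGroup;
import org.apache.flink.metrics.reporter.MetricReporter;

public class LoggingReporter implements MetricReporter {
    @Override
    public void open(MetricConfig config) {
        // read reporter-specific settings here
    }

    @Override
    public void close() {}

    @Override
    public void notifyOfAddedMetric(Metric metric, String metricName, MetricGroup group) {
        System.out.println("added metric: " + metricName);
    }

    @Override
    public void notifyOfRemovedMetric(Metric metric, String metricName, MetricGroup group) {
        System.out.println("removed metric: " + metricName);
    }
}
```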


> Move Metrics API to separate module
> ---
>
> Key: FLINK-4192
> URL: https://issues.apache.org/jira/browse/FLINK-4192
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.1.0
>
>
> All metrics code currently resides in flink-core. If a user implements a 
> reporter and wants a fat jar it will now have to include the entire 
> flink-core module.
> Instead, we could move several interfaces into a separate module.
> These interfaces to move include:
> * Counter, Gauge, Histogram(Statistics)
> * MetricGroup
> * MetricReporter, Scheduled, AbstractReporter
> In addition a new MetricRegistry interface will be required as well as a 
> replacement for the Configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2226: [FLINK-4192] - Move Metrics API to separate module

2016-07-22 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2226
  
Had a quick offline discussion with @zentol and @aljoscha with the outcome:

  - Make `flink-metrics-core` strictly the metrics API project for Metrics 
and Reporters.
  - Move all implementations of the metric groups and the MetricRegistry to 
`flink-runtime`.
  - `flink-core` is actually metric-free and depends on the metric API 
project only.

Users who want to implement reporters refer only to `flink-metrics-core`. 
For tests, a dependency on `flink-runtime` is needed.

We may move further classes to `flink-metrics-core` in the future to reduce 
test dependencies on `flink-runtime`, but that is an open issue at this point.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3725) Exception in thread "main" scala.MatchError: ... (of class scala.Tuple4)

2016-07-22 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389770#comment-15389770
 ] 

Stephan Ewen commented on FLINK-3725:
-

This looks like a strange classpath issue - probably multiple conflicting 
shaded versions of Guava (oh my).

That issue is fixed in 1.1 - no dependency on shaded Guava at that point 
anymore.



> Exception in thread "main" scala.MatchError: ... (of class scala.Tuple4)
> 
>
> Key: FLINK-3725
> URL: https://issues.apache.org/jira/browse/FLINK-3725
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.0.1
> Environment: \# java -version
> openjdk version "1.8.0_77"
> OpenJDK Runtime Environment (build 1.8.0_77-b03)
> OpenJDK 64-Bit Server VM (build 25.77-b03, mixed mode)
>Reporter: Maxim Dobryakov
>
> When I start a standalone cluster with the `bin/jobmanager.sh start cluster` 
> command everything works fine, but when I use the same command for an HA cluster 
> the JobManager raises an error and stops:
> *log/flink--jobmanager-0-example-app-1.example.local.out*
> {code}
> Exception in thread "main" scala.MatchError: ({blob.server.port=6130, 
> state.backend.fs.checkpointdir=s3://s3.example.com/example_staging_flink/checkpoints,
>  blob.storage.directory=/flink/data/blob_storage, jobmanager.heap.mb=1024, 
> fs.s3.impl=org.apache.hadoop.fs.s3.S3FileSystem, 
> restart-strategy.fixed-delay.attempts=2, recovery.mode=zookeeper, 
> jobmanager.web.port=8081, taskmanager.memory.preallocate=false, 
> jobmanager.rpc.port=0, flink.base.dir.path=/flink/conf/.., 
> recovery.zookeeper.storageDir=s3://s3.example.com/example_staging_flink/recovery,
>  taskmanager.tmp.dirs=/flink/data/task_manager, 
> restart-strategy.fixed-delay.delay=60s, taskmanager.data.port=6121, 
> recovery.zookeeper.path.root=/example_staging/flink, parallelism.default=4, 
> taskmanager.numberOfTaskSlots=4, 
> recovery.zookeeper.quorum=zookeeper-1.example.local:2181,zookeeper-2.example.local:2181,zookeeper-3.example.local:2181,
>  fs.hdfs.hadoopconf=/flink/conf, state.backend=filesystem, 
> restart-strategy=none, recovery.jobmanager.port=6123, 
> taskmanager.heap.mb=2048},CLUSTER,null,org.apache.flink.shaded.com.google.common.collect.Iterators$5@3bf7ca37)
>  (of class scala.Tuple4)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$.main(JobManager.scala:1605)
> at org.apache.flink.runtime.jobmanager.JobManager.main(JobManager.scala)
> {code}
> *log/flink--jobmanager-0-example-app-1.example.local.log*
> {code}
> 2016-04-11 10:58:31,680 DEBUG 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Rate of 
> successful kerberos logins and latency (milliseconds)])
> 2016-04-11 10:58:31,696 DEBUG 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Rate of failed 
> kerberos logins and latency (milliseconds)])
> 2016-04-11 10:58:31,697 DEBUG 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[GetGroups])
> 2016-04-11 10:58:31,699 DEBUG 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl - UgiMetrics, 
> User and group related metrics
> 2016-04-11 10:58:31,951 DEBUG org.apache.hadoop.util.Shell
>   - Failed to detect a valid hadoop home directory
> java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
> at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:303)
> at org.apache.hadoop.util.Shell.<clinit>(Shell.java:328)
> at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
> at 
> org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
> at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:272)
> at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
> at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:790)
> at 
> 

[GitHub] flink pull request #2287: [hotfix] Prevent CheckpointCommitter from failing ...

2016-07-22 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/2287

[hotfix] Prevent CheckpointCommitter from failing job

This PR fixes an issue in the `GenericWriteAheadSink` that @aljoscha 
stumbled upon.

If the sink fails while writing the data into the external storage, the sink 
retries on the next notify; it will never fail the job. However, the 
CheckpointCommitter does not behave that way; if the storage cannot be queried 
it will throw an exception that is not caught by the sink.

The exceptions are now caught and logged by the sink.

@aljoscha Could you try out whether this fixes the issue?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink cass_cc_failure

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2287.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2287


commit e6c69b5c165489ea513ac134ff9801c78e88e948
Author: zentol 
Date:   2016-07-22T15:50:10Z

[hotfix] Prevent CheckpointCommitter from failing job

Prevents the CheckpointCommitter from failing a job if either
commitCheckpoint() or isCheckpointCommitted() fails. Instead, we will
try again on the next notify().
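A minimal sketch of the catch-and-retry-on-next-notify pattern this hotfix describes (the `Committer` interface below is a hypothetical stand-in, not Flink's actual `CheckpointCommitter` API):

```
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical stand-in for the committer; not Flink's actual API.
interface Committer {
    boolean isCheckpointCommitted(long checkpointId) throws Exception;
    void commitCheckpoint(long checkpointId) throws Exception;
}

public class RetryOnNotifySketch {
    private static final Logger LOG = LoggerFactory.getLogger(RetryOnNotifySketch.class);
    private final Committer committer;

    public RetryOnNotifySketch(Committer committer) {
        this.committer = committer;
    }

    /** Called on each checkpoint-complete notification. */
    public void notifyCheckpointComplete(long checkpointId) {
        try {
            if (!committer.isCheckpointCommitted(checkpointId)) {
                committer.commitCheckpoint(checkpointId);  // may hit external storage
            }
        } catch (Exception e) {
            // Log instead of rethrowing: the job keeps running and the
            // commit is simply retried on the next notification.
            LOG.error("Committing checkpoint {} failed; will retry on next notify.",
                checkpointId, e);
        }
    }
}
```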




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4228) RocksDB semi-async snapshot to S3AFileSystem fails

2016-07-22 Thread Gary Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389743#comment-15389743
 ] 

Gary Yao commented on FLINK-4228:
-

We also want to use RocksDB with checkpointing to s3a. We would prefer it if 
you patched this into 1.0.x.

> RocksDB semi-async snapshot to S3AFileSystem fails
> --
>
> Key: FLINK-4228
> URL: https://issues.apache.org/jira/browse/FLINK-4228
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>
> Using the {{RocksDBStateBackend}} with semi-async snapshots (current default) 
> leads to an Exception when uploading the snapshot to S3 when using the 
> {{S3AFileSystem}}.
> {code}
> AsynchronousException{com.amazonaws.AmazonClientException: Unable to 
> calculate MD5 hash: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)}
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointThread.run(StreamTask.java:870)
> Caused by: com.amazonaws.AmazonClientException: Unable to calculate MD5 hash: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1298)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:108)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:100)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.upload(UploadMonitor.java:192)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:150)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:50)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.<init>(FileInputStream.java:138)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1294)
>   ... 9 more
> {code}
> Running with S3NFileSystem, the error does not occur. The problem might be 
> due to {{HDFSCopyToLocal}} assuming that sub-folders are going to be created 
> automatically. We might need to manually create folders and copy only actual 
> files for {{S3AFileSystem}}. More investigation is required.
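A sketch of the fix direction suggested in the last paragraph (illustrative, not the actual Flink utility): walk the local snapshot directory, create remote directories explicitly, and hand only regular files to the Hadoop `FileSystem`, so S3A never receives a directory path.

```
// Illustrative sketch (not Flink code): create directories explicitly
// and upload only regular files.
import java.io.File;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyFilesOnlySketch {
    public static void copyRecursively(File local, Path remote, FileSystem fs)
            throws Exception {
        if (local.isDirectory()) {
            fs.mkdirs(remote);  // do not rely on implicit folder creation
            File[] children = local.listFiles();
            if (children != null) {
                for (File child : children) {
                    copyRecursively(child, new Path(remote, child.getName()), fs);
                }
            }
        } else {
            // only regular files reach this branch, so S3A never sees a directory
            fs.copyFromLocalFile(new Path(local.getAbsolutePath()), remote);
        }
    }
}
```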



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389703#comment-15389703
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898897
  
--- Diff: 
flink-metrics/flink-metrics-statsd/src/test/java/org/apache/flink/metrics/statsd/StatsDReporterTest.java
 ---
@@ -137,9 +140,11 @@ public void testStatsDHistogramReporting() throws 
Exception {
int port = receiver.getPort();
 
Configuration config = new Configuration();
-   
config.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
StatsDReporter.class.getName());
-   
config.setString(ConfigConstants.METRICS_REPORTER_INTERVAL, "1 SECONDS");
-   
config.setString(ConfigConstants.METRICS_REPORTER_ARGUMENTS, "--host localhost 
--port " + port);
+   
config.setString(ConfigConstants.METRICS_REPORTERS_LIST, "test");
+   
config.setString(ConfigConstants.METRICS_REPORTER_PREFIX + "test.class", 
StatsDReporter.class.getName());
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`
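Presumably the suggested form is to build the key from the shared constant rather than the hard-coded `.class` literal, along these lines (a sketch mirroring the test line above; assumes `METRICS_REPORTER_CLASS_SUFFIX` resolves to `"class"`):

```
Configuration config = new Configuration();
config.setString(ConfigConstants.METRICS_REPORTERS_LIST, "test");
config.setString(
    ConfigConstants.METRICS_REPORTER_PREFIX + "test."
        + ConfigConstants.METRICS_REPORTER_CLASS_SUFFIX,
    StatsDReporter.class.getName());
```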


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389702#comment-15389702
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898876
  
--- Diff: 
flink-metrics/flink-metrics-dropwizard/src/test/java/org/apache/flink/dropwizard/ScheduledDropwizardReporterTest.java
 ---
@@ -67,7 +67,9 @@ public void testAddingMetrics() throws 
NoSuchFieldException, IllegalAccessExcept
String taskManagerId = "tas:kMana::ger";
String counterName = "testCounter";
 
-   configuration.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
"org.apache.flink.dropwizard.ScheduledDropwizardReporterTest$TestingScheduledDropwizardReporter");
+   configuration.setString(ConfigConstants.METRICS_REPORTERS_LIST, 
"test");
+   configuration.setString(ConfigConstants.METRICS_REPORTER_PREFIX 
+ "test.class", 
"org.apache.flink.dropwizard.ScheduledDropwizardReporterTest$TestingScheduledDropwizardReporter");
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389700#comment-15389700
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898844
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/metrics/reporter/JMXReporterTest.java 
---
@@ -128,18 +133,26 @@ public Integer getValue() {
@Test
public void testJMXAvailability() throws Exception {
Configuration cfg = new Configuration();
-   cfg.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
TestReporter.class.getName());
+   cfg.setString(ConfigConstants.METRICS_REPORTER_CLASS_SUFFIX, 
TestReporter.class.getName());
+
+   cfg.setString(ConfigConstants.METRICS_REPORTERS_LIST, 
"test1,test2");
+
+   cfg.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test1.class", JMXReporter.class.getName());
+   cfg.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test1.port", "9040-9055");
+
+   cfg.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test2.class", JMXReporter.class.getName());
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2285: [FLINK-4246] Allow Specifying Multiple Metrics Rep...

2016-07-22 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898897
  
--- Diff: 
flink-metrics/flink-metrics-statsd/src/test/java/org/apache/flink/metrics/statsd/StatsDReporterTest.java
 ---
@@ -137,9 +140,11 @@ public void testStatsDHistogramReporting() throws 
Exception {
int port = receiver.getPort();
 
Configuration config = new Configuration();
-   
config.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
StatsDReporter.class.getName());
-   
config.setString(ConfigConstants.METRICS_REPORTER_INTERVAL, "1 SECONDS");
-   
config.setString(ConfigConstants.METRICS_REPORTER_ARGUMENTS, "--host localhost 
--port " + port);
+   
config.setString(ConfigConstants.METRICS_REPORTERS_LIST, "test");
+   
config.setString(ConfigConstants.METRICS_REPORTER_PREFIX + "test.class", 
StatsDReporter.class.getName());
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389701#comment-15389701
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898858
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/metrics/reporter/JMXReporterTest.java 
---
@@ -197,7 +204,8 @@ public void testHistogramReporting() throws Exception {
 
try {
Configuration config = new Configuration();
-   
config.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
"org.apache.flink.metrics.reporter.JMXReporter");
+   
config.setString(ConfigConstants.METRICS_REPORTERS_LIST, "jmx_test");
+   
config.setString(ConfigConstants.METRICS_REPORTER_PREFIX + "jmx_test.class", 
"org.apache.flink.metrics.reporter.JMXReporter");
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389698#comment-15389698
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898835
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/metrics/reporter/JMXReporterTest.java 
---
@@ -128,18 +133,26 @@ public Integer getValue() {
@Test
public void testJMXAvailability() throws Exception {
Configuration cfg = new Configuration();
-   cfg.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
TestReporter.class.getName());
+   cfg.setString(ConfigConstants.METRICS_REPORTER_CLASS_SUFFIX, 
TestReporter.class.getName());
+
+   cfg.setString(ConfigConstants.METRICS_REPORTERS_LIST, 
"test1,test2");
+
+   cfg.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test1.class", JMXReporter.class.getName());
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2285: [FLINK-4246] Allow Specifying Multiple Metrics Rep...

2016-07-22 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898876
  
--- Diff: 
flink-metrics/flink-metrics-dropwizard/src/test/java/org/apache/flink/dropwizard/ScheduledDropwizardReporterTest.java
 ---
@@ -67,7 +67,9 @@ public void testAddingMetrics() throws 
NoSuchFieldException, IllegalAccessExcept
String taskManagerId = "tas:kMana::ger";
String counterName = "testCounter";
 
-   configuration.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
"org.apache.flink.dropwizard.ScheduledDropwizardReporterTest$TestingScheduledDropwizardReporter");
+   configuration.setString(ConfigConstants.METRICS_REPORTERS_LIST, 
"test");
+   configuration.setString(ConfigConstants.METRICS_REPORTER_PREFIX 
+ "test.class", 
"org.apache.flink.dropwizard.ScheduledDropwizardReporterTest$TestingScheduledDropwizardReporter");
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389694#comment-15389694
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898811
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/metrics/reporter/JMXReporterTest.java 
---
@@ -79,19 +81,24 @@ public void testGenerateName() {
@Test
public void testPortConflictHandling() throws Exception {
Configuration cfg = new Configuration();
-   cfg.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
TestReporter.class.getName());
+   cfg.setString(ConfigConstants.METRICS_REPORTERS_LIST, 
"test1,test2");
+
+   cfg.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test1.class", JMXReporter.class.getName());
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2285: [FLINK-4246] Allow Specifying Multiple Metrics Rep...

2016-07-22 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898811
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/metrics/reporter/JMXReporterTest.java 
---
@@ -79,19 +81,24 @@ public void testGenerateName() {
@Test
public void testPortConflictHandling() throws Exception {
Configuration cfg = new Configuration();
-   cfg.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
TestReporter.class.getName());
+   cfg.setString(ConfigConstants.METRICS_REPORTERS_LIST, 
"test1,test2");
+
+   cfg.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test1.class", JMXReporter.class.getName());
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2285: [FLINK-4246] Allow Specifying Multiple Metrics Rep...

2016-07-22 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898826
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/metrics/reporter/JMXReporterTest.java 
---
@@ -79,19 +81,24 @@ public void testGenerateName() {
@Test
public void testPortConflictHandling() throws Exception {
Configuration cfg = new Configuration();
-   cfg.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
TestReporter.class.getName());
+   cfg.setString(ConfigConstants.METRICS_REPORTERS_LIST, 
"test1,test2");
+
+   cfg.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test1.class", JMXReporter.class.getName());
+   cfg.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test1.port", "9020-9035");
+
+   cfg.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test2.class", JMXReporter.class.getName());
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389697#comment-15389697
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898826
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/metrics/reporter/JMXReporterTest.java 
---
@@ -79,19 +81,24 @@ public void testGenerateName() {
@Test
public void testPortConflictHandling() throws Exception {
Configuration cfg = new Configuration();
-   cfg.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
TestReporter.class.getName());
+   cfg.setString(ConfigConstants.METRICS_REPORTERS_LIST, 
"test1,test2");
+
+   cfg.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test1.class", JMXReporter.class.getName());
+   cfg.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test1.port", "9020-9035");
+
+   cfg.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test2.class", JMXReporter.class.getName());
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2285: [FLINK-4246] Allow Specifying Multiple Metrics Rep...

2016-07-22 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898798
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/metrics/MetricRegistryTest.java ---
@@ -88,8 +91,10 @@ public void open(Configuration config) {
public void testReporterScheduling() throws InterruptedException {
Configuration config = new Configuration();
 
-   config.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
TestReporter3.class.getName());
-   config.setString(ConfigConstants.METRICS_REPORTER_INTERVAL, "50 
MILLISECONDS");
+   config.setString(ConfigConstants.METRICS_REPORTERS_LIST, 
"test");
+   config.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test.class", TestReporter3.class.getName());
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2285: [FLINK-4246] Allow Specifying Multiple Metrics Rep...

2016-07-22 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898844
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/metrics/reporter/JMXReporterTest.java 
---
@@ -128,18 +133,26 @@ public Integer getValue() {
@Test
public void testJMXAvailability() throws Exception {
Configuration cfg = new Configuration();
-   cfg.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
TestReporter.class.getName());
+   cfg.setString(ConfigConstants.METRICS_REPORTER_CLASS_SUFFIX, 
TestReporter.class.getName());
+
+   cfg.setString(ConfigConstants.METRICS_REPORTERS_LIST, 
"test1,test2");
+
+   cfg.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test1.class", JMXReporter.class.getName());
+   cfg.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test1.port", "9040-9055");
+
+   cfg.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test2.class", JMXReporter.class.getName());
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2285: [FLINK-4246] Allow Specifying Multiple Metrics Rep...

2016-07-22 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898858
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/metrics/reporter/JMXReporterTest.java 
---
@@ -197,7 +204,8 @@ public void testHistogramReporting() throws Exception {
 
try {
Configuration config = new Configuration();
-   
config.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
"org.apache.flink.metrics.reporter.JMXReporter");
+   
config.setString(ConfigConstants.METRICS_REPORTERS_LIST, "jmx_test");
+   
config.setString(ConfigConstants.METRICS_REPORTER_PREFIX + "jmx_test.class", 
"org.apache.flink.metrics.reporter.JMXReporter");
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389690#comment-15389690
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898715
  
--- Diff: 
flink-metrics/flink-metrics-statsd/src/test/java/org/apache/flink/metrics/statsd/StatsDReporterTest.java
 ---
@@ -138,12 +138,11 @@ public void testStatsDHistogramReporting() throws 
Exception {
receiverThread.start();
 
int port = receiver.getPort();
-   System.out.println("PORT: " + port);
 
Configuration config = new Configuration();

config.setString(ConfigConstants.METRICS_REPORTERS_LIST, "test");

config.setString(ConfigConstants.METRICS_REPORTER_PREFIX + "test.class", 
StatsDReporter.class.getName());
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3940) Add support for ORDER BY OFFSET FETCH

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389693#comment-15389693
 ] 

ASF GitHub Bot commented on FLINK-3940:
---

Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2282#discussion_r71898802
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/logical/operators.scala
 ---
@@ -150,6 +150,41 @@ case class Sort(order: Seq[Ordering], child: 
LogicalNode) extends UnaryNode {
   }
 }
 
+case class Offset(offset: Int, child: LogicalNode) extends UnaryNode {
+  override def output: Seq[Attribute] = child.output
+
+  override protected[logical] def construct(relBuilder: RelBuilder): 
RelBuilder = {
+child.construct(relBuilder)
+relBuilder.limit(offset, -1)
+  }
+
+  override def validate(tableEnv: TableEnvironment): LogicalNode = {
+if (tableEnv.isInstanceOf[StreamTableEnvironment]) {
+  throw new TableException(s"Offset on stream tables is currently not 
supported.")
+}
+super.validate(tableEnv)
+  }
+}
+
+case class Fetch(fetch: Int, child: LogicalNode) extends UnaryNode {
+  override def output: Seq[Attribute] = child.output
+
+  override protected[logical] def construct(relBuilder: RelBuilder): 
RelBuilder = {
+
+val newChild = child.asInstanceOf[Offset].child
+newChild.construct(relBuilder)
+val relNode = child.toRelNode(relBuilder).asInstanceOf[LogicalSort]
+relBuilder.limit(RexLiteral.intValue(relNode.offset), fetch)
+  }
+
+  override def validate(tableEnv: TableEnvironment): LogicalNode = {
+if (tableEnv.isInstanceOf[StreamTableEnvironment]) {
+  throw new TableException(s"Fetch on stream tables is currently not 
supported.")
+}
--- End diff --

I think we need to check that 'fetch' follows an 'orderBy' and an 'offset' 
here. Otherwise, the class cast in construct will throw an exception.


> Add support for ORDER BY OFFSET FETCH
> -
>
> Key: FLINK-3940
> URL: https://issues.apache.org/jira/browse/FLINK-3940
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Fabian Hueske
>Assignee: GaoLun
>Priority: Minor
>
> Currently only ORDER BY without OFFSET and FETCH are supported.
> This issue tracks the effort to add support for OFFSET and FETCH and involves:
> - Implementing the execution strategy in `DataSetSort`
> - adapting the `DataSetSortRule` to support OFFSET and FETCH
> - extending the Table API and validation to support OFFSET and FETCH and 
> generate a corresponding RelNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3940) Add support for ORDER BY OFFSET FETCH

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389695#comment-15389695
 ] 

ASF GitHub Bot commented on FLINK-3940:
---

Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2282#discussion_r71898818
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/logical/operators.scala
 ---
@@ -150,6 +150,41 @@ case class Sort(order: Seq[Ordering], child: 
LogicalNode) extends UnaryNode {
   }
 }
 
+case class Offset(offset: Int, child: LogicalNode) extends UnaryNode {
+  override def output: Seq[Attribute] = child.output
+
+  override protected[logical] def construct(relBuilder: RelBuilder): 
RelBuilder = {
+child.construct(relBuilder)
+relBuilder.limit(offset, -1)
+  }
+
+  override def validate(tableEnv: TableEnvironment): LogicalNode = {
+if (tableEnv.isInstanceOf[StreamTableEnvironment]) {
+  throw new TableException(s"Offset on stream tables is currently not 
supported.")
+}
--- End diff --

I think we should check that 'offset' follows an 'orderBy' here.




> Add support for ORDER BY OFFSET FETCH
> -
>
> Key: FLINK-3940
> URL: https://issues.apache.org/jira/browse/FLINK-3940
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Fabian Hueske
>Assignee: GaoLun
>Priority: Minor
>
> Currently only ORDER BY without OFFSET and FETCH are supported.
> This issue tracks the effort to add support for OFFSET and FETCH and involves:
> - Implementing the execution strategy in `DataSetSort`
> - adapting the `DataSetSortRule` to support OFFSET and FETCH
> - extending the Table API and validation to support OFFSET and FETCH and 
> generate a corresponding RelNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2282: [FLINK-3940] [table] Add support for ORDER BY OFFS...

2016-07-22 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2282#discussion_r71898802
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/logical/operators.scala
 ---
@@ -150,6 +150,41 @@ case class Sort(order: Seq[Ordering], child: 
LogicalNode) extends UnaryNode {
   }
 }
 
+case class Offset(offset: Int, child: LogicalNode) extends UnaryNode {
+  override def output: Seq[Attribute] = child.output
+
+  override protected[logical] def construct(relBuilder: RelBuilder): 
RelBuilder = {
+child.construct(relBuilder)
+relBuilder.limit(offset, -1)
+  }
+
+  override def validate(tableEnv: TableEnvironment): LogicalNode = {
+if (tableEnv.isInstanceOf[StreamTableEnvironment]) {
+  throw new TableException(s"Offset on stream tables is currently not 
supported.")
+}
+super.validate(tableEnv)
+  }
+}
+
+case class Fetch(fetch: Int, child: LogicalNode) extends UnaryNode {
+  override def output: Seq[Attribute] = child.output
+
+  override protected[logical] def construct(relBuilder: RelBuilder): 
RelBuilder = {
+
+val newChild = child.asInstanceOf[Offset].child
+newChild.construct(relBuilder)
+val relNode = child.toRelNode(relBuilder).asInstanceOf[LogicalSort]
+relBuilder.limit(RexLiteral.intValue(relNode.offset), fetch)
+  }
+
+  override def validate(tableEnv: TableEnvironment): LogicalNode = {
+if (tableEnv.isInstanceOf[StreamTableEnvironment]) {
+  throw new TableException(s"Fetch on stream tables is currently not 
supported.")
+}
--- End diff --

I think we need to check that 'fetch' follows an 'orderBy' and an 'offset' 
here. Otherwise, the class cast in construct will throw an exception.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2282: [FLINK-3940] [table] Add support for ORDER BY OFFS...

2016-07-22 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2282#discussion_r71898818
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/logical/operators.scala
 ---
@@ -150,6 +150,41 @@ case class Sort(order: Seq[Ordering], child: 
LogicalNode) extends UnaryNode {
   }
 }
 
+case class Offset(offset: Int, child: LogicalNode) extends UnaryNode {
+  override def output: Seq[Attribute] = child.output
+
+  override protected[logical] def construct(relBuilder: RelBuilder): 
RelBuilder = {
+child.construct(relBuilder)
+relBuilder.limit(offset, -1)
+  }
+
+  override def validate(tableEnv: TableEnvironment): LogicalNode = {
+if (tableEnv.isInstanceOf[StreamTableEnvironment]) {
+  throw new TableException(s"Offset on stream tables is currently not 
supported.")
+}
--- End diff --

I think we should check that 'offset' follows an 'orderBy' here.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2285: [FLINK-4246] Allow Specifying Multiple Metrics Rep...

2016-07-22 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898835
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/metrics/reporter/JMXReporterTest.java 
---
@@ -128,18 +133,26 @@ public Integer getValue() {
@Test
public void testJMXAvailability() throws Exception {
Configuration cfg = new Configuration();
-   cfg.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
TestReporter.class.getName());
+   cfg.setString(ConfigConstants.METRICS_REPORTER_CLASS_SUFFIX, 
TestReporter.class.getName());
+
+   cfg.setString(ConfigConstants.METRICS_REPORTERS_LIST, 
"test1,test2");
+
+   cfg.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test1.class", JMXReporter.class.getName());
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389691#comment-15389691
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898790
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/metrics/MetricRegistryTest.java ---
@@ -42,7 +42,8 @@
public void testReporterInstantiation() {
Configuration config = new Configuration();
 
-   config.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
TestReporter1.class.getName());
+   config.setString(ConfigConstants.METRICS_REPORTERS_LIST, 
"test");
+   config.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test.class", TestReporter1.class.getName());
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389692#comment-15389692
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898798
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/metrics/MetricRegistryTest.java ---
@@ -88,8 +91,10 @@ public void open(Configuration config) {
public void testReporterScheduling() throws InterruptedException {
Configuration config = new Configuration();
 
-   config.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
TestReporter3.class.getName());
-   config.setString(ConfigConstants.METRICS_REPORTER_INTERVAL, "50 
MILLISECONDS");
+   config.setString(ConfigConstants.METRICS_REPORTERS_LIST, 
"test");
+   config.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test.class", TestReporter3.class.getName());
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2285: [FLINK-4246] Allow Specifying Multiple Metrics Rep...

2016-07-22 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898790
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/metrics/MetricRegistryTest.java ---
@@ -42,7 +42,8 @@
public void testReporterInstantiation() {
Configuration config = new Configuration();
 
-   config.setString(ConfigConstants.METRICS_REPORTER_CLASS, 
TestReporter1.class.getName());
+   config.setString(ConfigConstants.METRICS_REPORTERS_LIST, 
"test");
+   config.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
"test.class", TestReporter1.class.getName());
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2285: [FLINK-4246] Allow Specifying Multiple Metrics Rep...

2016-07-22 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898715
  
--- Diff: 
flink-metrics/flink-metrics-statsd/src/test/java/org/apache/flink/metrics/statsd/StatsDReporterTest.java
 ---
@@ -138,12 +138,11 @@ public void testStatsDHistogramReporting() throws 
Exception {
receiverThread.start();
 
int port = receiver.getPort();
-   System.out.println("PORT: " + port);
 
Configuration config = new Configuration();

config.setString(ConfigConstants.METRICS_REPORTERS_LIST, "test");

config.setString(ConfigConstants.METRICS_REPORTER_PREFIX + "test.class", 
StatsDReporter.class.getName());
--- End diff --

using `.class` instead of `METRICS_REPORTER_CLASS_SUFFIX`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389687#comment-15389687
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898559
  
--- Diff: 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/WindowWordCount.java
 ---
@@ -17,12 +17,13 @@
 
 package org.apache.flink.streaming.examples.windowing;
 
+import org.apache.flink.api.common.functions.FilterFunction;
+import org.apache.flink.api.common.functions.ReduceFunction;
 import org.apache.flink.api.java.tuple.Tuple2;
-import org.apache.flink.api.java.utils.ParameterTool;
-import org.apache.flink.examples.java.wordcount.util.WordCountData;
--- End diff --

well these changes don't belong here :)


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389688#comment-15389688
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/2285
  
Yep, but that's not done very often, only when instantiating the 
`MetricRegistry`.


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389686#comment-15389686
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898375
  
--- Diff: docs/apis/metrics.md ---
@@ -229,21 +229,25 @@ or by assigning unique names to jobs and operators.
 
 Metrics can be exposed to an external system by configuring one or several 
reporters in `conf/flink-conf.yaml`.
 
-- `metrics.reporters`: The list of named reporters, i.e. "foo,bar".
+- `metrics.reporters`: The list of named reporters.
 - `metrics.reporter.<name>.<config>`: Generic setting `<config>` for the 
reporter named `<name>`.
 - `metrics.reporter.<name>.class`: The reporter class to use for the 
reporter named `<name>`.
 - `metrics.reporter.<name>.interval`: The reporter interval to use for the 
reporter named `<name>`.
 
-All reporters must at least have the `class` config, some allow specifying 
a reporting `interval`. Below,
+All reporters must at least have the `class` property, some allow 
specifying a reporting `interval`. Below,
 we will list more settings specific to each reporter.
 
-Example reporter configuration for using the built-in JMX reporter:
+Example reporter configuration that specifies multiple reporters:
 
 ```
-metrics.reporters: my_jmx_reporter
+metrics.reporters: my_jmx_reporter,my_other_reporter
 
 metrics.reporter.my_jmx_reporter.class: 
org.apache.flink.metrics.reporter.JMXReporter
-metrics.reporter.my_jmx_reporter.port: 9020-940
+metrics.reporter.my_jmx_reporter.port: 9020-9040
+
+metrics.reporter.my_jmx_reporter.class: 
org.apache.flink.metrics.graphite.GraphiteReporter
--- End diff --

should be named my_other_reporter


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2285: [FLINK-4246] Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/2285
  
Yep, but that's not done very often, only when instantiating the 
`MetricRegistry`.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2285: [FLINK-4246] Allow Specifying Multiple Metrics Rep...

2016-07-22 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898559
  
--- Diff: 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/WindowWordCount.java
 ---
@@ -17,12 +17,13 @@
 
 package org.apache.flink.streaming.examples.windowing;
 
+import org.apache.flink.api.common.functions.FilterFunction;
+import org.apache.flink.api.common.functions.ReduceFunction;
 import org.apache.flink.api.java.tuple.Tuple2;
-import org.apache.flink.api.java.utils.ParameterTool;
-import org.apache.flink.examples.java.wordcount.util.WordCountData;
--- End diff --

well these changes don't belong here :)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389685#comment-15389685
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2285
  
That would imply iterating over the entire configuration and checking for 
the prefix, correct?


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2285: [FLINK-4246] Allow Specifying Multiple Metrics Rep...

2016-07-22 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898375
  
--- Diff: docs/apis/metrics.md ---
@@ -229,21 +229,25 @@ or by assigning unique names to jobs and operators.
 
 Metrics can be exposed to an external system by configuring one or several 
reporters in `conf/flink-conf.yaml`.
 
-- `metrics.reporters`: The list of named reporters, i.e. "foo,bar".
+- `metrics.reporters`: The list of named reporters.
 - `metrics.reporter.<name>.<config>`: Generic setting `<config>` for the 
reporter named `<name>`.
 - `metrics.reporter.<name>.class`: The reporter class to use for the 
reporter named `<name>`.
 - `metrics.reporter.<name>.interval`: The reporter interval to use for the 
reporter named `<name>`.
 
-All reporters must at least have the `class` config, some allow specifying 
a reporting `interval`. Below,
+All reporters must at least have the `class` property, some allow 
specifying a reporting `interval`. Below,
 we will list more settings specific to each reporter.
 
-Example reporter configuration for using the built-in JMX reporter:
+Example reporter configuration that specifies multiple reporters:
 
 ```
-metrics.reporters: my_jmx_reporter
+metrics.reporters: my_jmx_reporter,my_other_reporter
 
 metrics.reporter.my_jmx_reporter.class: 
org.apache.flink.metrics.reporter.JMXReporter
-metrics.reporter.my_jmx_reporter.port: 9020-940
+metrics.reporter.my_jmx_reporter.port: 9020-9040
+
+metrics.reporter.my_jmx_reporter.class: 
org.apache.flink.metrics.graphite.GraphiteReporter
--- End diff --

should be named my_other_reporter


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #2285: [FLINK-4246] Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2285
  
That would imply iterating over the entire configuration and checking for 
the prefix, correct?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389681#comment-15389681
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898033
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/metrics/MetricRegistry.java ---
@@ -75,77 +78,92 @@ public MetricRegistry(Configuration config) {
this.delimiter = delim;
 
// second, instantiate any custom configured reporters
-   
-   final String className = 
config.getString(ConfigConstants.METRICS_REPORTER_CLASS, null);
-   if (className == null) {
+   this.reporters = new ArrayList<>();
+
+   final String definedReporters = 
config.getString(ConfigConstants.METRICS_REPORTERS_LIST, null);
+
+   if (definedReporters == null) {
+   // no reporters defined
// by default, don't report anything
LOG.info("No metrics reporter configured, no metrics 
will be exposed/reported.");
-   this.reporter = null;
this.executor = null;
-   }
-   else {
-   MetricReporter reporter;
-   ScheduledExecutorService executor = null;
-   try {
-   String configuredPeriod = 
config.getString(ConfigConstants.METRICS_REPORTER_INTERVAL, null);
-   TimeUnit timeunit = TimeUnit.SECONDS;
-   long period = 10;
-   
-   if (configuredPeriod != null) {
-   try {
-   String[] interval = 
configuredPeriod.split(" ");
-   period = 
Long.parseLong(interval[0]);
-   timeunit = 
TimeUnit.valueOf(interval[1]);
+   } else {
+   // we have some reporters so
+   String[] namedReporters = definedReporters.split(",");
+   for (String namedReporter : namedReporters) {
--- End diff --

Yes, but this will go completely unnoticed by the user, since nothing is being 
logged.
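
A minimal sketch of the kind of guard being suggested here — `ReporterNameParser` 
is a hypothetical stand-in, not code from the PR:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.ArrayList;
import java.util.List;

class ReporterNameParser {
	private static final Logger LOG = LoggerFactory.getLogger(ReporterNameParser.class);

	// Split the 'metrics.reporters' value and warn on blank entries so a
	// misconfiguration does not go unnoticed.
	static List<String> parse(String definedReporters) {
		List<String> names = new ArrayList<>();
		for (String name : definedReporters.split("\\s*,\\s*")) {
			if (name.isEmpty()) {
				LOG.warn("Ignoring empty entry in 'metrics.reporters': '{}'", definedReporters);
			} else {
				names.add(name);
			}
		}
		return names;
	}
}
```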


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2285: [FLINK-4246] Allow Specifying Multiple Metrics Rep...

2016-07-22 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71898033
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/metrics/MetricRegistry.java ---
@@ -75,77 +78,92 @@ public MetricRegistry(Configuration config) {
this.delimiter = delim;
 
// second, instantiate any custom configured reporters
-   
-   final String className = 
config.getString(ConfigConstants.METRICS_REPORTER_CLASS, null);
-   if (className == null) {
+   this.reporters = new ArrayList<>();
+
+   final String definedReporters = 
config.getString(ConfigConstants.METRICS_REPORTERS_LIST, null);
+
+   if (definedReporters == null) {
+   // no reporters defined
// by default, don't report anything
LOG.info("No metrics reporter configured, no metrics 
will be exposed/reported.");
-   this.reporter = null;
this.executor = null;
-   }
-   else {
-   MetricReporter reporter;
-   ScheduledExecutorService executor = null;
-   try {
-   String configuredPeriod = 
config.getString(ConfigConstants.METRICS_REPORTER_INTERVAL, null);
-   TimeUnit timeunit = TimeUnit.SECONDS;
-   long period = 10;
-   
-   if (configuredPeriod != null) {
-   try {
-   String[] interval = 
configuredPeriod.split(" ");
-   period = 
Long.parseLong(interval[0]);
-   timeunit = 
TimeUnit.valueOf(interval[1]);
+   } else {
+   // we have some reporters so
+   String[] namedReporters = definedReporters.split(",");
+   for (String namedReporter : namedReporters) {
--- End diff --

Yes, but this will go completely unnoticed by the user, since nothing is being 
logged.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389679#comment-15389679
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/2285
  
Thanks for the thorough review @zentol! I'm addressing the comments in a 
new commit.

About the move to `MetricConfig`, I think the Configuration could be 
extended for that case to move all parameters with a certain prefix.
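
A rough sketch of what such an extension could look like; `extractPrefixed` is 
hypothetical and not an existing `Configuration` method:

```java
import java.util.Properties;

import org.apache.flink.configuration.Configuration;

public final class ConfigUtils {
	// Hypothetical helper: copy every parameter under 'prefix' into a
	// Properties object (the shape a MetricConfig could consume), with
	// the prefix stripped from the keys.
	public static Properties extractPrefixed(Configuration config, String prefix) {
		Properties props = new Properties();
		for (String key : config.keySet()) {
			if (key.startsWith(prefix)) {
				props.setProperty(key.substring(prefix.length()), config.getString(key, null));
			}
		}
		return props;
	}
}
```

A call like `extractPrefixed(config, "metrics.reporter.foo.")` would then yield 
the `class`, `port`, etc. entries for the reporter named `foo`.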


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2285: [FLINK-4246] Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/2285
  
Thanks for the thorough review @zentol! I'm addressing the comments in a 
new commit.

About the move to `MetricConfig`, I think the Configuration could be 
extended for that case to move all parameters with a certain prefix.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4203) Improve Table API documentation

2016-07-22 Thread Jark Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389674#comment-15389674
 ] 

Jark Wu commented on FLINK-4203:


We should also improve [Table API Operators 
section|https://ci.apache.org/projects/flink/flink-docs-master/apis/table.html#table-api-operators].
 Add an additional column to describe whether it is supported in Batch or 
Streaming or Both.  

In addition, maybe we should add a new section describing the SQL syntax we 
support, similar to the Table API Operators section.

> Improve Table API documentation
> ---
>
> Key: FLINK-4203
> URL: https://issues.apache.org/jira/browse/FLINK-4203
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Some ideas:
> - Add a list of all supported scalar functions and a description
> - Add a more advanced example
> - Describe supported data types



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389671#comment-15389671
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71896724
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/metrics/MetricRegistry.java ---
@@ -75,77 +78,92 @@ public MetricRegistry(Configuration config) {
this.delimiter = delim;
 
// second, instantiate any custom configured reporters
-   
-   final String className = 
config.getString(ConfigConstants.METRICS_REPORTER_CLASS, null);
-   if (className == null) {
+   this.reporters = new ArrayList<>();
+
+   final String definedReporters = 
config.getString(ConfigConstants.METRICS_REPORTERS_LIST, null);
+
+   if (definedReporters == null) {
+   // no reporters defined
// by default, don't report anything
LOG.info("No metrics reporter configured, no metrics 
will be exposed/reported.");
-   this.reporter = null;
this.executor = null;
-   }
-   else {
-   MetricReporter reporter;
-   ScheduledExecutorService executor = null;
-   try {
-   String configuredPeriod = 
config.getString(ConfigConstants.METRICS_REPORTER_INTERVAL, null);
-   TimeUnit timeunit = TimeUnit.SECONDS;
-   long period = 10;
-   
-   if (configuredPeriod != null) {
-   try {
-   String[] interval = 
configuredPeriod.split(" ");
-   period = 
Long.parseLong(interval[0]);
-   timeunit = 
TimeUnit.valueOf(interval[1]);
+   } else {
+   // we have some reporters so
+   String[] namedReporters = definedReporters.split(",");
+   for (String namedReporter : namedReporters) {
--- End diff --

In that case the array should be empty and we should not enter the loop, 
right?
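
As an aside, the edge cases of Java's `String.split` are worth spelling out 
here, since they decide whether the loop runs (a standalone illustration, 
independent of the PR):

```java
public class SplitDemo {
	public static void main(String[] args) {
		System.out.println("".split(",").length);          // 1 -- a single empty string
		System.out.println(",".split(",").length);         // 0 -- trailing empty results are dropped
		System.out.println("foo,,bar".split(",").length);  // 3 -- an embedded empty entry survives
	}
}
```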


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2285: [FLINK-4246] Allow Specifying Multiple Metrics Rep...

2016-07-22 Thread aljoscha
Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71896724
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/metrics/MetricRegistry.java ---
@@ -75,77 +78,92 @@ public MetricRegistry(Configuration config) {
this.delimiter = delim;
 
// second, instantiate any custom configured reporters
-   
-   final String className = 
config.getString(ConfigConstants.METRICS_REPORTER_CLASS, null);
-   if (className == null) {
+   this.reporters = new ArrayList<>();
+
+   final String definedReporters = 
config.getString(ConfigConstants.METRICS_REPORTERS_LIST, null);
+
+   if (definedReporters == null) {
+   // no reporters defined
// by default, don't report anything
LOG.info("No metrics reporter configured, no metrics 
will be exposed/reported.");
-   this.reporter = null;
this.executor = null;
-   }
-   else {
-   MetricReporter reporter;
-   ScheduledExecutorService executor = null;
-   try {
-   String configuredPeriod = 
config.getString(ConfigConstants.METRICS_REPORTER_INTERVAL, null);
-   TimeUnit timeunit = TimeUnit.SECONDS;
-   long period = 10;
-   
-   if (configuredPeriod != null) {
-   try {
-   String[] interval = 
configuredPeriod.split(" ");
-   period = 
Long.parseLong(interval[0]);
-   timeunit = 
TimeUnit.valueOf(interval[1]);
+   } else {
+   // we have some reporters so
+   String[] namedReporters = definedReporters.split(",");
+   for (String namedReporter : namedReporters) {
--- End diff --

In that case the array should be empty and we should not enter the loop, 
right?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4192) Move Metrics API to separate module

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389641#comment-15389641
 ] 

ASF GitHub Bot commented on FLINK-4192:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2226
  
Actually, we do not even have to move the JobID, if the runtime-specific 
parts (the component metric groups) are in `flink-runtime`. That part, I 
believe, we should do anyways.

It certainly is nice to have a complete "self-contained" metrics project 
with everything. That way, people can actually build their own metrics tooling 
using some of the implementation classes, or they can set up self-contained 
tests for reporters (without having flink-core as a test dependency). If it 
were not for the `NetUtils`, I would suggest going for that. The 
`Preconditions` are used only for `checkNotNull`, which one can do via 
`java.util.Objects.requireNonNull` as well.

On the other side of the argument are the `NetUtils` (one utility function 
for port ranges) and making the MetricRegistry use MetricConfig in all places.
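
The `checkNotNull` replacement mentioned above is a one-liner; a minimal 
illustration:

```java
import java.util.Objects;

public class NullCheckDemo {
	private final Object reporter;

	// java.util.Objects.requireNonNull covers the checkNotNull use case
	// without pulling in flink-core's Preconditions.
	public NullCheckDemo(Object reporter) {
		this.reporter = Objects.requireNonNull(reporter, "reporter must not be null");
	}
}
```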


> Move Metrics API to separate module
> ---
>
> Key: FLINK-4192
> URL: https://issues.apache.org/jira/browse/FLINK-4192
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.1.0
>
>
> All metrics code currently resides in flink-core. If a user implements a 
> reporter and wants a fat jar it will now have to include the entire 
> flink-core module.
> Instead, we could move several interfaces into a separate module.
> These interfaces to move include:
> * Counter, Gauge, Histogram(Statistics)
> * MetricGroup
> * MetricReporter, Scheduled, AbstractReporter
> In addition a new MetricRegistry interface will be required as well as a 
> replacement for the Configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2226: [FLINK-4192] - Move Metrics API to separate module

2016-07-22 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2226
  
Actually, we do not even have to move the JobID, if the runtime-specific 
parts (the component metric groups) are in `flink-runtime`. That part, I 
believe, we should do anyways.

It certainly is nice to have a complete "self-contained" metrics project 
with everything. That way, people can actually build their own metrics tooling 
using some of the implementation classes, or they can set up self-contained 
tests for reporters (without having flink-core as a test dependency). If it 
were not for the `NetUtils`, I would suggest going for that. The 
`Preconditions` are used only for `checkNotNull`, which one can do via 
`java.util.Objects.requireNonNull` as well.

On the other side of the argument are the `NetUtils` (one utility function 
for port ranges) and making the MetricRegistry use MetricConfig in all places.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389534#comment-15389534
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2285#discussion_r71882146
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java ---
@@ -651,14 +651,38 @@
 
//  Metrics 
---
 
+   // Per reporter:
--- End diff --

I think this comment and the "For all reporters" one don't stand out enough to 
be included. My first impression was that these were leftovers.


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4246) Allow Specifying Multiple Metrics Reporters

2016-07-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389588#comment-15389588
 ] 

ASF GitHub Bot commented on FLINK-4246:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2285
  
The usage of the `DelegatingConfiguration` poses a few problems in regard 
to #2226. After that PR, reporters will no longer work on a `Configuration` but 
instead on a `MetricConfig` object that extends `Properties`. For this to work, 
certain keys have to be extracted from the `Configuration` and passed into the 
`MetricConfig`. Naturally, you must know the names of all required keys. This 
doesn't work for arbitrary user-defined settings, though. We would have to 
revert to a `metrics.reporter.<name>.arguments` property.


> Allow Specifying Multiple Metrics Reporters
> ---
>
> Key: FLINK-4246
> URL: https://issues.apache.org/jira/browse/FLINK-4246
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.1.0
>
>
> We should allow specifying multiple reporters. A rough sketch of how the 
> configuration should look like is this:
> {code}
> metrics.reporters = foo,bar
> metrics.reporter.foo.class = JMXReporter.class
> metrics.reporter.foo.port = 42-117
> metrics.reporter.bar.class = GangliaReporter.class
> metrics.reporter.bar.port = 512
> metrics.reporter.bar.whatever = 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

