[GitHub] flink issue #4310: [misc] Commit read offsets in Kafka integration tests

2017-07-24 Thread pnowojski
Github user pnowojski commented on the issue: https://github.com/apache/flink/pull/4310 Thanks!

[GitHub] flink issue #4310: [misc] Commit read offsets in Kafka integration tests

2017-07-19 Thread tzulitai
Github user tzulitai commented on the issue: https://github.com/apache/flink/pull/4310 Ok :) LGTM, merging ..

[GitHub] flink issue #4310: [misc] Commit read offsets in Kafka integration tests

2017-07-19 Thread pnowojski
Github user pnowojski commented on the issue: https://github.com/apache/flink/pull/4310 Yes, Travis passes @tzulitai :)

[GitHub] flink issue #4310: [misc] Commit read offsets in Kafka integration tests

2017-07-19 Thread pnowojski
Github user pnowojski commented on the issue: https://github.com/apache/flink/pull/4310 I didn't know that you can have a disconnected graph in Flink :) It shouldn't be caused by this commit, since it is included in my other PR. Rebased; let's make sure that it passes.

[GitHub] flink issue #4310: [misc] Commit read offsets in Kafka integration tests

2017-07-19 Thread tzulitai
Github user tzulitai commented on the issue: https://github.com/apache/flink/pull/4310 Travis seems to have a large number of abnormal timeouts, though. I'm not sure if it is really related to this change or not. Could you do a rebase on the latest master so that the recent changes are picked up?

[GitHub] flink issue #4310: [misc] Commit read offsets in Kafka integration tests

2017-07-19 Thread tzulitai
Github user tzulitai commented on the issue: https://github.com/apache/flink/pull/4310 @pnowojski alright, that makes sense. You don't actually need a separate Flink job, because you can just add a completely non-attached graph that consumes from the topic within the same job.
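
For illustration, a minimal sketch of such a non-attached validation graph, assuming the Kafka 0.10 connector of that era and Flink's test-utility `SuccessException` (package paths differ between Flink versions); the class and parameter names here are made up, not taken from the PR:

```java
import java.util.Properties;

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;
import org.apache.flink.test.util.SuccessException;

public class SameJobValidationSketch {

    /** Adds a validating subgraph that is not connected to the pipeline under test. */
    public static void addReadBackValidation(StreamExecutionEnvironment env,
                                             String topic,
                                             Properties kafkaProps,
                                             int expectedCount) {
        // The producing pipeline under test is assumed to have been added to `env`
        // already. This source/map chain is disconnected from it: it only reads the
        // topic back and ends the job once everything expected has been seen.
        env.addSource(new FlinkKafkaConsumer010<>(topic, new SimpleStringSchema(), kafkaProps))
           .map(new MapFunction<String, String>() {
               private int seen;

               @Override
               public String map(String value) throws Exception {
                   if (++seen >= expectedCount) {
                       // With parallelism 1, this fails the whole job; the test
                       // harness interprets SuccessException as a successful run.
                       throw new SuccessException();
                   }
                   return value;
               }
           })
           .setParallelism(1);
    }
}
```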

[GitHub] flink issue #4310: [misc] Commit read offsets in Kafka integration tests

2017-07-19 Thread pnowojski
Github user pnowojski commented on the issue: https://github.com/apache/flink/pull/4310 For consumer-side or mapper-side tests it is natural to use that kind of validating mapper, because you can just add it at the end of your pipeline. For producer tests it isn't, because you first have to read the produced records back from the topic, which would need a separate consuming job.

[GitHub] flink issue #4310: [misc] Commit read offsets in Kafka integration tests

2017-07-19 Thread tzulitai
Github user tzulitai commented on the issue: https://github.com/apache/flink/pull/4310 I would like to revisit tests that use this method by first discussing: wouldn't it be more appropriate to have a validating mapper function that throws a `SuccessException` once it sees all the expected records?
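
Roughly, the kind of mapper being suggested might look like the following sketch (the same pattern as the inline map in the sketch further up), assuming Flink's test-utility `SuccessException` and parallelism 1; the class name and constructor are illustrative:

```java
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.test.util.SuccessException;

/**
 * Counts the records passing through and fails the job with a SuccessException
 * once the expected number has been seen; the test harness treats that
 * exception as a successful run. Assumes the mapper runs with parallelism 1.
 */
public class ValidatingMapper<T> implements MapFunction<T, T> {

    private final int expectedCount;
    private int seen;

    public ValidatingMapper(int expectedCount) {
        this.expectedCount = expectedCount;
    }

    @Override
    public T map(T value) throws Exception {
        if (++seen >= expectedCount) {
            throw new SuccessException();
        }
        return value;
    }
}
```

It would simply be appended to the pipeline under test, e.g. `stream.map(new ValidatingMapper<>(expectedCount)).setParallelism(1)`.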

[GitHub] flink issue #4310: [misc] Commit read offsets in Kafka integration tests

2017-07-12 Thread tzulitai
Github user tzulitai commented on the issue: https://github.com/apache/flink/pull/4310 Maybe some method Javadoc explaining that would be nice. From the method name `getAllRecordsFromTopic`, the behaviour isn't that obvious.
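
One possible shape for such a Javadoc, with an interface and signature assumed only for illustration (not taken from the PR):

```java
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;

public interface KafkaTestReader {

    /**
     * Reads all records currently available in the given topic partition and
     * commits the offsets of everything that was read. Because the offsets are
     * committed, a later call with the same consumer group will not return the
     * records that were already consumed here.
     */
    List<ConsumerRecord<byte[], byte[]>> getAllRecordsFromTopic(
            Properties properties, String topic, int partition, long timeout);
}
```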

[GitHub] flink issue #4310: [misc] Commit read offsets in Kafka integration tests

2017-07-12 Thread tzulitai
Github user tzulitai commented on the issue: https://github.com/apache/flink/pull/4310 @pnowojski I was only checking that we're actually not reading the same topic again in the tests. So, the change in this PR is just to make sure that, in case we do do that using the same utility, the read offsets are committed and the already consumed records are not returned again.
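
A minimal, self-contained sketch of that behaviour using the plain Kafka consumer API (not the PR's actual code); the helper class name and timeouts are illustrative, and the passed-in properties are assumed to contain bootstrap servers, a group.id, byte-array deserializers, and `enable.auto.commit=false`:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CommittingTopicReader {

    /** Reads everything currently in the topic and commits the offsets that were read. */
    public static List<ConsumerRecord<byte[], byte[]>> readAndCommit(
            Properties props, String topic, long pollTimeoutMs) {

        List<ConsumerRecord<byte[], byte[]>> result = new ArrayList<>();

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList(topic));

            // Poll until nothing new arrives within the timeout, i.e. we have
            // (most likely) caught up with the current end of the topic.
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(pollTimeoutMs);
                if (records.isEmpty()) {
                    break;
                }
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    result.add(record);
                }
            }

            // Commit the offsets of everything read above, so that the next reader
            // using the same group.id does not see these records again.
            consumer.commitSync();
        }

        return result;
    }
}
```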

[GitHub] flink issue #4310: [misc] Commit read offsets in Kafka integration tests

2017-07-12 Thread pnowojski
Github user pnowojski commented on the issue: https://github.com/apache/flink/pull/4310 Yes, and as far as I know, we are doing this. Why do you ask?

[GitHub] flink issue #4310: [misc] Commit read offsets in Kafka integration tests

2017-07-12 Thread tzulitai
Github user tzulitai commented on the issue: https://github.com/apache/flink/pull/4310 Shouldn't we actually be deleting the topics after the test finishes?
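
For reference, a generic sketch of deleting a test topic with the Kafka AdminClient API (available from Kafka 0.11 onwards); the Flink Kafka tests typically manage topics through their own test environment helpers, so this only illustrates the general idea:

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class TopicCleanupSketch {

    public static void deleteTopic(String bootstrapServers, String topic) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);

        try (AdminClient admin = AdminClient.create(props)) {
            // Block until the broker has acknowledged the deletion request.
            admin.deleteTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```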