[GitHub] flink pull request #4239: [FLINK-6988] flink-connector-kafka-0.11 with exact...

2017-08-23 Thread rangadi
Github user rangadi commented on a diff in the pull request: https://github.com/apache/flink/pull/4239#discussion_r134838729 --- Diff: flink-connectors/flink-connector-kafka-0.11/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer011.java ---

[GitHub] flink pull request #4239: [FLINK-6988] flink-connector-kafka-0.11 with exact...

2017-08-18 Thread rangadi
Github user rangadi commented on a diff in the pull request: https://github.com/apache/flink/pull/4239#discussion_r134044031 --- Diff: flink-connectors/flink-connector-kafka-0.11/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer011.java ---

[GitHub] flink issue #4239: [FLINK-6988] flink-connector-kafka-0.11 with exactly-once...

2017-08-10 Thread rangadi
Github user rangadi commented on the issue: https://github.com/apache/flink/pull/4239 Yep, that makes sense.

[GitHub] flink issue #4239: [FLINK-6988] flink-connector-kafka-0.11 with exactly-once...

2017-08-10 Thread rangadi
Github user rangadi commented on the issue: https://github.com/apache/flink/pull/4239 I guess you could store the transactional.id for the _next_ transaction in committed state. That way the new task starts the new transaction with the id stored in state, which automatically aborts the previous unfinished transaction that used the same id.
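
For illustration, a minimal sketch of that recovery path, assuming a hypothetical id naming scheme and that the id has already been read back from checkpointed state (this is not the actual FlinkKafkaProducer011 code):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class TransactionalIdRecovery {
    /**
     * Rebuilds a producer after restore, using the transactional.id that was
     * stored in committed state for the *next* transaction.
     */
    public static KafkaProducer<byte[], byte[]> restore(String bootstrapServers,
                                                        String nextTransactionalId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("transactional.id", nextTransactionalId); // id read from state (assumed)

        KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
        // initTransactions() bumps the epoch for this transactional.id, fencing
        // the pre-failure producer and aborting its unfinished transaction.
        producer.initTransactions();
        return producer;
    }
}
```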

[GitHub] flink issue #4239: [FLINK-6988] flink-connector-kafka-0.11 with exactly-once...

2017-08-09 Thread rangadi
Github user rangadi commented on the issue: https://github.com/apache/flink/pull/4239 > Hmmm, are you sure about this thing? That would mean that Kafka doesn't support transactional parallel writes from two different processes, which would be very strange. Could you point to ...
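
For reference, the usual way Kafka supports parallel transactional writes from separate processes is to give each writer its own transactional.id; a sketch follows (the broker address and id scheme are made up):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ParallelTransactionalWrite {
    public static void main(String[] args) {
        // Run one instance per process; the index makes the id unique.
        int subtaskIndex = Integer.parseInt(args[0]);

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        // Distinct ids let transactions from different processes run in
        // parallel; sharing one id would make the producers fence each other.
        props.put("transactional.id", "sink-" + subtaskIndex); // hypothetical scheme

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("out-topic", "hello".getBytes()));
            producer.commitTransaction();
        }
    }
}
```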

[GitHub] flink issue #4239: [FLINK-6988] flink-connector-kafka-0.11 with exactly-once...

2017-08-09 Thread rangadi
Github user rangadi commented on the issue: https://github.com/apache/flink/pull/4239 Maybe an extra shuffle to make small batches could help. Another option is to buffer all the records in state and write them all inside commit(). But I'm not sure how costly it would be to save all the records in state.
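
A rough sketch of that buffer-then-commit idea (the in-memory List stands in for Flink checkpointed state; this is not the connector's actual design):

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BufferedCommitSink {
    // Placeholder for records held in checkpointed state between checkpoints.
    private final List<ProducerRecord<byte[], byte[]>> buffer = new ArrayList<>();
    private final KafkaProducer<byte[], byte[]> producer;

    public BufferedCommitSink(KafkaProducer<byte[], byte[]> producer) {
        this.producer = producer; // assumed transactional, initTransactions() already called
    }

    public void invoke(ProducerRecord<byte[], byte[]> record) {
        buffer.add(record); // nothing reaches Kafka until commit()
    }

    public void commit() {
        // The transaction only spans the commit call itself, so the gap
        // between checkpoints no longer counts against the Kafka timeout.
        producer.beginTransaction();
        for (ProducerRecord<byte[], byte[]> record : buffer) {
            producer.send(record);
        }
        producer.commitTransaction();
        buffer.clear();
    }
}
```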

[GitHub] flink issue #4239: [FLINK-6988] flink-connector-kafka-0.11 with exactly-once...

2017-08-08 Thread rangadi
Github user rangadi commented on the issue: https://github.com/apache/flink/pull/4239 How does the exactly-once sink handle a large gap between `preCommit()` and `recoverAndCommit()` in case of a recovery? The server seems to abort a transaction after a timeout, capped by the broker's `transaction.max.timeout.ms`.
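
One mitigation, sketched below, is raising the producer-side `transaction.timeout.ms`; the broker caps it with `transaction.max.timeout.ms`, and the one-hour value here is an arbitrary example, not a recommendation from this thread:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class LongTransactionTimeout {
    public static KafkaProducer<byte[], byte[]> create() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("transactional.id", "sink-0"); // hypothetical id
        // Must stay <= the broker's transaction.max.timeout.ms, otherwise the
        // producer fails during initTransactions().
        props.put("transaction.timeout.ms", String.valueOf(60 * 60 * 1000)); // 1h example

        KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
        producer.initTransactions();
        return producer;
    }
}
```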