Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4187#discussion_r130058480
--- Diff:
flink-connectors/flink-connector-kafka-0.8/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/Kafka08Fetcher.java
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4187#discussion_r130058404
--- Diff:
flink-connectors/flink-connector-kafka-0.8/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/Kafka08Fetcher.java
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4187#discussion_r130058523
--- Diff:
flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/KafkaConsumerThread.java
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4414
Thanks for clarifying!
+1 to merge this to `master` and `release-1.3`.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4414
Thanks for the fix and for your note on the instabilities @NicoK!
Could you check whether my understanding of the issue is correct?:
- `Kafka010ITCase.testCommitOffsetsToKafka` was failing
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4408
Hi @Ashwin4590, this seems to have been opened by accident?
Could you close the PR? Thanks.
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4368#discussion_r129234261
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,342
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4368#discussion_r129234765
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,342
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4368#discussion_r129235825
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,342
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4368#discussion_r129235807
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,342
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4368#discussion_r129234639
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,342
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4368#discussion_r129234789
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,342
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4368#discussion_r129235367
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,342
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4368#discussion_r129234336
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,342
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4368#discussion_r129233893
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,342
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4368#discussion_r129234426
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,342
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4368#discussion_r129233968
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,342
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4368#discussion_r129217538
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,342
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4389
Nice catch! +1, LGTM.
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4390
+1, LGTM
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4150
From the looks of https://github.com/druid-io/druid/issues/4456, could it
be that we need to update our AWS Java SDK version?
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4150
@bowenli86 but isn't the shading of httpcomponents in the Kinesis consumer
supposed to avoid conflicts with whatever version you're using for
S3AFileSystem?
Github user tzulitai closed the pull request at:
https://github.com/apache/flink/pull/4386
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4386
Thanks for the review @pnowojski!
I'm also using this branch to collect some final backports for
`release-1.3`. Will merge once Travis is green.
GitHub user tzulitai opened a pull request:
https://github.com/apache/flink/pull/4387
[FLINK-7143] [kafka] Forward ports of new Kafka tests to master
This PR forward-ports all new tests added in #4357 to `master`, so that the
behavior is correctly guarded there as well
GitHub user tzulitai opened a pull request:
https://github.com/apache/flink/pull/4386
(backport-1.3) [FLINK-7174] [kafka] Bump Kafka 0.10 dependency to 0.10.2.1
Backport of #4321 to `release-1.3`, with the following things being
different:
1. No need to touch
Github user tzulitai closed the pull request at:
https://github.com/apache/flink/pull/4363
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4375
Thanks for the PR!
Since this was already thoroughly discussed, this LGTM.
Merging ..
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/3911
I think this LGTM now.
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4301
This was merged via #4357. Closing ..
Github user tzulitai closed the pull request at:
https://github.com/apache/flink/pull/4301
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4302
Merging :)
Github user tzulitai closed the pull request at:
https://github.com/apache/flink/pull/4344
Github user tzulitai closed the pull request at:
https://github.com/apache/flink/pull/4357
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4363
Merging ..
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4357
@aljoscha sure, merging this now :)
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4357#discussion_r128755399
--- Diff:
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java
---
@@ -517,16
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4321
Looks good now, +1 on my side.
Let's also wait a bit for @StephanEwen to see if he has any more comments
regarding the use of an extra `hasAssignedPartitions` field (since he commented
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4149
@zhenzhongxu at the very least, I would expose these new values as
protected methods and keep the fields private. Then, have specific tests
that verify the values are correct
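The pattern suggested above (keep the fields private, expose them through protected accessors so that tests can verify them) might be sketched like this; the class and method names here are hypothetical illustrations, not taken from the actual PR:

```java
// Hypothetical sketch of the suggested pattern: metric-like fields stay
// private, and tests read them through protected getters (e.g. via a
// subclass or same-package test harness) instead of touching the fields.
public class ExampleConsumerMetrics {

    // Private so other code cannot mutate the value directly.
    private long recordsEmitted;

    void onRecordEmitted() {
        recordsEmitted++;
    }

    // Protected accessor: visible to subclasses and same-package tests,
    // but not part of the public API surface of the class.
    protected long getRecordsEmitted() {
        return recordsEmitted;
    }
}
```

The benefit is that the accessor is exercised by tests, so it cannot silently be removed as "unused" the way a bare field could.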
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4239#discussion_r128428437
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,317
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4239#discussion_r128428361
--- Diff:
flink-connectors/flink-connector-kafka-0.11/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumer011.java
---
@@ -0,0
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4357
@aljoscha do you have any last comments on these changes?
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4310
Ok :) LGTM, merging ..
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4015
I think the version bump is needed because there is broken API
behaviour since moving to 0.10.2.1. See #4321 (I think there's some duplicate
work here and there).
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4321#discussion_r128178914
--- Diff:
flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/KafkaConsumerThread.java
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4321#discussion_r128180555
--- Diff:
flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/KafkaConsumerThread.java
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4321#discussion_r128180697
--- Diff:
flink-connectors/flink-connector-kafka-0.9/src/test/java/org/apache/flink/streaming/connectors/kafka/internal/KafkaConsumerThreadTest.java
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4321#discussion_r128179835
--- Diff:
flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/KafkaConsumerThread.java
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4321#discussion_r128181037
--- Diff:
flink-connectors/flink-connector-kafka-0.9/src/test/java/org/apache/flink/streaming/connectors/kafka/internal/KafkaConsumerThreadTest.java
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4239
Ok :) I can agree that we keep 321a142 as a separate commit.
For df6d5e0 to 5ff8106, I actually found it easier to ignore all of that
(because a lot of it is irrelevant in the end) and went straight
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/3915
@zjureel just so you know, I'm currently a bit busy with other critical
bugs in the Kafka consumer.
Please bear with me a little bit more ..
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4310
Travis seems to have a large number of abnormal timeouts, though. I'm not
sure whether they are really related to this change or not. Could you do a rebase
on the latest master so that the recent
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4310
@pnowojski alright, that makes sense.
You don't actually need a separate Flink job because you can just add a
completely non-attached graph that consumes from the topic within the same job
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4239
One other comment regarding the commits:
I would argue that df6d5e0 to 5ff8106 should not appear in the commit log
history, since in the end we actually have a completely new producer for 011
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4239
Regarding how I would proceed with this big contribution:
Let's first try to clean up the commits that are bundled all together here.
1. I would first try to merge #4321 (the first 4
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4239
Thanks a lot for opening a pull request for this very important feature
@pnowojski.
I did a rough first pass and had some comments I would like to clear up
first (this is a big chunk of code
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4239#discussion_r128164632
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,317
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4239#discussion_r128166188
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,317
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4239#discussion_r128164162
--- Diff:
flink-connectors/flink-connector-kafka-0.11/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer011.java
---
@@ -0,0
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4239#discussion_r128163984
--- Diff:
flink-connectors/flink-connector-kafka-0.11/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer011.java
---
@@ -0,0
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4239#discussion_r128167617
--- Diff:
flink-connectors/flink-connector-kafka-0.11/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer011.java
---
@@ -0,0
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4239#discussion_r128166034
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,317
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4239#discussion_r128164127
--- Diff:
flink-connectors/flink-connector-kafka-0.11/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer011.java
---
@@ -0,0
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4239#discussion_r128166822
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,317
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4239#discussion_r128163374
--- Diff:
flink-connectors/flink-connector-kafka-0.11/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumer011.java
---
@@ -0,0
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4239#discussion_r128168357
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java
---
@@ -0,0 +1,317
GitHub user tzulitai opened a pull request:
https://github.com/apache/flink/pull/4363
[FLINK-7224] [kafka, docs] Fix incorrect Javadoc / docs regarding Kafka
partition metadata querying
Since Flink 1.3, partition metadata is no longer queried on the client
side. This commit
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4310
I would like to revisit tests that use this method by first discussing:
wouldn't it be more appropriate to have a validating mapper function that
throws a `SuccessException` once it sees all
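The validating-mapper approach suggested above (throw a marker exception once every expected record has been seen, so the test job can terminate) could be sketched, independently of Flink's actual test utilities, as follows; all names here are illustrative:

```java
import java.util.HashSet;
import java.util.Set;

// Generic sketch of a "validating mapper": it passes records through while
// tracking which expected values it has seen, and signals completion by
// throwing a marker exception once everything has arrived. In Flink-style
// tests, such an exception is typically caught to end the job successfully.
public class ValidatingMapperSketch {

    static class SuccessException extends RuntimeException {}

    private final Set<Integer> expected;
    private final Set<Integer> seen = new HashSet<>();

    ValidatingMapperSketch(Set<Integer> expected) {
        this.expected = expected;
    }

    Integer map(Integer value) {
        if (expected.contains(value)) {
            seen.add(value);
        }
        if (seen.equals(expected)) {
            // All expected records observed: signal the test to stop.
            throw new SuccessException();
        }
        return value;
    }
}
```

The design point is that validation lives inside the pipeline itself, so the test finishes as soon as the condition holds rather than waiting for a fixed record count.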
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4321
One other question: I need a bit more context on why the version bump
requires that change in the `KafkaConsumerThread`. From what I can tell, that
should be a separate issue to fix hot looping
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4321
@pnowojski we can't just drop that test, IMO. It's crucial that those tests
exist to guard against incorrect reassignment logic in the
`KafkaConsumerThread`. Breaking that would mess up
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4361
Thanks for the pull request @zjureel.
While you're on this, could you also fix this for Kafka 0.8 and 0.9?
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4357#discussion_r128147490
--- Diff:
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java
---
@@ -517,16
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4357#discussion_r128147542
--- Diff:
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java
---
@@ -517,16
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4357#discussion_r128147294
--- Diff:
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java
---
@@ -517,16
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4344#discussion_r127951191
--- Diff:
flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBaseTest.java
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4357
R: @aljoscha
GitHub user tzulitai opened a pull request:
https://github.com/apache/flink/pull/4357
(release-1.3) [FLINK-7143, FLINK-7195] Collection of Kafka fixes for
release-1.3
This PR subsumes #4344 and #4301, with the changes in both PRs merged and
conflicts resolved.
Apparently
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4344#discussion_r127935505
--- Diff:
flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBaseTest.java
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4344
@aljoscha the issue is that there may be missing partitions when querying
partitions from Kafka (e.g., if some brokers are down).
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4344
For this change, I think we also need to verify how the consumers behave
when some restored partition is no longer reachable (since previously,
no-longer-reachable partitions would be filtered out
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4149
+1 agree with @aljoscha here.
The main problem with this change is that there are no usages of it in other
parts of the codebase, and it can very easily be removed "accidentally" in t
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4187#discussion_r127636481
--- Diff:
flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/KafkaConsumerThread.java
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4187#discussion_r127636725
--- Diff:
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java
---
@@ -505,6
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4187#discussion_r127636859
--- Diff:
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/KafkaCommitCallback.java
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4187#discussion_r127636523
--- Diff:
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java
---
@@ -185,6
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4301
@aljoscha on second thought, I don't think we can easily deal with
the fact that, when restoring from 1.3.1 / 1.3.0 savepoints in 1.3.2, users
will not benefit from this bug fix
GitHub user tzulitai opened a pull request:
https://github.com/apache/flink/pull/4344
(release-1.3) [FLINK-7195] [kafka] Remove partition list querying when
restoring state in FlinkKafkaConsumer
This issue is a re-appearance of FLINK-6006. On restore, we should not
respect any
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4150
Thanks! Just to double-check: have you already verified on your side that
this works when you're using the connector with S3 (which caused the issue for
you before)?
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4150#discussion_r127135589
--- Diff: flink-connectors/flink-connector-kinesis/pom.xml ---
@@ -56,6 +58,18 @@ under the License
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4310
Maybe some method Javadoc explaining that would be nice.
From the method name `getAllRecordsFromTopic`, the behaviour isn't that
obvious.
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4310
@pnowojski only checking that we're actually not reading the same topic
again in the tests.
So, the change in this PR is just to make sure that in the case we do do
that using
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4301
@StephanEwen
Regarding no-rediscover on restore test:
yes, you could say that it is covered in
`KafkaConsumerTestBase.runMultipleSourcesOnePartitionExactlyOnceTest()`. It's
an end-to-end
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4301
This would then mean we discourage restoring from a 1.3.x savepoint,
because the state is potentially incorrect.
I wonder if we then actually want to always fetch partitions on startup
(fresh
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4301
Yes, that is true. This assignment logic is only applied on fresh starts.
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4310
Shouldn't we actually be deleting the topics after the test finishes?
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/4301#discussion_r126962857
--- Diff:
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/KafkaTopicPartitionAssigner.java
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4301
@StephanEwen thanks for the review. Your suggestion makes a lot of sense.
I've fixed this up as follows:
- Have a new method `KafkaTopicAssigner.assign(KafkaTopicPartition
GitHub user tzulitai opened a pull request:
https://github.com/apache/flink/pull/4302
(master) [FLINK-7143] [kafka] Stricter tests for deterministic partition
assignment
This is the forward port of the fix in #4301. The `master` branch does not
require a fix, because the new
GitHub user tzulitai opened a pull request:
https://github.com/apache/flink/pull/4301
[FLINK-7143] [kafka] Fix indeterminate partition assignment in
FlinkKafkaConsumer
This PR changes the mod operation for partition assignment from `i %
numTasks == subtaskIndex
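The idea behind the deterministic-assignment fix described above (derive the owning subtask from the partition itself rather than from its index in a possibly unordered list) can be sketched roughly as follows; this illustrates the approach of FLINK-7143, not the exact Flink implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of deterministic Kafka partition assignment: instead of relying on
// the index `i` of a partition in a fetched list (whose order may vary across
// subtasks and restarts), compute the owner from the partition id, offset by
// a start index derived from the topic name so different topics spread out.
public class PartitionAssignerSketch {

    static int assign(String topic, int partition, int numSubtasks) {
        // Masking with 0x7FFFFFFF keeps the hash-derived value non-negative.
        int startIndex = ((topic.hashCode() * 31) & 0x7FFFFFFF) % numSubtasks;
        // The same (topic, partition, parallelism) always yields the same
        // subtask, independent of partition discovery order.
        return (startIndex + partition) % numSubtasks;
    }

    static List<Integer> partitionsForSubtask(String topic, int numPartitions,
                                              int numSubtasks, int subtaskIndex) {
        List<Integer> mine = new ArrayList<>();
        for (int p = 0; p < numPartitions; p++) {
            if (assign(topic, p, numSubtasks) == subtaskIndex) {
                mine.add(p);
            }
        }
        return mine;
    }
}
```

Because every subtask evaluates the same pure function, each partition is claimed by exactly one subtask with no coordination needed.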
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4301
R: @aljoscha @rmetzger