Github user srdo commented on the issue:
https://github.com/apache/storm/pull/1679
@apiwoni Your expectation for 1.1.3 is wrong. seekToEnd doesn't seek to the
last committed offset, it seeks to the last offset. One of the fixes made in
this PR is to make doSeekRetriablePartitions
Github user apiwoni commented on the issue:
https://github.com/apache/storm/pull/1679
@srdo I'm not sure why you expect the poll to return records after
seekToEnd? seekToEnd seeks to the end of the log in Kafka, and it doesn't
actually do the seek until poll or position is called on
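The two points being debated above can be illustrated with a toy model. This is not the real KafkaConsumer API, just a minimal sketch of its documented behavior: seekToEnd targets the log-end offset (not the last committed offset), and the seek is evaluated lazily, taking effect only when position() or poll() is next called.

```java
// Toy stand-in for KafkaConsumer, illustrating seekToEnd semantics.
public class LazySeekDemo {
    static class ToyConsumer {
        long committedOffset;   // last offset committed by the consumer group
        long logEndOffset;      // LEO: offset where the next message will be written
        Long pendingSeek;       // recorded by seekToEnd, resolved lazily
        long position;

        ToyConsumer(long committed, long leo) {
            committedOffset = committed;
            logEndOffset = leo;
            position = committed;
        }

        void seekToEnd() {
            // Only records the intent; the position does not move yet.
            pendingSeek = logEndOffset;
        }

        long position() {
            // The pending seek is resolved here (or on poll), as in the real client.
            if (pendingSeek != null) {
                position = pendingSeek;
                pendingSeek = null;
            }
            return position;
        }
    }

    public static void main(String[] args) {
        ToyConsumer c = new ToyConsumer(5, 42);
        c.seekToEnd();
        // Before position()/poll(), the recorded position is unchanged (5)...
        System.out.println("before resolve: " + c.position);
        // ...and resolving lands on the log-end offset (42), not the committed offset (5).
        System.out.println("after resolve: " + c.position());
    }
}
```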
Github user srdo commented on the issue:
https://github.com/apache/storm/pull/1679
@apiwoni Okay, just wanted to know because 1.0.x doesn't handle topic
compaction very well, and I thought it might be related.
Your log looks a lot like this issue. I'm not sure why you expect
Github user apiwoni commented on the issue:
https://github.com/apache/storm/pull/1679
@srdo We are not using topic compaction.
---
Github user srdo commented on the issue:
https://github.com/apache/storm/pull/1679
@apiwoni You may be able to upgrade just the spout if you don't want to do
a full cluster upgrade. I think the core APIs haven't changed since 1.0.0.
Are you using topic compaction in Kafka by
Github user apiwoni commented on the issue:
https://github.com/apache/storm/pull/1679
We are on Apache Storm 1.0.4 and I'm trying to convince people to upgrade
because I suspect we are experiencing this bug.
Here's what I see based on the nextTuple cycle:
LOG: Topic
Github user harshach commented on the issue:
https://github.com/apache/storm/pull/1679
Thanks for the patch @jfenc91 and for being patient with reviews.
Merged into master. I would like to merge this into 1.x-branch as well.
There are a few issues in SingleTopicKafkaSpoutTest.java
Github user hmcl commented on the issue:
https://github.com/apache/storm/pull/1679
+1
@jfenc91 thanks for the extra cleanup.
@revans2 as far as I am concerned this patch is ready to merge. Thanks for
your clarifying comments.
---
If your project is set up for it,
Github user jfenc91 commented on the issue:
https://github.com/apache/storm/pull/1679
@hmcl thanks for the review. I believe I made all the requested changes
here.
Github user hmcl commented on the issue:
https://github.com/apache/storm/pull/1679
+1 overall. It would be ideal to clean up the test code according to my
earlier suggestion. Thanks for the work @jfenc91
Github user hmcl commented on the issue:
https://github.com/apache/storm/pull/1679
@revans2 reviewing this at this moment.
Github user hmcl commented on the issue:
https://github.com/apache/storm/pull/1679
@revans2 apologies for the delay... I started it and then something got in
the way... I will finish by tomorrow for sure. Will do my best to get this merged
in asap...
Github user revans2 commented on the issue:
https://github.com/apache/storm/pull/1679
@hmcl it has been a week, any hope of finishing the review?
Github user revans2 commented on the issue:
https://github.com/apache/storm/pull/1679
@hmcl take your time. This is a critical piece of Storm code and I really
would like to get it right.
Github user hmcl commented on the issue:
https://github.com/apache/storm/pull/1679
@revans2 I will finish my review by lunch time PST
Github user revans2 commented on the issue:
https://github.com/apache/storm/pull/1679
I looked through the code and I didn't see anything that concerned me, and
the Travis test failure appears to be unrelated, so I am +1 pending feedback
from @hmcl
@harshach you also
Github user srdo commented on the issue:
https://github.com/apache/storm/pull/1679
Yes. I think supporting unclean leader election in a nice way is a separate
issue from this.
Github user revans2 commented on the issue:
https://github.com/apache/storm/pull/1679
@srdo @hmcl have all of your concerns been addressed?
Github user jfenc91 commented on the issue:
https://github.com/apache/storm/pull/1679
Hey, trying not to let this thread die. Could we get some help here from a
committer? @HeartSaVioR @revans2 Thanks!!
Github user srdo commented on the issue:
https://github.com/apache/storm/pull/1679
This PR LGTM at this point. I think we should raise a separate issue for
better unclean leader election support; when that gets implemented, the double
acking test should probably be replaced with one
Github user jfenc91 commented on the issue:
https://github.com/apache/storm/pull/1679
I bumped the logging level for unexpected offsets to WARN. Anyway, can
we get this and the other PR merged, @srdo @hmcl? Admittedly there is still work
that needs to be done, but this PR is pretty
Github user srdo commented on the issue:
https://github.com/apache/storm/pull/1679
Thought about it a little more. The spout should probably just check when
it polls messages that the messages are not behind the committedOffset. If they
are, the committedOffset needs to move to the
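One way to read the suggestion above is as a post-poll filter. The sketch below is hypothetical (it is not the actual storm-kafka-client code, and `dropBehindCommitted` is an invented helper): after a poll, discard any records at offsets the spout has already committed, so re-delivered messages are never re-emitted and thus never acked twice.

```java
import java.util.*;
import java.util.stream.*;

public class PolledRecordFilter {
    // Keep only records at or past the committed offset for their partition.
    // Each record is modeled as a {partition, offset} pair for simplicity.
    static List<long[]> dropBehindCommitted(List<long[]> records,
                                            Map<Long, Long> committedOffsets) {
        return records.stream()
                .filter(r -> r[1] >= committedOffsets.getOrDefault(r[0], 0L))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Partition 0 is committed up to offset 10; a replayed batch
        // contains offset 8, which must be dropped, not re-emitted.
        Map<Long, Long> committed = Map.of(0L, 10L);
        List<long[]> polled = List.of(
                new long[]{0, 8}, new long[]{0, 10}, new long[]{0, 11});
        for (long[] r : dropBehindCommitted(polled, committed)) {
            System.out.println(r[0] + ":" + r[1]);
        }
    }
}
```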
Github user srdo commented on the issue:
https://github.com/apache/storm/pull/1679
@jfenc91 Hope it works for you now. You're right though that the spout
needs to handle unclean leader elections better. It's not great that the
consumer group will remain "ahead" of the LEO and the
Github user jfenc91 commented on the issue:
https://github.com/apache/storm/pull/1679
I am on version 0.10.0.1. Interesting, I didn't realize unclean leader
election was enabled by default... So yeah, I had that enabled. Turning that off
now though! Thanks @srdo for the tip and hopefully that will
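For reference on the setting being discussed: in Kafka 0.10.x, `unclean.leader.election.enable` defaulted to true (the default changed to false in 0.11.0 via KIP-106). Disabling it broker-wide is a one-line broker config change:

```properties
# server.properties: forbid out-of-sync replicas from becoming leader,
# trading availability for consistency when all in-sync replicas are down.
unclean.leader.election.enable=false
```

Depending on the Kafka version, it can also be overridden per topic via the topic-level config of the same name.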
Github user srdo commented on the issue:
https://github.com/apache/storm/pull/1679
@jfenc91 Ouch. I've never seen LEO drop like that unless unclean leader
election was enabled. Out of curiosity which Kafka version are you running, and
do you have unclean leader election enabled for
Github user jfenc91 commented on the issue:
https://github.com/apache/storm/pull/1679
@srdo @hmcl I tracked down one source of the "double acking".
It looks like I am being switched to reading from an out-of-sync replica.
Looking at my offsets:
GROUP
Github user srdo commented on the issue:
https://github.com/apache/storm/pull/1679
@hmcl I agree that if there really is a double acking problem somewhere
else, it should be fixed there.
In your scenario, say the spout commits 1...5 to Kafka and 4 is later acked.
ackedMsgs
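The scenario above can be sketched in miniature. This is an illustrative, hypothetical guard, not the spout's real OffsetManager: offsets 1..5 are already committed, then a late ack for offset 4 arrives (e.g. the tuple timed out earlier and was replayed); recording acks at or below the committed offset would be the "double acking" bug, so the guard drops them.

```java
import java.util.*;

public class AckGuard {
    long committedOffset;
    final NavigableSet<Long> ackedMsgs = new TreeSet<>();

    AckGuard(long committedOffset) {
        this.committedOffset = committedOffset;
    }

    // Returns true if the ack was recorded, false if it was a stale duplicate.
    boolean ack(long offset) {
        if (offset <= committedOffset) {
            return false;           // already committed: ignore the late ack
        }
        return ackedMsgs.add(offset);
    }

    public static void main(String[] args) {
        AckGuard guard = new AckGuard(5);   // offsets 1..5 committed
        System.out.println(guard.ack(4));   // stale ack: dropped (false)
        System.out.println(guard.ack(6));   // genuinely new ack (true)
    }
}
```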
Github user hmcl commented on the issue:
https://github.com/apache/storm/pull/1679
@srdo @jfenc91 I am on vacation this week (with limited access to the Internet)
and I will be back on Monday. **Can we please hold off on merging this until I
can finish my review on Monday.** I implemented
Github user srdo commented on the issue:
https://github.com/apache/storm/pull/1679
This more or less seems done to me. The only thing that bugs me is Storm
double acking tuples. The only case I could think of is if tuples time out and
they're later acked by the acker bolt, but it
Github user jfenc91 commented on the issue:
https://github.com/apache/storm/pull/1679
@srdo Alright, I think everything has been addressed here. I have actually
been running this merged with your other PR for the last 12 hours processing
100M tuples and it's looked pretty good. Only
Github user jfenc91 commented on the issue:
https://github.com/apache/storm/pull/1679
That's fine, @srdo. Thanks again for the reviews. I also created STORM-2106
to keep track of the consequences of this change.
Github user srdo commented on the issue:
https://github.com/apache/storm/pull/1679
@jfenc91 I borrowed a few classes from this PR for another change to the
spout here https://github.com/apache/storm/pull/1696. I hope you don't mind :)
Github user jfenc91 commented on the issue:
https://github.com/apache/storm/pull/1679
Thanks for the review here @srdo! I will have a few more PRs headed your
way in a week or two for this kafka client to make it fully usable.
Github user jfenc91 commented on the issue:
https://github.com/apache/storm/pull/1679
A larger refactor here is probably needed to make this more performant.
These changes understandably seem to make the spout struggle to get the
processing tuple count anywhere near the max spout