This kind of sounds to me like there's packet loss somewhere and TCP is closing
the window to try to limit congestion. But from the snippets you posted, I
didn't see any SACKs in the tcpdump output. If there *are* SACKs, that'd be a
strong indicator of loss somewhere, whether it's in the network or on the hosts.
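In case it's useful, this is roughly how I'd go looking for them; the broker
port (9092) and the interface name are assumptions, so adjust for your setup:

    # verbose output makes tcpdump print TCP options, including any SACK blocks
    tcpdump -nn -vv -i eth0 'tcp port 9092' | grep -i sack

If SACK blocks start scrolling by while the window is shrinking, I'd start
hunting for drops along that path.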
Can you run lsof -p (pid) for whatever the pid is for your Kafka process?
For the fd limits you've set, I don't think subtlety is required: if there are a
million-ish lines in the output, the fd limit you set is where you think it is,
and if it's a lot lower than that, the limit isn't being applied.
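For example (the pgrep pattern is a guess at how your broker shows up in ps, so
adjust it if needed):

    # count open fds and compare against the limit the running process actually got
    pid=$(pgrep -f kafka.Kafka)
    lsof -p "$pid" | wc -l
    grep -i 'open files' /proc/"$pid"/limits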
There may be more elegant ways to do this, but I'd think that you could just
ls all the directories specified in log.dirs in your server.properties file for
Kafka. You should see directories for each topicname-partitionnumber there.
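For example, assuming server.properties is the one your broker actually reads:

    # log.dirs can be a comma-separated list, so list each directory in it
    for d in $(grep '^log.dirs' server.properties | cut -d= -f2 | tr ',' ' '); do
        ls "$d"
    done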
Offhand it sounds to me like maybe something's evicting
I'd have thought you'd want ZK up before Kafka started, but I don't
have any strong data to back that up.
On Sat, 8 Aug 2015 at 7:59 AM Steve Miller st...@idrathernotsay.com wrote:
So... we had an extensive recabling exercise, during which we had to shut
down and derack and rerack a whole Kafka cluster. Then when we brought it back
up, we discovered the hard way that two hosts had their rebuild on reboot
flag set in Cobbler.
Everything on those hosts is gone as a result.
Are you using mumrah/kafka-python? I think so from context but I know
there's at least one other implementation rattling around these days. (-:
If that's what you're using, I can see two potential problems you might be
having. You can set the offset to some approximation of wherever you want to be.
[ BTW, after some more research, I think what might be happening here is that
we had some de-facto network partitioning happen as a side-effect of us
renaming some network interfaces, though if that's the case, I'd like to know
how to get everything back into sync. ]
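For what it's worth, one way to check whether the replicas have caught back up,
and to nudge leadership back where it belongs afterward, assuming stock 0.8-era
tooling and a ZooKeeper at zk1:2181 (both assumptions):

    # see which partitions still have replicas lagging behind the leader
    bin/kafka-topics.sh --zookeeper zk1:2181 --describe --under-replicated-partitions

    # once the ISRs look healthy again, move leadership back to the preferred replicas
    bin/kafka-preferred-replica-election.sh --zookeeper zk1:2181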
Hi. I'm seeing
FWIW I like the standardization idea but just making the old switches fail
seems like it's not the best plan. People wrap this sort of thing for any
number of reasons, and breaking all of their stuff all at once is not going to
make them happy. And it's not like keeping the old switches
Also, if log.cleaner.enable is true in your broker config, that enables the
log-compaction retention strategy.
Then, for topics with the per-topic cleanup.policy=compact config
parameter set, Kafka will scan the topic periodically, nuking old versions of
the data with the same key.
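A minimal sketch of the two halves of that, with the topic name and ZK connect
string made up:

    # broker side, in server.properties: make sure the cleaner thread runs at all
    log.cleaner.enable=true

    # topic side: mark the topic as compacted (0.8-era syntax; newer releases use kafka-configs.sh)
    bin/kafka-topics.sh --zookeeper zk1:2181 --alter --topic mytopic --config cleanup.policy=compact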
Does log.retention.hours have an effect on this? I didn't
know this. I have log.retention.hours set to 1, and during development we
test this once every 15 minutes, or once every hour or two. So do you think this
is causing the issue?
Thanks,
Pradeep Simha
Technical Lead
-----Original Message-----
From: Steve Miller [mailto:st...@idrathernotsay.com]
Perhaps compaction would help in this scenario too?
https://cwiki.apache.org/confluence/display/KAFKA/Log+Compaction
From: Steve Miller [st...@idrathernotsay.com]
Sent: 20 August 2014 08:47
To: users@kafka.apache.org
Subject: Re: Keep on getting
Also, what do you have log.retention.hours set to? How often do you publish
messages?
I can envision a scenario in which you don't publish to a topic often, and
in fact publish so infrequently that everything in the topic ages out from
log.retention.hours first.
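If that's what's going on, bumping the retention while you debug should make it
obvious; the values, topic name, and ZK connect string below are just examples:

    # broker-wide default, in server.properties
    log.retention.hours=168

    # or override just the one topic (retention.ms is the per-topic knob, in milliseconds)
    bin/kafka-topics.sh --zookeeper zk1:2181 --alter --topic mytopic --config retention.ms=604800000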
I don't know exactly
What's in there still seems to be the output for deep iteration. For
shallow iteration, the compression codec for each message should be Snappy.
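For context, shallow vs. deep here is kafka.tools.DumpLogSegments terminology; a
rough example of the shallow check, with the segment path made up:

    # without --deep-iteration you get the shallow view: the wrapper messages
    # and their compression codec, rather than the decompressed inner messages
    bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /var/kafka-logs/mytopic-0/00000000000000000000.log --print-data-log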
Thanks,
Jun
On Fri, Aug 15, 2014 at 5:27 AM, Steve Miller st...@idrathernotsay.com
wrote:
Oh, yeah, sorry about that. I threw
[ Aha!, you say, now I know why this guy's been doing so much tshark stuff!
(-: ]
Hi. I'm running into a strange situation, in which more or less all of the
topics on our Kafka server behave exactly as expected... but the data produced
by one family of applications is producing fairly