Re: Mirrormaker consumption slowness

2017-12-06 Thread Steve Miller
This kind of sounds to me like there’s packet loss somewhere and TCP is closing the window to try to limit congestion. But from the snippets you posted, I didn’t see any SACKs in the tcpdump output. If there *are* SACKs, that’d be a strong indicator of loss somewhere, whether it’s in the
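
If there is a capture to look at, one quick way to check for SACKs and retransmissions might look like this (a rough sketch; capture.pcap, eth0, and port 9092 are placeholders for your own capture file, interface, and broker port):

    # Flag frames carrying SACK blocks or marked as retransmissions in an existing capture
    tshark -r capture.pcap -Y 'tcp.options.sack_le or tcp.analysis.retransmission' \
        -T fields -e frame.time -e ip.src -e ip.dst -e tcp.analysis.flags

    # Or watch live traffic on the broker port and eyeball it for SACKs and duplicate ACKs
    tcpdump -nn -i eth0 port 9092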

Re: Too Many Open Files

2016-08-01 Thread Steve Miller
Can you run lsof -p (pid) for whatever the pid is for your Kafka process? For the fd limits you've set, I don't think subtlety is required: if there are a million-ish lines in the output, the fd limit you set is where you think it is, and if it's a lot lower than that, the limit isn't being
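
A concrete version of that check might look like the following (a sketch; the pgrep pattern assumes the broker was started with the usual kafka.Kafka main class):

    # Find the broker pid
    PID=$(pgrep -f kafka.Kafka | head -1)

    # Count the file descriptors it currently has open
    lsof -p "$PID" | wc -l

    # Compare against the limit the running process actually got
    grep 'open files' "/proc/$PID/limits"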

Re: Debugging high log flush latency on a broker.

2015-09-22 Thread Steve Miller
There may be more elegant ways to do this, but I'd think that you could just ls all the directories specified in log.dirs in your server.properties file for Kafka. You should see directories for each topicname-partitionnumber there. Offhand it sounds to me like maybe something's evicting
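
For example (a sketch; the server.properties path and log directory are placeholders):

    # See where the broker keeps its data
    grep '^log.dirs' /etc/kafka/server.properties

    # List it: there should be one topicname-partitionnumber directory per partition,
    # e.g. mytopic-0, mytopic-1, ...
    ls -ld /var/kafka-logs/*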

Disaster Recovery (was: Re: Suggestions when all replicas of a partition are dead?)

2015-08-08 Thread Steve Miller
have thought you'd want ZK up before Kafka started, but I don't have any strong data to back that up.

Suggestions when all replicas of a partition are dead?

2015-08-07 Thread Steve Miller
So... we had an extensive recabling exercise, during which we had to shut down and derack and rerack a whole Kafka cluster. Then when we brought it back up, we discovered the hard way that two hosts had their rebuild-on-reboot flag set in Cobbler. Everything on those hosts is gone as a
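
To find out which partitions ended up with no live leader after an event like this, the topic tool can list them (a sketch using the 0.8.x-era flags; the ZooKeeper connect string is a placeholder):

    # Partitions whose leader is unavailable (every replica is down)
    bin/kafka-topics.sh --zookeeper zkhost:2181 --describe --unavailable-partitions

    # Partitions that are up but missing replicas
    bin/kafka-topics.sh --zookeeper zkhost:2181 --describe --under-replicated-partitions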

Re: kafka-python message offset?

2015-07-29 Thread Steve Miller
Are you using mumrah/kafka-python? I think so from context but I know there's at least one other implementation rattling around these days. (-: If that's what you're using, I can see two potential problems you might be having. You can set the offset to some approximation of wherever you
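
Whichever client library is in use, it helps to know what offsets are actually valid for a partition before seeking to one; a quick check from the shell might look like this (broker address and topic name are placeholders):

    # Earliest offset still on disk for each partition (--time -2)
    bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
        --broker-list broker1:9092 --topic mytopic --time -2

    # Latest offset for each partition (--time -1)
    bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
        --broker-list broker1:9092 --topic mytopic --time -1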

When in-sync isn't in sync?

2015-05-04 Thread Steve Miller
[ BTW, after some more research, I think what might be happening here is that we had some de-facto network partitioning happen as a side-effect of us renaming some network interfaces, though if that's the case, I'd like to know how to get everything back into sync. ] Hi. I'm seeing
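
A quick way to see which replicas are currently listed as in sync for each partition (the ZooKeeper connect string and topic name are placeholders):

    # Compare the Replicas column with the Isr column for each partition;
    # any replica listed under Replicas but missing from Isr has fallen behind
    bin/kafka-topics.sh --zookeeper zkhost:2181 --describe --topic mytopic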

Re: [DISCUSS] KIP-14 Tools Standardization

2015-04-09 Thread Steve Miller
FWIW I like the standardization idea but just making the old switches fail seems like it's not the best plan. People wrap this sort of thing for any number of reasons, and breaking all of their stuff all at once is not going to make them happy. And it's not like keeping the old switches

Re: The purpose of key in kafka

2014-12-19 Thread Steve Miller
Also, if log.cleaner.enable is true in your broker config, that enables the log-compaction retention strategy. Then, for topics with the per-topic cleanup.policy=compact config parameter set, Kafka will scan the topic periodically, nuking old versions of the data with the same key. I
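
The two pieces of configuration involved might look like this (a sketch; the topic name is a placeholder, and on newer releases per-topic configs are usually set with kafka-configs.sh rather than --alter on kafka-topics.sh):

    # Broker side, in server.properties: turn the log cleaner on
    #   log.cleaner.enable=true

    # Topic side: mark the topic as compacted (0.8.x-era syntax)
    bin/kafka-topics.sh --zookeeper zkhost:2181 --alter \
        --topic mytopic --config cleanup.policy=compact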

Re: Keep on getting kafka.common.OffsetOutOfRangeException: Random times

2014-08-20 Thread Steve Miller
hours have an effect on this? I didn't know this. I have log.retention.hours set to 1, and during development we test this once every 15 minutes to two hours. So do you think this is causing the issue? Thanks, Pradeep Simha, Technical Lead

Re: Keep on getting kafka.common.OffsetOutOfRangeException: Random times

2014-08-20 Thread Steve Miller
Perhaps compaction would help in this scenario too? https://cwiki.apache.org/confluence/display/KAFKA/Log+Compaction

Re: Keep on getting kafka.common.OffsetOutOfRangeException: Random times

2014-08-19 Thread Steve Miller
Also, what do you have log.retention.hours set to? How often do you publish messages? I can envision a scenario in which you don't publish to a topic often, and in fact publish so infrequently that everything in the topic ages out from log.retention.hours first. I don't know exactly
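
If that turns out to be the cause, checking the broker defaults and giving the topic a longer per-topic retention might look like this (a sketch; the server.properties path, ZooKeeper connect string, and topic name are placeholders, and the --alter --config form is the 0.8.x-era syntax):

    # What the broker-wide defaults are right now
    grep '^log.retention' /etc/kafka/server.properties

    # Keep this topic's data for 7 days instead of the broker default
    bin/kafka-topics.sh --zookeeper zkhost:2181 --alter \
        --topic mytopic --config retention.ms=604800000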

Re: Strange topic-corruption issue?

2014-08-17 Thread Steve Miller
What's in there still seems to be the output for deep iteration. For shallow iteration, the compression codec for each message should be Snappy. Thanks, Jun. On Fri, Aug 15, 2014 at 5:27 AM, Steve Miller wrote: Oh, yeah, sorry about that. I threw
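
The deep vs. shallow iteration Jun is describing comes from the DumpLogSegments tool; running both passes over one segment might look like this (the segment path is a placeholder):

    # Shallow iteration: shows the wrapper messages and their compression codec
    bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
        --files /var/kafka-logs/mytopic-0/00000000000000000000.log

    # Deep iteration: decompresses the wrappers and walks the inner messages too
    bin/kafka-run-class.sh kafka.tools.DumpLogSegments --deep-iteration \
        --files /var/kafka-logs/mytopic-0/00000000000000000000.log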

Strange topic-corruption issue?

2014-08-12 Thread Steve Miller
[ "Aha," you say, "now I know why this guy's been doing so much tshark stuff!" (-: ] Hi. I'm running into a strange situation in which more or less all of the topics on our Kafka server behave exactly as expected... but one family of applications is producing data that's fairly