This is resolved; I had missed a host entry configuration in my infrastructure.
On Mon, May 4, 2015 at 10:35 AM, Kamal C kamaltar...@gmail.com wrote:
We are running ZooKeeper as an ensemble (a cluster of 3 / 5 nodes). With further
investigation, I found that the ConnectException is thrown for all in-flight
We observed the exact same error. The root cause is not entirely clear,
although it appears to be related to the leastLoadedNode implementation.
Interestingly, the problem went away after increasing the value of
reconnect.backoff.ms to 1000 ms.
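The workaround described above is just a producer configuration change. A minimal sketch of what that looks like (broker addresses and serializers here are illustrative placeholders, not from the thread):

```java
import java.util.Properties;

public class ProducerBackoffConfig {
    public static Properties build() {
        Properties props = new Properties();
        // Placeholder bootstrap list for illustration only.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
        // Raising the reconnect backoff from the 50 ms default to 1000 ms
        // was the workaround reported in this thread.
        props.put("reconnect.backoff.ms", "1000");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }
}
```

The resulting `Properties` would then be passed to `new KafkaProducer<>(props)` as usual.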
On 29 Apr 2015 00:32, Ewen Cheslack-Postava e...@confluent.io
Guozhang,
Do you have the ticket number for possibly adding in local log file
failover? Is it actively being worked on?
Thanks,
Jason
On Tue, May 5, 2015 at 6:11 PM, Guozhang Wang wangg...@gmail.com wrote:
I'm not sure about the old producer's behavior in this same failure scenario,
but creating a new producer instance would resolve the issue, since it would
start with the list of bootstrap nodes and, assuming at least one of them
was up, it would be able to fetch up-to-date metadata.
On Tue, May 5,
I have a topic consisting of n partitions. To have distributed processing, I
create two processes running on different machines. They subscribe to the
topic with the same group id and allocate n/2 threads each, each of which
processes a single stream (n/2 partitions per process).
With this I will have
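With the old high-level consumer, the thread/stream split described above is driven by the topic-to-stream-count map passed to `createMessageStreams()`. A minimal sketch of that allocation, with the topic name and partition count as illustrative assumptions:

```java
import java.util.HashMap;
import java.util.Map;

public class ConsumerStreamAllocation {
    // Builds the topicCountMap that the high-level consumer's
    // createMessageStreams() expects: topic -> number of streams (threads).
    // Each of the `processes` processes asks for partitions / processes
    // streams, so together they cover all partitions.
    public static Map<String, Integer> threadMap(String topic,
                                                 int partitions,
                                                 int processes) {
        Map<String, Integer> topicCountMap = new HashMap<>();
        topicCountMap.put(topic, partitions / processes);
        return topicCountMap;
    }
}
```

For example, with 8 partitions and 2 processes, each process requests 4 streams and each stream is fed roughly 4 partitions' worth of messages.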
1. KAFKA-1955 https://issues.apache.org/jira/browse/KAFKA-1955, I
think Jay has a WIP patch for it.
2.
3.
On Tue, May 5, 2015 at 5:10 PM, Jason Rosenberg j...@squareup.com wrote:
I agree that to find the least loaded node, the producer should fall back to
the bootstrap nodes if it's not able to connect to any nodes in the current
metadata. That should resolve this.
Rahul, I suppose the problem went away because the dead node in your case
might have come back up and allowed
I filed this jira, fwiw: https://issues.apache.org/jira/browse/KAFKA-2172
Jason
On Mon, Mar 23, 2015 at 2:44 PM, Jiangjie Qin j...@linkedin.com.invalid
wrote:
Hi Jason,
Yes, I agree the restriction makes the usage of round-robin less flexible.
I think the focus of round-robin strategy is
I asked about this same issue in a previous thread. Thanks for reminding
me, I've added this Jira: https://issues.apache.org/jira/browse/KAFKA-2172
I think this is a great new feature, but unfortunately the "all consumers
must be the same" requirement is just a bit too restrictive.
Jason
On Tue, May
Does this log file act as a temporary disk buffer when the broker slows
down, whose data will be re-sent to the broker later, or do you plan to use
it as separate persistent storage like the Kafka brokers?
For the former use case, I think there is an open ticket for integrating
this kind of functionality
Mayuresh,
I was testing this in a development environment and manually brought down a
node to simulate this. So the dead node never came back up.
My colleague and I were able to consistently see this behaviour several
times during the testing.
On 5 May 2015 20:32, Mayuresh Gharat
Does block.on.buffer.full=false do what you want?
-Jay
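The setting Jay refers to is again a one-line producer configuration. A sketch, with the broker address as a placeholder: with `block.on.buffer.full=false`, `send()` raises `BufferExhaustedException` when the record buffer fills up instead of blocking the caller, which lets the application fail over (for example, to a local log file).

```java
import java.util.Properties;

public class NonBlockingProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        // false => send() throws BufferExhaustedException when the buffer
        // is full, rather than blocking the calling thread.
        props.put("block.on.buffer.full", "false");
        return props;
    }
}
```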
On Tue, May 5, 2015 at 1:59 AM, mete efk...@gmail.com wrote:
Sure, I kind of count on that actually; I guess with this setting the
sender blocks on the allocate method and this bufferpool-wait-ratio increases.
I want to fully compartmentalize the Kafka producer from the rest of the
system, e.g. writing to a log file instead of trying to send to Kafka when
some
Hi all,
I have been trying to modify one of the Kafka wiki pages [1] to correct
a few outdated code examples but it turns out that my Confluence account
(miguno [2]) apparently does not have edit permissions.
The Page Restrictions for [1] are listed as:
- No view restrictions are defined
Hello Folks,
I was looking through the kafka.producer metrics on the JMX interface to
find a good indicator of when to trip the circuit. So far it seems like the
bufferpool-wait-ratio metric is a useful decision mechanism for when to cut
off production to Kafka.
As far as I experienced, when Kafka
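The circuit-breaker idea above boils down to comparing the sampled `bufferpool-wait-ratio` metric against a cut-off. A minimal sketch, where the 0.5 threshold is a hypothetical tuning value, not one given in the thread:

```java
public class ProducerCircuitBreaker {
    // Hypothetical trip threshold; the metric name bufferpool-wait-ratio
    // comes from the producer's JMX metrics discussed in this thread.
    private static final double WAIT_RATIO_THRESHOLD = 0.5;
    private boolean open = false;

    // Decide whether to cut off production based on the sampled ratio.
    public boolean shouldTrip(double bufferPoolWaitRatio) {
        if (bufferPoolWaitRatio > WAIT_RATIO_THRESHOLD) {
            open = true;   // stop sending to Kafka, e.g. spool to a local file
        } else {
            open = false;  // resume normal sends
        }
        return open;
    }
}
```

In practice the ratio would be polled periodically over JMX (e.g. via `javax.management`), and some hysteresis or averaging would likely be wanted before tripping.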
Hi everyone,
We recently switched to round-robin partition assignment after we noticed
that range partition assignment (the default) will only make use of the first X
consumers, where X is the number of partitions for a topic our consumers are
interested in. We then noticed the caveat in round robin,
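For reference, the switch described above is a single consumer configuration change in the old high-level consumer. A sketch, with the ZooKeeper address and group id as placeholders:

```java
import java.util.Properties;

public class RoundRobinConsumerConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181"); // placeholder
        props.put("group.id", "my-group");          // placeholder
        // Switch from the default "range" assignor, which only uses the
        // first X consumers (X = partitions per topic), to round robin.
        props.put("partition.assignment.strategy", "roundrobin");
        return props;
    }
}
```

Note the caveat discussed elsewhere in this thread: round-robin assignment requires all consumers in the group to have identical subscriptions and stream counts.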