Hi all,
I'm running Kafka 0.8.1.1 and encountering a weird problem.
All my partitions' leaders became -1 one day, and the ISR became empty. For
example:
Topic:infinity PartitionCount:12 ReplicationFactor:2 Configs:
Topic: infinity	Partition: 0	Leader: -1	Replicas: 0,1	Isr:
I have a 3 node kafka cluster running 0.8.1.1, recently updated from 0.8.1
and I'm noticing now that producing from Ruby/Poseidon is having trouble. If
I'm reading correctly, it appears that Poseidon is attempting to
produce to partition 1 on kafka1, but partition 1 is on kafka1.
Does this look
Hi Marcin,
A few weeks ago, I did an upgrade to 0.8.1.1 and then augmented the cluster
from 3 to 9 brokers. All went smoothly.
In a dev environment, we found that the biggest pain point is having
to deal with the JSON file and the error-prone command-line interface.
So to make our life
That makes sense. If the size of each fetch is small then compression won't
do much, and that could very well explain the increase in bandwidth.
We will try to change these settings and see what happens.
Thanks a lot for your help.
T#
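The effect described above can be seen with plain gzip, independent of Kafka: a tiny payload barely compresses (framing overhead can even inflate it), while a batch of many similar messages compresses well. A minimal sketch; the sample message and batch size are made up for illustration:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class CompressionDemo {
    // Returns the gzip-compressed size of the given bytes.
    static int gzipSize(byte[] data) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write(data);
            }
            return bos.size();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] one = "{\"user\":42,\"event\":\"click\",\"page\":\"/home\"}".getBytes();
        // Batch of 100 similar messages: redundancy across messages is what
        // lets the compressor earn its keep.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100; i++) sb.append(new String(one));
        byte[] batch = sb.toString().getBytes();

        double singleRatio = (double) gzipSize(one) / one.length;
        double batchRatio = (double) gzipSize(batch) / batch.length;
        System.out.printf("single: %.2f, batch: %.2f%n", singleRatio, batchRatio);
    }
}
```

With fetches (or producer batches) this small, the per-message ratio stays near or above 1, which matches the bandwidth increase observed in the thread.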
On Tue, Sep 2, 2014 at 10:44 PM, Guozhang Wang
Thanks Jun.
I'll create a jira and try to provide a patch. I think this is pretty
serious.
On Friday, August 29, 2014, Jun Rao jun...@gmail.com wrote:
The goal of batching is mostly to reduce the number of RPC calls to the broker. If
compression is enabled, a larger batch typically implies better
That's what I said in my first reply. :-)
-
http://www.philipotoole.com
On Tuesday, September 2, 2014 10:37 PM, Gwen Shapira gshap...@cloudera.com
wrote:
I believe a simpler solution would be to create multiple
ConsumerConnectors, each with 1
Sorry, I guess I missed that. The followup discussion was around the
simple consumer :)
I'm not sure why the OP didn't find this solution acceptable.
On Wed, Sep 3, 2014 at 8:29 AM, Philip O'Toole philip.oto...@yahoo.com wrote:
That's what I said in my first reply. :-)
--
Massimiliano Tomassi
web: http://about.me/maxtomassi
e-mail: max.toma...@gmail.com
mobile: +447751193667
You should follow those instructions
http://kafka.apache.org/contact.html
François Langelier
Software Engineering Student - École de Technologie Supérieure
http://www.etsmtl.ca/
Capitaine Club Capra http://capra.etsmtl.ca/
VP-Communication - CS Games 2014 http://csgames.org
Jeux de Génie
Sorry I picked the wrong address
2014-09-03 18:45 GMT+01:00 François Langelier f.langel...@gmail.com:
You should follow those instructions
http://kafka.apache.org/contact.html
Hiya,
During leader changes, we see short periods of message loss on some of our
higher volume producers. I suspect that this is because it takes a couple of
seconds for Zookeeper to notice and notify the producers of the metadata
change. During this time, producer buffers can fill up and
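For context, these are the 0.8-era producer settings that govern buffering and retry behavior while a new leader is being elected; the values below are illustrative assumptions, not a recommendation:

```properties
# Retry sends that fail while leadership is moving.
message.send.max.retries=3
retry.backoff.ms=500
# How long and how much the async producer buffers before flushing.
queue.buffering.max.ms=5000
queue.buffering.max.messages=10000
# -1 = block when the buffer is full instead of dropping messages.
queue.enqueue.timeout.ms=-1
# Wait for leader acknowledgement to reduce loss on failover.
request.required.acks=1
```

With the defaults, a full buffer during a multi-second election can drop messages, which is consistent with the loss described above.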
Hi Jun,
We have a similar problem. We have variable-length messages, so when we
have a fixed batch size, the batch sometimes exceeds the limit set on the
brokers (2MB).
So can the producer have some extra logic to determine the optimal batch size
by looking at the configured message.max.bytes value?
Hi Bhavesh,
Can you explain what limit you're referring to?
I'm asking because `message.max.bytes` is applied per message, not per batch.
Is there another limit I should be aware of?
Thanks
On Wed, Sep 3, 2014 at 2:07 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com
wrote:
Hi Jun,
We have
We can still do this with a single ConsumerConnector with multiple threads.
Each thread updates its own data in ZooKeeper. Below is our own
implementation of commitOffset:
public void commitOffset(DESMetadata metaData) {
    log.debug("Update offsets only for -> " + metaData.toString());
Thanks, Balaji!
It looks like your approach depends on specific implementation
details, such as the directory structure in ZK.
In this case it doesn't matter much since the APIs are not stable yet,
but in general, wouldn't you prefer to use public APIs, even if it
means multiple consumers without
I am referring to the wiki http://kafka.apache.org/08/configuration.html and
the following parameter controls the max batch message bytes as far as I know.
Kafka community, please correct me if I am wrong; I do not want to create
confusion for the Kafka user community here. Also, if you increase this limit
then
Multiple consumers with a single thread each will also work.
The only problem is that the number of connections to Kafka increases.
-Original Message-
From: Gwen Shapira [mailto:gshap...@cloudera.com]
Sent: Wednesday, September 03, 2014 4:20 PM
To: users@kafka.apache.org
Cc: Philip O'Toole
Subject: Re:
Thanks, I got the idea! But it will create fragments, for example:
The main thread reads messages 0-50 and gives them to Thread 1 for bulk
indexing and committing offsets 0-50...
The main thread reads messages 51-100 and gives them to Thread 2 for bulk
indexing and committing offsets 51-100...
So ZooKeeper might have an offset that will
The only problem is that the number of connections to Kafka increases.
*Why* is it a problem?
Philip
Are you referring to socket.request.max.bytes?
It looks like it could indeed limit the size of a batch accepted by a
broker.
So, you're right, batch.num.messages * message.max.bytes must be smaller
than socket.request.max.bytes.
It looks like this case has been addressed in the new producer.
See
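The constraint above is simple arithmetic, and can be sanity-checked against your configs. A sketch with assumed values (2MB per message as in Bhavesh's brokers; the other numbers are the documented 0.8 defaults, so verify against your own setup):

```java
public class BatchSizing {
    // Worst-case request size if every message in a batch hits the per-message cap.
    static long worstCaseBatchBytes(long batchNumMessages, long messageMaxBytes) {
        return batchNumMessages * messageMaxBytes;
    }

    public static void main(String[] args) {
        long messageMaxBytes = 2L * 1024 * 1024;          // broker message.max.bytes (2 MB)
        long batchNumMessages = 200;                      // producer batch.num.messages (default 200)
        long socketRequestMaxBytes = 100L * 1024 * 1024;  // broker socket.request.max.bytes (default 100 MB)

        long worst = worstCaseBatchBytes(batchNumMessages, messageMaxBytes);
        System.out.println("worst-case batch: " + worst + " bytes");
        if (worst > socketRequestMaxBytes) {
            System.out.println("a full batch of max-size messages can exceed socket.request.max.bytes");
        }
    }
}
```

With these numbers the worst case is 400 MB, well over the 100 MB socket limit, so either the batch count or the per-message cap has to come down.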
This is not how I'd expect this to work.
Offsets are per-partition and each thread reads its own partition
(Assuming you use Balaji's solution).
So:
Thread 1 reads messages 1..50 from partition 1, processes, indexes,
whatever and commits.
Thread 2 reads messages 1..55 from partition 2, processes,
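The key point above is that offsets are tracked per partition, so threads that each own one partition never interleave commits. A minimal sketch of that bookkeeping (the class and method names are made up for illustration, not Kafka API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PerPartitionOffsets {
    // partition -> highest offset committed; each thread owns one partition,
    // so there is no 0-50 / 51-100 fragmentation within a partition.
    private final Map<Integer, Long> committed = new ConcurrentHashMap<>();

    public void commit(int partition, long offset) {
        // Keep the max so a stale (lower) commit can never move the offset backwards.
        committed.merge(partition, offset, Math::max);
    }

    public long committed(int partition) {
        return committed.getOrDefault(partition, -1L);
    }
}
```

Thread 1 committing offset 50 for partition 1 and Thread 2 committing offset 55 for partition 2 touch different map entries, so neither can clobber the other.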
Hey Jonathan, I just sent an email on the dev list to discuss this.
Not to double the effort, but for posterity and good thread communication,
if you can voice your opinion there it would be great (please).
I would volunteer to release 0.8.1.2, not a problem.
My concern with 0.8.2 is with any new