Having the same question: what happened to the 0.8.2 release? When is it
supposed to happen?
Thanks.
On Tue, Sep 30, 2014 at 12:49 PM, Jonathan Weeks
jonathanbwe...@gmail.com wrote:
I was the one asking for 0.8.1.2 a few weeks back, when 0.8.2 was at least 6-8
weeks out.
If we truly believe that
Hello Guozhang,
Is storing offsets in a Kafka topic already in the master branch?
We would like to use that feature; when do you plan to release 0.8.2?
Can we use the master branch meanwhile (i.e., is it stable enough)?
Thanks.
On Fri, Aug 8, 2014 at 1:38 PM, Guozhang Wang wangg...@gmail.com wrote:
Hi
Hello Jun, are the new producer, consumer, and offset management in
trunk already? Can we start developing libraries with 0.8.2 support
against trunk?
Thanks!
On Tue, Jul 8, 2014 at 9:32 PM, Jun Rao jun...@gmail.com wrote:
Yes, 0.8.2 is compatible with 0.8.0 and 0.8.1 in terms of wire protocols
with four machines ZooKeeper can only handle the
failure of a single machine; if two machines fail, the remaining two
machines do not constitute a majority. However, with five machines
ZooKeeper can handle the failure of two machines.
Hope that helps.
On Tue, Jun 24, 2014 at 12:36 PM, Kane Kane kane.ist
it will still be
possible, but it *might* fail). In the case of a 5-node cluster, having 1 node
down is not that risky, because you still have 4 nodes and you need only 3
of them to reach quorum.
M.
Kind regards,
Michał Michalski,
michal.michal...@boxever.com
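The majority-quorum arithmetic discussed in this thread can be sketched as follows (a minimal illustration; the function names are mine, not part of ZooKeeper's API):

```scala
// A majority quorum of an n-node ensemble needs floor(n/2) + 1 nodes up,
// so the ensemble tolerates the failure of floor((n - 1) / 2) nodes.
def quorumSize(ensembleSize: Int): Int = ensembleSize / 2 + 1
def toleratedFailures(ensembleSize: Int): Int = (ensembleSize - 1) / 2

// 3 nodes tolerate 1 failure; 4 nodes still only 1 (two survivors are
// not a majority of four); 5 nodes tolerate 2.
```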
On 25 June 2014 09:59, Kane Kane
.
On Tue, Jun 24, 2014 at 2:44 AM, Hemath Kumar hksrckmur...@gmail.com
wrote:
Yes kane i have the replication factor configured as 3
On Tue, Jun 24, 2014 at 2:42 AM, Kane Kane kane.ist...@gmail.com wrote:
Hello Neha, can you explain your statements:
Bringing one node down in a cluster will go
Sorry, I meant 5 nodes in the previous question.
On Tue, Jun 24, 2014 at 12:36 PM, Kane Kane kane.ist...@gmail.com wrote:
Hello Neha,
A ZK cluster of 3 nodes will tolerate the loss of 1 node, but if there is a
subsequent leader election for any reason, there is a chance that the
cluster does
Hello Neha, can you explain your statements:
Bringing one node down in a cluster will go smoothly only if your
replication factor is 1 and you enabled controlled shutdown on the brokers.
Can you elaborate on your notion of smooth? I thought if you have
replication factor = 3 in this case, you should
at 3:49 PM, Guozhang Wang wangg...@gmail.com wrote:
In the old producer yes; in the new producer (available in 0.8.1.1) the
batch size is specified in bytes instead of number of messages, which gives
you better control.
Guozhang
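For reference, the two settings being contrasted look roughly like this (names and defaults as I recall them for the 0.8.x line; check the documentation for your exact version):

```properties
# old (Scala) producer: batch size counted in messages
batch.num.messages=200

# new (Java) producer: batch size counted in bytes, per partition
batch.size=16384
```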
On Sat, Jun 7, 2014 at 2:48 PM, Kane Kane kane.ist...@gmail.com wrote:
Ah
Last time I checked, the producer sticks to a partition for 10 minutes.
On Mon, Jun 9, 2014 at 4:13 PM, Prakash Gowri Shankor
prakash.shan...@gmail.com wrote:
Hi,
This is with 0.8.1.1 and I ran the command line console consumer.
I have one broker, one producer and several consumers. I have
Yes, messages were compressed with gzip and I've enabled the same
compression in mirrormaker producer.
On Sat, Jun 7, 2014 at 12:56 PM, Guozhang Wang wangg...@gmail.com wrote:
Kane,
Did you use any compression method?
Guozhang
On Fri, Jun 6, 2014 at 2:15 PM, Kane Kane kane.ist
Hello all,
After updating to the latest 0.8.1.1, jmxtrans fails with a connection timeout error.
I can connect with jconsole, but it is very unstable; it reconnects every
few seconds and sometimes fails with
May 01, 2014 2:03:13 PM ClientCommunicatorAdmin Checker-run
WARNING: Failed to check connection:
to bump up the limit for the number of open file handles per
broker though.
Thanks,
Neha
On Tue, Mar 25, 2014 at 3:41 PM, Kane Kane kane.ist...@gmail.com
wrote:
Is there a recommended cap for the number of concurrent producer threads?
We plan to have around 4000 connections across the cluster writing to
kafka
Is there a recommended cap for the number of concurrent producer threads?
We plan to have around 4000 connections across the cluster writing to
kafka; I assume there shouldn't be any performance implications
related to that?
Thanks.
Is it possible to upgrade from 0.8 on the fly (rolling upgrade)?
On Wed, Mar 12, 2014 at 2:17 PM, Michael G. Noll
mich...@michael-noll.com wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Many thanks to everyone involved in the release!
Please let me share two comments:
One, there's a
How is it possible to have an async consumer?
On Sun, Nov 10, 2013 at 11:06 AM, Marc Labbe mrla...@gmail.com wrote:
Hi David,
check out mahendra's fork of kafka-python; he has implemented gevent
support in a branch (https://github.com/mahendra/kafka-python/tree/gevent) but
it hasn't made it to
The new client rewrite proposal includes an async producer, but not an async
consumer; I think there is a reason. You cannot send a new consume
request before the previous one is finished?
On Sun, Nov 10, 2013 at 11:42 AM, Marc Labbe mrla...@gmail.com wrote:
Kane, you can probably achieve async consumer using
It would be useful if we could put an overall limit on total log
size, so the disk doesn't get full.
Also, what is the recovery strategy in this case? Is it possible to
recover from this state, or do I have to delete all data?
Thanks.
On Tue, Nov 5, 2013 at 9:11 PM, Kane Kane kane.ist...@gmail.com wrote:
I've checked
to enable cross
compilation to scala 2.10.2?
Thanks,
Jun
On Mon, Nov 4, 2013 at 9:53 PM, Kane Kane kane.ist...@gmail.com wrote:
I'm using it with scala 2.10.2.
On Mon, Nov 4, 2013 at 9:41 PM, Jun Rao jun...@gmail.com wrote:
Do you think our cross compilation can be extended to scala 2.10.2
Hello,
What would happen if the disk is full? Does it make sense to have an
additional variable to set the maximum size for all logs combined?
Thanks.
neha.narkh...@gmail.com wrote:
You are probably looking for log.retention.bytes. Refer to
http://kafka.apache.org/documentation.html#brokerconfigs
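Note that, as I understand it, log.retention.bytes caps each partition's log rather than the total across all logs, so the total disk usage is bounded by roughly that value times the number of partitions on the broker (values below are placeholders):

```properties
# Cap each partition's log at ~1 GB; with e.g. 45 partitions on a broker
# this bounds total log usage at roughly 45 GB, plus up to one segment
# of slack per partition.
log.retention.bytes=1073741824
log.segment.bytes=536870912
```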
On Tue, Nov 5, 2013 at 3:10 PM, Kane Kane kane.ist...@gmail.com wrote:
Hello,
What would happen if the disk is full? Does it make sense to have
On 11/1/13, 9:36 PM, Kane Kane kane.ist...@gmail.com wrote:
Yes, I had a problem, which I resolved by updating sbt-assembly. I will
open a ticket and provide a patch.
On Fri, Nov 1, 2013 at 8:43 PM, Jun Rao jun...@gmail.com wrote:
Does the problem exist with trunk? If so, could you open
Thanks, Jun.
On Sat, Nov 2, 2013 at 8:31 PM, Jun Rao jun...@gmail.com wrote:
The # of required open file handles is # of client socket connections + # of
log segment and index files.
Thanks,
Jun
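Jun's formula can be written out as a rough budget (a sketch only; the helper is hypothetical, and it assumes each log segment carries one .log file plus one .index file):

```scala
// Rough open-file-handle budget for a broker, per Jun's formula:
// client socket connections + log segment files + index files.
def requiredHandles(clientConnections: Int,
                    segmentsPerPartition: Int,
                    partitions: Int): Int = {
  val segmentFiles = segmentsPerPartition * partitions // .log files
  val indexFiles   = segmentFiles                      // one .index per segment
  clientConnections + segmentFiles + indexFiles
}
```

For example, 4000 connections plus 45 partitions with 10 segments each would need on the order of 4900 handles, which is why bumping the per-process file-handle limit comes up in this thread.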
On Fri, Nov 1, 2013 at 10:28 PM, Kane Kane kane.ist...@gmail.com wrote:
I had only 1 topic
, it
sounds like a bug. Can you file a JIRA?
Thanks,
Neha
On Fri, Nov 1, 2013 at 1:51 AM, Kane Kane kane.ist...@gmail.com wrote:
When the machine with Kafka dies, most often the broker cannot start itself,
with errors about index files being corrupt. After I delete them manually,
the broker
usually can
to be related?
Thanks.
On Fri, Nov 1, 2013 at 3:50 AM, Neha Narkhede neha.narkh...@gmail.comwrote:
I think you are hitting https://issues.apache.org/jira/browse/KAFKA-824.
Was the consumer being shutdown at that time?
On Fri, Nov 1, 2013 at 1:28 AM, Kane Kane kane.ist...@gmail.com wrote
Hello Jun Rao, no, it's not the head; I compiled it a couple of weeks
ago. Should I try with the latest?
On Fri, Nov 1, 2013 at 7:58 AM, Jun Rao jun...@gmail.com wrote:
Are you using the latest code in the 0.8 branch?
Thanks,
Jun
On Fri, Nov 1, 2013 at 7:36 AM, Kane Kane kane.ist
jun...@gmail.com wrote:
Are you using the latest code in the 0.8 branch?
Thanks,
Jun
On Fri, Nov 1, 2013 at 7:36 AM, Kane Kane kane.ist...@gmail.com wrote:
Neha, yes, when I kill it with -9. Sure, I will file a bug.
Thanks.
On Fri, Nov 1, 2013 at 3:43 AM, Neha
, Kane Kane kane.ist...@gmail.com wrote:
Hello Neha, I think I might be hitting this. I didn't shut down the
consumer
(at least intentionally). Basically it was just an attempt to pipe ~1T
through Kafka; my wild guess is that it's related to log expansion, because
around the time it happened I saw
wangg...@gmail.com wrote:
Currently the index files will only be deleted on startup if there are any
.swap files indicating the server crashed while opening the log segments. We
should probably change this logic.
Guozhang
On Fri, Nov 1, 2013 at 8:16 AM, Kane Kane kane.ist
at 4:14 PM, Kane Kane kane.ist...@gmail.com wrote:
I think
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.8.8")
should be updated to at least 0.9.0 to compile successfully. I had
an issue with assembly-package-dependency.
I had only 1 topic with 45 partitions replicated across 3 brokers.
After several hours of uploading some data to Kafka, 1 broker died with
the following exception.
I guess I can fix it by raising the limit for open files, but I wonder how it
happened under the described circumstances.
[2013-11-02
, Kane Kane kane.ist...@gmail.com wrote:
Hello Neha, does it mean that even if not all replicas acknowledged, the
timeout kicked in, and the producer got an exception, the message will still
be written?
Thanks.
On Thu, Oct 24, 2013 at 8:08 PM, Neha Narkhede neha.narkh...@gmail.com
wrote
I have a cluster of 3 Kafka brokers. With the following script I send some
data to Kafka and in the middle do a controlled shutdown of 1 broker. All
3 brokers are in ISR before I start sending. When I shut down the broker I get
a couple of exceptions, and I expect the data shouldn't be written. Say, I send
, the message may still be committed.
Did you shut down the leader broker of the partition or a follower broker?
Guozhang
On Fri, Oct 25, 2013 at 8:45 AM, Kane Kane kane.ist...@gmail.com wrote:
I have a cluster of 3 Kafka brokers. With the following script I send some
data to Kafka
Or, to rephrase it more generally, is there a way to know exactly whether a
message was committed or not?
On Fri, Oct 25, 2013 at 10:43 AM, Kane Kane kane.ist...@gmail.com wrote:
Hello Guozhang,
My partitions are split almost evenly between brokers, so, yes, the broker
that I shut down is the leader
, it means the
message may or may not have been committed.
Guozhang
On Fri, Oct 25, 2013 at 8:05 AM, Kane Kane kane.ist...@gmail.com wrote:
Hello Neha,
Can you please explain what this means:
request.timeout.ms - The amount of time the broker will wait trying to
meet
behaviour, considering Kafka prefers to append data to partitions for
performance reasons.
The best way right now to deal with duplicate msgs is to build the
processing engine (the layer where your consumer sits) to deal with the
at-least-once semantics of the broker.
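That consumer-side dedup layer might be sketched like this (an illustration only; it assumes every message carries a unique id the application can extract, and it keeps the seen-set unbounded in memory):

```scala
import scala.collection.mutable

// At-least-once delivery means redeliveries are possible: keep a set of
// already-processed message ids and drop duplicates before handing the
// payload to the application.
class DedupingHandler(process: String => Unit) {
  private val seen = mutable.Set[String]()

  // Returns true if the message was processed, false if it was a duplicate.
  def handle(messageId: String, payload: String): Boolean = {
    if (seen.contains(messageId)) false
    else {
      process(payload)
      seen += messageId // mark done only after processing succeeds
      true
    }
  }
}
```

In a real deployment the seen-set would need to be bounded (e.g. by offset watermark or a TTL) and persisted alongside the consumer's offsets, otherwise a consumer restart reopens the duplicate window.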
On 25 Oct 2013 23:23, Kane Kane
, Oct 25, 2013 at 11:26 AM, Kane Kane kane.ist...@gmail.com wrote:
Hello Aniket,
Thanks for the answer; this totally makes sense, and implementing that
layer on the consumer side
to check for dups sounds like a good solution to this issue.
Can we get a confirmation from kafka devs that this is how
sits) to deal with at least
once semantics of the broker.
On 25 Oct 2013 23:23, Kane Kane kane.ist...@gmail.com wrote:
Or, to rephrase it more generally, is there a way to know exactly whether a
message was committed or not?
On Fri, Oct 25, 2013 at 10:43 AM, Kane Kane kane.ist
:
Kane and Aniket,
I am interested in knowing what pattern/solution people
usually
use to implement exactly-once as well.
-Steve
On Fri, Oct 25, 2013 at 11:39 AM, Kane Kane kane.ist...@gmail.com
wrote:
Guozhang, but I've posted a piece from the Kafka documentation above
.
Guozhang
On Fri, Oct 25, 2013 at 5:00 PM, Kane Kane kane.ist...@gmail.com wrote:
There are a lot of exceptions, I will try to pick an example of each:
ERROR async.DefaultEventHandler - Failed to send requests for topics
benchmark with correlation ids in [879,881]
WARN
I see this MBean:
kafka.server:name=AllTopicsMessagesInPerSec,type=BrokerTopicMetrics
Does it return a number per broker or per cluster? If it's per broker, how do
I get the global value per cluster, and vice versa?
Thanks.
publishing to and consumption from the partition will halt
and will not resume until the faulty leader node recovers
Can you confirm that's the case? I think they won't wait until the leader
has recovered and will try to elect a new leader from the existing non-ISR
replicas? And in case they wait, and
If I set request.required.acks to -1, and set a relatively short
request.timeout.ms, and the timeout happens before all replicas acknowledge
the write, would the message be written to the leader or dropped?
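The scenario being asked about, in 0.8 producer config terms (values are placeholders):

```properties
# Wait for all in-sync replicas to acknowledge the write...
request.required.acks=-1
# ...but raise a timeout exception to the producer after 1.5s. Per the
# replies in this thread, such a timeout means the message may or may not
# have been committed, not that the write was dropped.
request.timeout.ms=1500
```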
.
Guozhang
On Thu, Oct 24, 2013 at 6:50 PM, Kane Kane kane.ist...@gmail.com
wrote:
If I set request.required.acks to -1, and set a relatively short
request.timeout.ms, and the timeout happens before all replicas acknowledge
the write, would the message be written to the leader or dropped
yes, but this is partition reassignment which
completes when all the reassigned replicas are in sync with the
original replica(s). You can check the status of the command using the
option I mentioned earlier.
On Tue, Oct 15, 2013 at 7:02 PM, Kane Kane kane.ist...@gmail.com wrote:
I thought
On Wed, Oct 16, 2013 at 12:15 AM, Kane Kane kane.ist...@gmail.com wrote:
Oh, I see. What is the best way to initiate the leader change? As I said,
somehow all my partitions have the same leader for some reason. I have 3
brokers and all partitions have their leader on a single one.
On Wed, Oct 16
Thanks for the advice!
On Wed, Oct 16, 2013 at 7:57 AM, Jun Rao jun...@gmail.com wrote:
Make sure that there are no under-replicated partitions (use the
--under-replicated option in the list topic command) before you run that
tool.
Thanks,
Jun
On Wed, Oct 16, 2013 at 12:29 AM, Kane Kane
Hello, as I understand it, send is not atomic, i.e. I have something like this
in my code:
val requests = new ArrayBuffer[KeyedMessage[AnyRef, AnyRef]]
for (message <- messages) {
  requests += new KeyedMessage(topic, null, message, message)
}
producer.send(requests: _*)
That means
the request starting with the previous offsets again.
Guozhang
On Wed, Oct 16, 2013 at 8:56 AM, Kane Kane kane.ist...@gmail.com wrote:
Hello, as I understand send is not atomic, i.e. i have something like
this
in my code:
val requests = new ArrayBuffer[KeyedMessage[AnyRef, AnyRef
the
--status-check-json-file option of the reassign partitions command to
determine whether partition reassignment has completed or not.
Joel
On Tue, Oct 15, 2013 at 3:46 PM, Kane Kane kane.ist...@gmail.com wrote:
I have 3 brokers and a topic with replication factor of 3.
Somehow all partitions
After unclean shutdown kafka reports this error on startup:
[2013-10-14 16:44:24,898] FATAL Fatal error during KafkaServerStable
startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.IllegalArgumentException: requirement failed: Corrupt index
found, index file
delete the corrupted index files; that will
make the Kafka server rebuild the index on startup.
Thanks,
Neha
On Mon, Oct 14, 2013 at 3:05 PM, Kane Kane kane.ist...@gmail.com wrote:
After unclean shutdown kafka reports this error on startup:
[2013-10-14 16:44:24,898] FATAL Fatal error during
Thanks!
On Mon, Oct 14, 2013 at 5:29 PM, Neha Narkhede neha.narkh...@gmail.comwrote:
Just deleting the index files should fix this issue.
On Mon, Oct 14, 2013 at 5:23 PM, Kane Kane kane.ist...@gmail.com wrote:
Thanks a lot, Neha. So I have to delete only the index files, not the log files
, 2013 at 3:05 PM, Kane Kane kane.ist...@gmail.com wrote:
After unclean shutdown kafka reports this error on startup:
[2013-10-14 16:44:24,898] FATAL Fatal error during KafkaServerStable
startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.IllegalArgumentException
I couldn't reproduce it yet; I'm rolling out a fresh install and trying
to do so.
On Mon, Oct 14, 2013 at 8:40 PM, Jun Rao jun...@gmail.com wrote:
Is this issue reproducible?
Thanks,
Jun
On Mon, Oct 14, 2013 at 8:30 PM, Kane Kane kane.ist...@gmail.com wrote:
Yes, my expectation
I'm also curious to know what the limiting factor of Kafka write
throughput is.
I've never seen reports higher than 100 MB/sec; obviously disks can provide
much more. In my own test with a single broker, single partition, and single
replica:
bin/kafka-producer-perf-test.sh --topics perf --threads 10
, Kane Kane kane.ist...@gmail.com wrote:
But it looks like some clients don't implement it?
of the SimpleConsumer API, but I see now that everything should be implemented
on the client side.
Thanks.
On Tue, Oct 1, 2013 at 8:52 AM, Guozhang Wang wangg...@gmail.com wrote:
I do not understand your question, what are you trying to implement?
On Tue, Oct 1, 2013 at 8:42 AM, Kane Kane kane.ist...@gmail.com
-
https://cwiki.apache.org/confluence/display/KAFKA/Client+Rewrite#ClientRewrite-ConsumerAPI and
this is being planned for the 0.9 release. Once this is complete, the
non-java clients can easily leverage the group management feature.
Thanks,
Neha
On Tue, Oct 1, 2013 at 8:56 AM, Kane Kane
/display/KAFKA/Client+Rewrite#ClientRewrite-ConsumerAPI and
this is being planned for the 0.9 release. Once this is complete, the
non-java clients can easily leverage the group management feature.
Thanks,
Neha
On Tue, Oct 1, 2013 at 8:56 AM, Kane Kane kane.ist...@gmail.com wrote
11:56 AM, Kane Kane wrote:
The reason I was asking is that this library seems to have support only
for
SimpleConsumer (https://github.com/mumrah/kafka-python/);
I was curious whether
it all should be implemented on the client, or whether Kafka has some
rebalancing logic