Got amazing results. Thanks!
On Tue, Jul 9, 2013 at 9:11 AM, Jun Rao wrote:
> Try producing at least a few hundred MB of data and rerun your test.
>
> Thanks,
>
> Jun
>
>
> On Mon, Jul 8, 2013 at 8:04 AM, Anurup Raveendran <
> anurup.raveend...@fluturasolutions.com> wrote:
>
> > I have 2 kafka brokers
Hi Jun,
Please see my comments inline again :)
On 10-Jul-2013, at 9:13 AM, Jun Rao wrote:
> This indicates our in-memory queue is empty. So the consumer thread is
> blocked.
What should we do about this?
As I mentioned in the previous mail, events are there to be consumed.
Killing one consumer
As far as I am aware it is not possible to resize a mapped buffer without
unmapping it in Windows. W.r.t. Java, the bug below gives more context on why
it does not support a synchronous unmap function.
http://bugs.sun.com/view_bug.do?bug_id=4724038
On 7/9/13 9:54 PM, "Jay Kreps" wrote:
>The problem app
Ah, good question. We really should add this to the documentation.
We run a cluster per data center. All writes always go to the data-center
local cluster. Replication to aggregate clusters that provide the "world
wide" view is done with mirror maker.
It is also fine to write to or read from a kaf
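For concreteness, the cross-cluster replication described above is done with the stock MirrorMaker tool. An illustrative invocation (the two property-file names are placeholders) that consumes from the data-center-local cluster and produces into the aggregate cluster would look something like:

```
bin/kafka-run-class.sh kafka.tools.MirrorMaker \
  --consumer.config local-cluster-consumer.properties \
  --producer.config aggregate-cluster-producer.properties \
  --whitelist=".*"
```

The consumer config points at the source cluster's ZooKeeper and the producer config at the aggregate cluster's brokers; the whitelist regex selects which topics to mirror.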
The problem appears to be that we are resizing a memory-mapped file, which it
looks like Windows does not allow (which is kind of sucky).
The offending method is OffsetIndex.resize().
The most obvious fix would be to first unmap the file, then resize, then
remap it. We can't do this though because
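The behavior is easy to reproduce outside Kafka. Below is a minimal, standalone JDK-only sketch (class name and temp-file naming are made up for illustration) of what OffsetIndex.resize() effectively does: map a file into memory, then resize it with RandomAccessFile.setLength() while the mapping is still live. On Linux/macOS the resize succeeds; on Windows it throws the "user-mapped section open" IOException quoted elsewhere in this thread, and Java offers no synchronous unmap to work around it (JDK bug 4724038).

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class MmapResizeDemo {

    // Returns true if resizing the file succeeded while a mapping was live.
    static boolean tryResizeWhileMapped() throws IOException {
        Path file = Files.createTempFile("mmap-resize", ".idx");
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw")) {
            raf.setLength(4096);
            // Map the whole file, like Kafka does for its index files.
            MappedByteBuffer mapped =
                raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            mapped.putInt(0, 42); // touch the mapping so it is really live
            try {
                // Resize while mapped: fine on Linux/macOS, fails on Windows
                // because the mapping cannot be explicitly unmapped first.
                raf.setLength(8192);
                return true;
            } catch (IOException e) {
                // Windows: "The requested operation cannot be performed on a
                // file with a user-mapped section open"
                return false;
            }
        } finally {
            // Deleting a mapped file can itself fail on Windows, so defer it.
            file.toFile().deleteOnExit();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("resize while mapped succeeded: " + tryResizeWhileMapped());
    }
}
```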
Folks,
Our application has multiple producers globally (region1, region2,
region3). If we group all the brokers together into one cluster, we notice
an obvious network latency if a broker replicates regionally with the
request.required.acks = -1.
Is there any best practice for combating the
For 1, you will get a response with an error.
For 2, a partition # has to be specified. If it is incorrect, you will get
a response with an error.
Thanks,
Jun
On Tue, Jul 9, 2013 at 11:58 AM, Vinicius Carvalho <
viniciusccarva...@gmail.com> wrote:
> Hi there. I'm working on a 0.8 version of t
Any error in the producer and the broker log (including state-change.log)?
Thanks,
Jun
On Tue, Jul 9, 2013 at 12:31 PM, arathi maddula wrote:
> Hi,
>
>
>
> I use kafka 0.8. When I run the kafka console producer using
>
>
>
> ./kafka-console-producer.sh --topic test.compress.e --compress true
The reconnect is due to the socket connection to the Kafka broker. Technically,
the reconnect worked, so the tool still produces the output. The reconnection
is weird though. Do you see the same reconnection when you run the console
consumer?
Thanks,
Jun
On Tue, Jul 9, 2013 at 11:06 AM, Dennis Haller wrote
Hmm, not sure what the issue is. Any windows user wants to chime in?
Thanks,
Jun
On Tue, Jul 9, 2013 at 9:00 AM, Denny Lee wrote:
> Hey Jun,
>
> We've been running into this issue when running perf.Performance as per
> http://blog.liveramp.com/2013/04/08/kafka-0-8-producer-performance-2/.
> W
This indicates our in-memory queue is empty. So the consumer thread is
blocked. What about the Kafka fetcher threads? Are they blocked on anything?
Thanks,
Jun
On Tue, Jul 9, 2013 at 8:37 AM, Nihit Purwar wrote:
> Hello Jun,
>
> Please see my comments inline.
>
> On 09-Jul-2013, at 8:32 PM, J
I will try to reproduce it. It was sporadic. My setup was a topic with 1
partition and replication factor = 3.
If I kill the console producer and then shut down the leader broker, a new
leader is elected. If I again kill the new leader, I don't see the last broker
being elected as leader. Then I tried
Hi,
I use kafka 0.8. When I run the kafka console producer using
./kafka-console-producer.sh --topic test.compress.e --compress true
--broker-list 127.0.0.1:9092
I am able to see compressed messages in the log.
But when I run a Java producer class using the following properties, no
mess
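For reference, the 0.8 Java producer enables compression through its own config rather than a command-line flag. An illustrative properties fragment (broker address taken from the console example above; serializer choice is an assumption) would be:

```properties
# 0.8 producer config sketch -- enable GZIP compression
metadata.broker.list=127.0.0.1:9092
serializer.class=kafka.serializer.StringEncoder
compression.codec=gzip
```

If compression.codec is left at its default of none, messages land in the log uncompressed even though the console producer's --compress run compresses them.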
Not really - if you shut down a leader broker (and assuming your
replication factor is > 1) then the other assigned replica will be
elected as the new leader. The producer would then look up metadata,
find the new leader and send requests to it. What do you see in the
logs?
Joel
On Tue, Jul 9, 201
Hey Chris,
The way I handled this in my application using the High Level Consumer was to
turn off auto-commit and commit manually after finishing a batch of messages
(obviously you could do it after every message, but for my purposes it was
better to have batches)
--
Ian Friedman
On Tuesd
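A sketch of the configuration side of the batch-commit approach described above (property name per the 0.8 high-level consumer; the ZooKeeper address and group id are placeholders):

```properties
# High-level consumer config sketch -- turn off auto-commit
zookeeper.connect=localhost:2181
group.id=my-batch-consumer
auto.commit.enable=false
```

With auto-commit off, the application calls ConsumerConnector.commitOffsets() itself once a batch of messages has been fully processed, so a crash mid-batch means the batch is redelivered rather than silently skipped.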
Thanks, you gave me enough pointers to dig deeper. And I tested the fault
tolerance by shutting down brokers randomly.
What I noticed is that if I shut down brokers while my producer and consumer
are still running, they recover fine. However, if I shut down a lead broker
without a running producer, I can'
Enhancement submitted: https://issues.apache.org/jira/browse/KAFKA-966
On Tue, Jul 9, 2013 at 3:53 PM, Chris Curtin wrote:
> Thanks. I know I can write a SimpleConsumer to do this, but it feels like
> the High Level consumer is _so_ close to being robust enough to handle
> what I'd think pe
Thanks. I know I can write a SimpleConsumer to do this, but it feels like
the High Level consumer is _so_ close to being robust enough to handle
what I'd think people want to do in most applications. I'm going to submit
an enhancement request.
I'm trying to understand the level of data loss in
OK.
It sounds like you're requesting functionality that the high-level consumer
simply doesn't have. As I am sure you know, there is no API call that
supports "handing back a message".
I might be missing something, but if you need this kind of control, I think
you need to code your application di
Hi there. I'm working on a 0.8 version of the protocol for nodejs. And one
thing that I'm not clear about from the docs:
"The client will likely need to maintain a connection to multiple brokers,
as data is partitioned and the clients will need to talk to the server that
has their data"
Ok, so far so
Hi Philip,
Correct, I don't want to explicitly control the offset committing. The
ConsumerConnector handles that well enough except for when I want to
shutdown and NOT have Kafka think I consumed that last message for a
stream. This isn't the crash case, it is a case where the logic consuming
the
Hi,
Thanks for your reply.
I can't see anything in the broker logs, except this keeps coming up in
the zookeeper log (instance 1 out of 3):
WARN [SyncThread:1:FileTxnLog@321] - fsync-ing the write ahead log in
SyncThread:1 took 1298ms which will adversely effect operation latency. See
the ZooK
Hey Jun,
We've been running into this issue when running perf.Performance as per
http://blog.liveramp.com/2013/04/08/kafka-0-8-producer-performance-2/.
When running it using 100K messages, it works fine on Windows with about
20-30K msg/s. But when running it with 1M messages, then the broker fail
Hello Jun,
Please see my comments inline.
On 09-Jul-2013, at 8:32 PM, Jun Rao wrote:
> I assume that each consumer instance consumes all 15 topics.
No, we kept a dedicated consumer listening to the topic in question.
We did this because this queue processes huge amounts of data.
> Are all your
It seems like you're not explicitly controlling the offsets. Is that
correct?
If so, the moment you pull a message from the stream, the client framework
considers it processed. So if your app subsequently crashes before the
message is fully processed, and "auto-commit" updates the offsets in
Zooke
I am using 0.7.0
Thanks,
Sujitha
-Original Message-
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Tuesday, July 09, 2013 11:04 AM
To: users@kafka.apache.org
Subject: Re: Zookeeper vs statics list of brokers
For producers, if you specify ZK connection string, you don't need to specify
b
Hi,
I'm working through a production-level High Level Consumer app and have a
couple of error/shutdown questions to understand how the offset storage is
handled.
Test case - simulate an error writing to destination application, for
example a database, offset is 'lost'
Scenario
- write 500 messag
For producers, if you specify ZK connection string, you don't need to
specify broker.list. Are you using 0.7.2?
Thanks,
Jun
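Illustrative 0.7-style producer configs for the two options (hosts and ports are placeholders; note the 0.7 broker.list format is id:host:port):

```properties
# Option A: discover brokers via ZooKeeper
zk.connect=localhost:2181

# Option B: static broker list, no ZooKeeper needed
broker.list=0:localhost:9092,1:otherhost:9092
```

The two options are mutually exclusive; specify one or the other in the producer's configuration.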
On Tue, Jul 9, 2013 at 6:37 AM, Nandigam, Sujitha wrote:
> For producer
>
> -Original Message-
> From: Jun Rao [mailto:jun...@gmail.com]
> Sent: Tuesday, July 09,
I assume that each consumer instance consumes all 15 topics. Are all your
consumer threads alive? If one of your thread dies, it will eventually
block the consumption in other threads.
Thanks,
Jun
On Tue, Jul 9, 2013 at 4:18 AM, Nihit Purwar wrote:
> Hi,
>
> We are using kafka-0.7.2 with zook
A couple of users seem to be able to get 0.8 working on Windows. Anything
special about your Windows environment? Are you using any JVM plugins?
Thanks,
Jun
On Tue, Jul 9, 2013 at 12:59 AM, Timothy Chen wrote:
> Hi all,
>
> I've tried pushing a large amount of messages into Kafka on Windows,
For 1, I forgot to add: there is an admin tool to reassign replicas, but it
would take longer than leader failover.
Joel
On Tuesday, July 9, 2013, Joel Koshy wrote:
> 1 - no, unless broker4 is not the preferred leader. (The preferred
> leader is the first broker in the assigned replica list). If
For producer
-Original Message-
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Tuesday, July 09, 2013 12:00 AM
To: users@kafka.apache.org
Subject: Re: Zookeeper vs statics list of brokers
Is this for the producer or the consumer?
Thanks,
Jun
On Mon, Jul 8, 2013 at 10:43 AM, Nandigam,
Hi,
We are using kafka-0.7.2 with zookeeper (3.4.5)
Our cluster configuration:
3 brokers on 3 different machines. Each broker machine has a zookeeper instance
running as well.
We have 15 topics defined. We are trying to use them as a queue (JMS-like) by
defining the same group across different ka
Hi all,
I've tried pushing a large amount of messages into Kafka on Windows, and
got the following error:
Caused by: java.io.IOException: The requested operation cannot be performed
on a
file with a user-mapped section open
at java.io.RandomAccessFile.setLength(Native Method)
at
1 - no, unless broker4 is not the preferred leader. (The preferred
leader is the first broker in the assigned replica list). If a
non-preferred replica is the current leader you can run the
PreferredReplicaLeaderElection admin command to move the leader.
2 - The actual leader movement (on leader fa
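For reference, the admin command mentioned above can be invoked via the generic class runner as shipped with 0.8 (the ZooKeeper address is a placeholder):

```
bin/kafka-run-class.sh kafka.admin.PreferredReplicaLeaderElectionCommand \
  --zookeeper localhost:2181
```

Run without a JSON file it triggers preferred-replica election for all partitions; an optional path-to-json-file argument restricts it to specific topic-partitions.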