Hi,
I sent this as a JIRA (1) but have had no response so far.
I tried it with 0.8.3 instead of 0.8.1.1 and the behaviour seems to be the same.
I think that the main problem is that I am instantiating consumers
very quickly, one after another, in a loop, and it seems that the
automatic rebalancing does
Great! I'd love to see this move forward, especially if the design allows
for per-key conditionals sometime in the future – doesn't have to be in the
first iteration.
On Tue, Jul 14, 2015 at 5:26 AM Ben Kirwin b...@kirw.in wrote:
Ah, just saw this. I actually just submitted a patch this evening
Thanks, I'm on 0.8.2 so that explains it.
Should retention.ms affect segment rolling? In my experiment it did
(retention.ms = -1), which was unexpected since I thought only segment.bytes
and segment.ms would control that.
On Mon, Jul 13, 2015 at 7:57 PM, Daniel Tamai daniel.ta...@gmail.com
Yes, it is "a list of Kafka server host/port pairs to use for establishing
the initial connection to the Kafka cluster":
https://kafka.apache.org/documentation.html#newproducerconfigs
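For illustration, the comma-separated host:port form can be split up like this (a minimal sketch; the helper name is made up, not a Kafka API):

```python
def parse_bootstrap_servers(value):
    """Split a bootstrap.servers string ("host1:port1,host2:port2") into pairs."""
    pairs = []
    for entry in value.split(","):
        host, _, port = entry.strip().rpartition(":")
        pairs.append((host, int(port)))
    return pairs

print(parse_bootstrap_servers("esv4-hcl198.grid.linkedin.com:9092"))
# [('esv4-hcl198.grid.linkedin.com', 9092)]
```

Note the list is only used for the initial discovery; the client learns the full cluster from the metadata it gets back, so the list does not need to contain every broker.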
On Tue, Jul 14, 2015 at 7:29 PM, Yuheng Du yuheng.du.h...@gmail.com wrote:
Does anyone know what is bootstrap.servers=
You can also check out Klogger (https://github.com/blackberry/Klogger), which
will take input from a TCP port or a file.
Todd.
-Original Message-
From: Jason Gustafson [mailto:ja...@confluent.io]
Sent: Monday, July 13, 2015 20:09
To: users@kafka.apache.org
Subject: Re: Kafka producer
I am trying to replace ActiveMQ with Kafka in our environment; however, I
have encountered a strange problem that basically prevents us from using
Kafka in production. The problem is that sometimes the offsets are not
committed.
I am using Kafka 0.8.2.1, offset storage = kafka, high level consumer,
Thanks. If I set acks=1 in the producer config options in
bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance
test7 5000 100 -1 acks=1 bootstrap.servers=
esv4-hcl198.grid.linkedin.com:9092 buffer.memory=67108864 batch.size=8196,
does that mean for each message
Also, I guess setting the target throughput to -1 means letting it go as
fast as possible?
On Tue, Jul 14, 2015 at 10:36 AM, Yuheng Du yuheng.du.h...@gmail.com
wrote:
Thanks. If I set acks=1 in the producer config options in
bin/kafka-run-class.sh
Can you take a look at the Kafka commit rate MBean on your consumer?
Also, can you consume the offsets topic while you are committing
offsets and see if/what offsets are getting committed?
(http://www.slideshare.net/jjkoshy/offset-management-in-kafka/32)
Thanks,
Joel
On Tue, Jul 14, 2015 at
Thanks, Joel, I will, but regardless of my findings the basic problem will
still be there: there is no guarantee that the offsets will be committed
after commitOffsets, because commitOffsets does not return its exit status,
nor, as I understand it, does it block until the offsets are committed. In other
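One common way to work around a fire-and-forget commit API is to verify and retry. A minimal sketch follows; the `commit` and `fetch_committed` callables are stand-ins for the consumer's commitOffsets() and an offset-fetch request, not the real consumer API:

```python
import time

def commit_with_verification(commit, fetch_committed, expected,
                             retries=3, delay=1.0):
    """Call commit(), then check fetch_committed() against the expected offsets.

    Retries until the committed offsets match what we expect, or gives up.
    """
    for _ in range(retries):
        commit()
        if fetch_committed() == expected:
            return True
        time.sleep(delay)
    return False

# Toy usage with an in-memory stand-in for the offsets topic:
state = {"committed": None}
ok = commit_with_verification(lambda: state.update(committed={"p0": 42}),
                              lambda: state["committed"], {"p0": 42}, delay=0)
print(ok)  # True
```

This does not make the commit itself reliable, but it turns a silent failure into something the application can detect and retry.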
Does anyone know what is bootstrap.servers=
esv4-hcl198.grid.linkedin.com:9092 means in the following test command:
bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance
test7 5000 100 -1 acks=1 bootstrap.servers=
esv4-hcl198.grid.linkedin.com:9092 buffer.memory=67108864
@Gwen
I am having a very very similar issue where I am attempting to send a
rather small message and it's blowing up on me (my specific error is:
Invalid receive (size = 1347375956 larger than 104857600)). I tried to
change the relevant settings but it seems that this particular request is
of 1340
This is interesting. We have seen something similar internally at LinkedIn
with one particular topic (and Avro schema), and only once in a while.
We've seen it happen 2 or 3 times so far. We had chalked it up to bad
content in the message, figuring that the sender was doing something like
sending
Hmm, yeah, some error logs would be nice, like Gwen pointed out. Do any of
your brokers fall out of the ISR when sending messages? It seems like your
setup should be fine, so I'm not entirely sure.
On Tue, Jul 14, 2015 at 1:31 PM, Yuheng Du yuheng.du.h...@gmail.com wrote:
Jiefu,
I am performing
Hi,
I'm playing with the new Kafka producer to see if it could fit my use case
(Kafka version 0.8.2.1).
I'll probably end up having a Kafka cluster of 5 nodes across multiple
datacenters, with one topic, a replication factor of 2, and at least 10
partitions required for consumer performance. (I'll
Someone may correct me if I'm wrong, but how much disk space do you
have on these nodes? Your exception 'No space left on device' from one of
your brokers seems to suggest that you're full (after all, you're writing
500 million records). If this is the case, I believe the expected behavior
for
I checked the logs on the brokers; it seems that the ZooKeeper or the Kafka
server process is not running on this broker... Thank you, guys. I will see
if it happens again.
On Tue, Jul 14, 2015 at 4:53 PM, JIEFU GONG jg...@berkeley.edu wrote:
Hmm..yeah some error logs would be nice like Gwen
I think the ProducerPerformance microbenchmark only measures from clients
to brokers (producers to brokers) and provides latency information.
On Tue, Jul 14, 2015 at 11:05 AM, Yuheng Du yuheng.du.h...@gmail.com
wrote:
Currently, the latency test from Kafka tests the end-to-end latency between
Also, the log in another broker (not the bootstrap) says:
[2015-07-14 15:18:41,220] FATAL [Replica Manager on Broker 1]: Error
writing to highwatermark file: (kafka.server.ReplicaManager)
[2015-07-14 15:18:40,160] ERROR Closing socket for /130.127.133.47 because
of error (kafka.network.Process
Hi Jiefu, Gwen,
I am running the Throughput versus stored data test:
bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance
test 500 100 -1 acks=1 bootstrap.servers=
esv4-hcl198.grid.linkedin.com:9092 buffer.memory=67108864 batch.size=8196
After around 50,000,000
But is there a way to let Kafka overwrite the old data if the disk is
full? Or is it not necessary to use this figure? Thanks.
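For reference, Kafka does reclaim space by deleting old log segments once retention limits are reached. The broker settings look roughly like this; this is a sketch of the 0.8.x-era names, so check the broker-config documentation for the exact names and defaults in your version:

```properties
# Delete old log segments once either limit is reached:
log.retention.hours=168          # time-based retention
log.retention.bytes=1073741824   # size-based retention, applied per partition
```

With limits like these, old data is removed rather than overwritten in place, so the disk never needs to fill up.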
On Tue, Jul 14, 2015 at 10:14 PM, Yuheng Du yuheng.du.h...@gmail.com
wrote:
Jiefu,
I agree with you. I checked the hardware specs of my machines, each one of
them
Jiefu,
I agree with you. I checked the hardware specs of my machines, each one of
them has:
RAM: 256 GB ECC memory (16x 16 GB DDR4 1600 MT/s dual-rank RDIMMs)
Disk: two 1 TB 7.2K RPM 3G SATA HDDs
For the throughput versus stored data test, it uses 5*10^10 messages, which
has the total volume
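A quick back-of-the-envelope check, assuming the 100-byte record size from the test command:

```python
# Rough disk-capacity check for the "throughput versus stored data" run.
records = 5 * 10**10          # 5*10^10 messages, as stated above
record_size = 100             # bytes per record, per the test command line
total_tb = records * record_size / 10**12
disk_tb = 2 * 1.0             # two 1 TB drives per broker, before any overhead
print(total_tb, disk_tb)      # 5.0 2.0 -- the data alone is 2.5x the raw disk
```

So running out of disk partway through is expected on this hardware unless retention deletes old segments as the test runs.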
@Jiefu, yes! The patch is functional, I think it's just waiting on a bit of
final review after the last round of changes. You can definitely use it for
your own benchmarking, and we'd love to see patches for any additional
tests we missed in the first pass!
-Ewen
On Tue, Jul 14, 2015 at 10:53
The OOME issue may be caused
by org.apache.kafka.clients.producer.internals.ErrorLoggingCallback holding
an unnecessary byte[] value. Can you apply the patch in the JIRA below and
try again?
https://issues.apache.org/jira/browse/KAFKA-2281
On Wed, 15 Jul 2015 at 06:42 francesco vigotti
Hi Team,
Currently, I am able to fetch the topic, partition, leader, and log size
through the TopicMetadataRequest API available in Kafka.
Is there any Java API that gives me the consumer groups?
Best Regards,
Swati Suman
I've not explained why only 10 partitions; this is because more partitions
do not speed up the producer, because of this memory-monitoring problem,
because I have no problems on the consumer side at the moment (10 should be
enough, even if I've not fully tested it yet), and because
Jiefu,
Now even if the disk space is enough (less than 18% used), when I run it,
it still gives me an error, where the logs say:
[2015-07-14 23:08:48,735] FATAL Fatal error during KafkaServerStartable
startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
Hi,
I am running the performance test for Kafka:
https://gist.github.com/jkreps/c7ddb4041ef62a900e6c
For the Three Producers, 3x async replication scenario, the command is
the same as one producer:
bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance
test 5000 100 -1
So I'm trying to make a request with a simple ASCII text file, but what's
strange is that even if I change which files I send, or the contents of the
file, I get the same error message, down to the specific number of bytes of
the message, which seems weird if I'm changing the content. Should I be
using Avro
I am not familiar with Apache Bench. Can you share more details on
what you are doing?
On Tue, Jul 14, 2015 at 11:45 AM, JIEFU GONG jg...@berkeley.edu wrote:
So I'm trying to make a request with a simple ASCII text file, but what's
strange is even if I change files to send or the contents of
Yuheng,
Yes, if you read the blog post it specifies that he's using three separate
machines. There's no reason the producers cannot be started at the same
time, I believe.
On Tue, Jul 14, 2015 at 11:42 AM, Yuheng Du yuheng.du.h...@gmail.com
wrote:
Hi,
I am running the performance test for
You need to run 3 of those at the same time. We don't expect any
errors, but if you run into anything, let us know and we'll try to
help.
Gwen
On Tue, Jul 14, 2015 at 11:42 AM, Yuheng Du yuheng.du.h...@gmail.com wrote:
Hi,
I am running the performance test for kafka.
This is almost certainly a client bug. Kafka's request format is
size-delimited messages of the form:
[4-byte size][N-byte payload]
If the client sends a request with an invalid size, or sends a partial
request, the server will see effectively random bytes from the next request
as the size of the
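That framing can be sketched in a few lines. It also explains the "Invalid receive (size = 1347375956 ...)" error quoted earlier: the ASCII bytes of "POST" (e.g. from an HTTP tool sending a POST to the broker port) decode to exactly that number when read as a 4-byte big-endian size. The `frame` helper below is illustrative, not Kafka code:

```python
import struct

# Kafka frames every request as a 4-byte big-endian size followed by the payload.
def frame(payload):
    return struct.pack(">i", len(payload)) + payload

# A non-Kafka client's bytes get misread as the frame size. The first four
# ASCII bytes of an HTTP POST request decode to the number from the error.
bogus_size = struct.unpack(">i", b"POST")[0]
print(bogus_size)  # 1347375956
```

That the "size" is stable across different file contents is consistent with this: the broker is decoding the fixed request-line prefix, not the file body.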
Jiefu,
Thank you. I know the three producers can run at the same time; I mean,
should they be started at exactly the same time? (I have three consoles, one
from each of the three machines, and I just start the console command
manually one by one.) Or does a small variation in the starting time not
matter?
Gwen
Actually, how are you committing offsets? Are you using the old
ZookeeperConsumerConnector or the new KafkaConsumer?
It is true that the current APIs don't return any result, but it would
help to check if anything is getting into the offsets topic - unless
you are seeing errors in the logs, the
I am using ZookeeperConsumerConnector
Actually, I set up a consumer for __consumer_offsets the way you suggested,
and now I cannot reproduce the situation any longer. Offsets are committed
every time.
On Tue, Jul 14, 2015 at 1:49 PM, Joel Koshy jjkosh...@gmail.com wrote:
Actually, how are you
Just caught this error again. I issue commitOffsets: no error, but no
committed offsets either. Watching __consumer_offsets shows no new messages
either. Then, a few minutes later, I issue commitOffsets again and all are
committed. Unless I am doing something terribly wrong, this is very
unreliable.
On Tue,
Yuheng,
I would recommend looking here:
http://kafka.apache.org/documentation.html#brokerconfigs and scrolling down
to get a better understanding of the default settings and what they mean --
it'll tell you what the different options for acks do.
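As a quick summary for the 0.8.2-era new producer (the mapping below is just an illustrative table, not an API):

```python
# What the producer waits for before considering a send acknowledged:
ACKS_SEMANTICS = {
    "0": "no acknowledgement; the producer does not wait at all",
    "1": "the leader has written the record to its local log",
    "-1": "the full set of in-sync replicas has acknowledged the record",
}
for setting, meaning in ACKS_SEMANTICS.items():
    print(f"acks={setting}: {meaning}")
```

So acks=1 in the test command means each batch is acknowledged as soon as the leader has it, without waiting for the replicas.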
Ewen,
Thank you immensely for your thoughts, they
Currently, the latency test from Kafka tests the end-to-end latency between
producers and consumers.
Is there a way to test the producer-to-broker and broker-to-consumer
delays separately?
Thanks.