Yeah okay never mind I misunderstood. I'm not sure of the cause either--I
didn't see that in my testing.
The spindle thing tends to get overestimated since our writes are async.
There is some perf hit from having multiple partitions (maybe 10-20%) but
mostly the OS does a good job of scheduling writes.
Hi Jay,
For issue #1, I will file a JIRA so the community or dev team can take
a look at it.
For my second question, the scenario goes like this:
#Test_1
$ bin/kafka-topics.sh --zookeeper 192.168.1.1:2181 --create --topic
test1 --partitions 3 --replication-factor 1
Created topic "test1".
$ time
In general increasing message size should increase bytes/sec throughput
since much of the work is on a per-message basis. I think the question
remains why raising the buffer size with fixed message size would drop the
throughput. Sounds like a bug if you can reproduce it consistently. Want to
file a JIRA?
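A toy model (my own sketch, not Kafka code) of why bytes/sec should grow with message size when much of the cost is per-message: the fixed per-message overhead gets amortized over more bytes. The cost constants are made up for illustration.

```python
# Toy throughput model: each message pays a fixed per-message cost plus a
# small per-byte cost. Larger messages amortize the fixed cost, so
# bytes/sec rises with message size. Constants are illustrative only.
def throughput_bytes_per_sec(msg_size, per_msg_cost_us=20.0, per_byte_cost_us=0.005):
    cost_us = per_msg_cost_us + msg_size * per_byte_cost_us  # time per message
    msgs_per_sec = 1e6 / cost_us
    return msgs_per_sec * msg_size

# Bigger messages -> higher bytes/sec under this model.
assert throughput_bytes_per_sec(1000) > throughput_bytes_per_sec(100)
```

This is only a back-of-the-envelope argument; it says nothing about why a larger buffer would hurt at a fixed message size.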
Hi Jay,
Thanks for the response.
The second command was indeed a typo. It should have been
bin/kafka-run-class.sh
org.apache.kafka.clients.tools.ProducerPerformance test1 5000 100
-1 acks=1 bootstrap.servers=192.168.1.1:9092 buffer.memory=134217728
batch.size=8192
And the throughput would drop.
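For reference, the buffer.memory value in the command above, in bytes (plain arithmetic, not Kafka output):

```python
# buffer.memory used in the ProducerPerformance run above
buf = 134217728
assert buf == 128 * 1024 * 1024   # i.e. 128 MiB
doubled = 2 * buf                  # doubling it would mean 268435456 (256 MiB)
print(doubled)
```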
The second command you give actually doesn't seem to double the memory
(maybe just a typo?). I can't explain why doubling buffer memory would
decrease throughput. The only effect of adding memory would be if you run
out, and then running out of memory would cause you to block and hence
lower throughput.
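A minimal sketch of that blocking behavior, using Python's `queue.Queue` as a stand-in for the producer's memory pool (purely illustrative, not Kafka's actual implementation): once the bounded buffer is full, the sender stalls, which is what would lower throughput.

```python
import queue

# Stand-in for buffer.memory: a tiny bounded buffer, capacity 2 messages.
buf = queue.Queue(maxsize=2)
buf.put("m1")
buf.put("m2")

blocked = False
try:
    # Buffer is full: put() blocks until space frees up (here it times out).
    buf.put("m3", timeout=0.01)
except queue.Full:
    blocked = True

print(blocked)  # True: a full buffer stalls the producer
```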