
Not sure whether it's relevant in the context of your case, but you could 
perhaps try tweaking the maximum amount of data Kafka fetches per partition 
on each poll.
You can do this by setting a higher value for “max.partition.fetch.bytes” in 
the config properties you provide when instantiating the consumer; those 
properties directly configure the internal Kafka clients.
In general, all Kafka client settings can be passed through the provided 
config properties, so you could also take a look at the Kafka docs to see 
what else there is to tune for the clients.
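As a rough sketch of what that looks like (broker address, group id, and the 8 MB value below are illustrative, not recommendations), you would bump the property in the same Properties object you hand to the consumer:

```java
import java.util.Properties;

public class ConsumerProps {

    static Properties buildConsumerProps() {
        Properties props = new Properties();
        // Illustrative connection settings
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "benchmark-group");
        // Raise the per-partition fetch cap from the 1 MB default
        // to 8 MB (example value; tune it to your record sizes)
        props.setProperty("max.partition.fetch.bytes",
                Integer.toString(8 * 1024 * 1024));
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildConsumerProps();
        // These same props would then be passed straight into the
        // Flink Kafka consumer constructor, along with the topic
        // name and deserialization schema.
        System.out.println(props.getProperty("max.partition.fetch.bytes"));
    }
}
```

Unrecognized keys are simply forwarded to the underlying Kafka client, which is why this works for any Kafka consumer setting.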

On March 30, 2017 at 6:11:27 PM, Kamil Dziublinski 
(kamil.dziublin...@gmail.com) wrote:

I'm wondering what I can tweak further to increase this. I was reading in this 
blog: https://data-artisans.com/extending-the-yahoo-streaming-benchmark/
about 3 million per sec with only 20 partitions. So I'm sure I should be able 
to squeeze more out of it.
