I would write a Python client that writes dummy data to Kafka, to measure how
fast you can write to Kafka without MongoDB in the mix. I've been doing load
testing recently, and with 3 brokers I can write 100 MB/s (using Java clients).
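A minimal sketch of what such a dummy-data benchmark could look like with kafka-python. The broker address, topic name, and producer settings below are illustrative assumptions, not something from this thread; tune them for your cluster:

```python
import os
import time


def make_payload(size_bytes):
    """Build one dummy message of the given size (random bytes)."""
    return os.urandom(size_bytes)


def run_benchmark(bootstrap="localhost:9092", topic="loadtest",
                  msg_size=1024, num_msgs=100_000):
    """Send num_msgs dummy messages and print the achieved throughput."""
    from kafka import KafkaProducer  # pip install kafka-python

    producer = KafkaProducer(
        bootstrap_servers=bootstrap,
        acks=1,                 # leader-only ack: trades durability for speed
        linger_ms=5,            # allow small batches to accumulate
        batch_size=64 * 1024,   # larger batches amortize per-request cost
    )
    payload = make_payload(msg_size)
    start = time.time()
    for _ in range(num_msgs):
        producer.send(topic, payload)
    producer.flush()            # wait until everything is actually on the wire
    elapsed = time.time() - start
    mb = msg_size * num_msgs / 1e6
    print(f"{mb:.1f} MB in {elapsed:.1f}s -> {mb / elapsed:.1f} MB/s")
```

Calling `run_benchmark()` against a real cluster gives a baseline for the producer side alone, which is the point of the exercise: if this number is already low, the bottleneck is the client or the brokers, not MongoDB.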

-Dave

-----Original Message-----
From: Dominik Safaric [mailto:dominiksafa...@gmail.com]
Sent: Thursday, August 25, 2016 11:51 AM
To: users@kafka.apache.org
Subject: Re: Kafka Producer performance - 400GB of transfer on single instance 
taking > 72 hours?

Dear Dana,

> I would recommend
> other tools for bulk transfers.


What tools/languages would you recommend rather than Python?

I could for sure accomplish the same by using the native Java Kafka Producer 
API, but should this really affect the performance under the assumption that 
the Kafka configuration stays as is?

> On 25 Aug 2016, at 18:43, Dana Powers <dana.pow...@gmail.com> wrote:
>
> python is generally restricted to a single CPU, and kafka-python will
> max out a single CPU well before it maxes a network card. I would
> recommend other tools for bulk transfers. Otherwise you may find that
> partitioning your data set and running separate python processes for
> each will increase the overall CPU available and therefore the throughput.
>
> One day I will spend time improving the CPU performance of
> kafka-python, but probably not in the near term.
>
> -Dana
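Dana's partitioning suggestion above could be sketched roughly as follows. The `produce_range` body and the worker counts are illustrative assumptions; the idea is only that each process gets its own CPU and its own producer:

```python
from multiprocessing import Process


def split_counts(total, workers):
    """Divide total messages as evenly as possible across workers."""
    base, rem = divmod(total, workers)
    return [base + (1 if i < rem else 0) for i in range(workers)]


def produce_range(worker_id, num_msgs):
    # Placeholder: each worker would create its own KafkaProducer here
    # (producers must not be shared across processes) and send its slice.
    pass


def run_workers(num_workers=4, total_msgs=1_000_000):
    """Spawn one producer process per slice of the data set."""
    procs = [Process(target=produce_range, args=(i, n))
             for i, n in enumerate(split_counts(total_msgs, num_workers))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

Since each process is limited by a single CPU in CPython, running N such workers scales the available CPU (and hence kafka-python throughput) roughly N-fold, up to the network or broker limit.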

