[ 
https://issues.apache.org/jira/browse/KAFKA-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flower.min updated KAFKA-8106:
------------------------------
    Description:     (was: We performance-tested Kafka in the specific 
scenario described below. We built a Kafka cluster with a single broker and 
created topics with different numbers of partitions; we then started many 
producer processes that sent large volumes of messages to one of the topics in 
each test.

*_Specific scenario_*
 # *_Server:_* CPU: 2*16; MemTotal: 256G; Ethernet controller: Intel 
Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection; SSD.
 # *_Topics:_* topic1: 50 partitions, topic2: 100 partitions, topic3: 200 
partitions, ..., 2000 partitions
 # *_Size of a single message:_* 1024B

*_Config of KafkaProducer:_*
 # *_compression.type:_* lz4
 # *_linger.ms:_* 1000ms/2000ms/5000ms
 # *_batch.size:_* 16384B/10240B/102400B
 # *_buffer.memory:_* 134217728B
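For reference, the configuration above can be sketched with the Java client's 
property names. This is a minimal illustration, not the exact test harness: the 
`bootstrap.servers` address and class name are placeholders, and the values pick 
one of the tested combinations.

```java
import java.util.Properties;

public class ProducerConfigSketch {
    // Builds producer properties for one of the tested combinations.
    static Properties testedProducerConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-host:9092"); // placeholder address
        props.put("compression.type", "lz4");               // lz4, as tested
        props.put("linger.ms", "1000");                     // also tested: 2000, 5000
        props.put("batch.size", "16384");                   // also tested: 10240, 102400
        props.put("buffer.memory", "134217728");            // 128 MiB send buffer
        return props;
    }

    public static void main(String[] args) {
        Properties p = testedProducerConfig();
        System.out.println("compression.type=" + p.getProperty("compression.type"));
    }
}
```

These properties would be passed to `new KafkaProducer<>(props)` along with 
key/value serializers in a real test.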

*_Best result of the performance testing:_*
 # *_Performance:_* 23000000 messages/s.
 # *_Resource usage:_* network inflow rate: 550MB/s~610MB/s; CPU: 97%~99%; 
disk write speed: 550MB/s~610MB/s.

_*Phenomenon and my doubt:*_

The CPU usage has reached its upper limit, but the network bandwidth of the 
server has not. We are unsure what is costing so much CPU time, and we want to 
improve performance and reduce the CPU usage of the Kafka server.

 

 

 )

> Remove unnecessary decompression operation when logValidator  do validation.
> ----------------------------------------------------------------------------
>
>                 Key: KAFKA-8106
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8106
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients, core
>    Affects Versions: 2.1.1
>            Reporter: Flower.min
>            Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
