lhotari commented on issue #14233:
URL: https://github.com/apache/pulsar/issues/14233#issuecomment-1038707611


   > The three components (ZooKeeper, BookKeeper and the broker) are deployed on 
the same server (8-core CPU, 16 GB memory and a 7200 RPM HDD)
   
   > Finally, my question is: in this case, has the write capacity of the Pulsar 
server's hard disk reached a bottleneck that prevents the producer's throughput 
from improving?
   
   It is crucial for performance to use fast SSDs for ZooKeeper and BookKeeper. 
Using spinning HDDs with Pulsar is not recommended, since performance can suffer 
significantly.
   
   If you cannot afford SSDs for all data, you can use HDDs for the BookKeeper 
ledger disks: BookKeeper supports separate journal and ledger disks, so the 
latency-sensitive journal can stay on an SSD while bulk ledger storage goes to 
HDDs. When using HDDs on bare metal, a battery-backed write cache (BBWC) setup 
drastically improves write throughput and reduces write latency.
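   
   The journal/ledger split above can be sketched in `bookkeeper.conf`; the paths 
here are placeholders, assuming an SSD mounted at `/mnt/ssd` and two HDDs at 
`/mnt/hdd1` and `/mnt/hdd2`:
   
   ```properties
   # Latency-sensitive write-ahead journal on the fast SSD
   journalDirectories=/mnt/ssd/bookkeeper/journal
   # Bulk ledger storage on the cheaper HDDs (comma-separated list)
   ledgerDirectories=/mnt/hdd1/bookkeeper/ledgers,/mnt/hdd2/bookkeeper/ledgers
   ```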
   
   Besides Disk IO bottlenecks, there can be multiple other reasons for 
performance problems. Jack Vanlightly also has blog posts about this: 
https://medium.com/splunk-maas/the-importance-of-mental-models-for-incident-response-da1bfcfdcacd
 . Having Prometheus/Grafana dashboards will help you understand the bottlenecks 
and what is needed to resolve the issues.
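   
   As a quick first check for a Disk IO bottleneck, per-device utilization can be 
sampled with `iostat` from the sysstat package; this is a generic diagnostic 
sketch, not Pulsar-specific:
   
   ```shell
   # Extended per-device stats in MB, refreshed every 5 seconds.
   # Sustained %util near 100 and high w_await on the journal device
   # suggest the disk, not Pulsar itself, is the bottleneck.
   iostat -xm 5
   ```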


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
