On 8/14/2020 9:04 AM, Ben Spencer wrote:
> After a little investigation, I didn't find any recent information on how well (or how linearly) 389 scales from a CPU perspective. I also realize this is a complicated topic with many factors that play into it.

> Throwing the basic question out there: does 389 scale fairly linearly as the number of CPUs is increased? Is there a point where the scaling drops off?

Cached reads (cached anywhere: filesystem cache, DB page pool, entry cache) should scale quite well, at least up to 4/6/8 CPUs. I'm not sure about today's 8+ CPU systems, but I would assume scaling beyond 8 is probably not great until proven otherwise.
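
If you want to sanity-check that reads really are served from cache, the cn=monitor entries under the ldbm database plugin expose cache hit counters. A rough Python sketch of what I mean, using the ldap3 module and assuming a backend named userRoot plus a placeholder host and credentials (attribute names can vary a bit between versions):

from ldap3 import Server, Connection, BASE

# Illustrative only: query the ldbm monitor entries for cache statistics.
server = Server("ldap://ds1.example.com")          # placeholder host
conn = Connection(server, "cn=Directory Manager", "secret", auto_bind=True)

# Global DB page cache statistics
conn.search("cn=monitor,cn=ldbm database,cn=plugins,cn=config",
            "(objectClass=*)", search_scope=BASE,
            attributes=["dbcachehitratio", "dbcachehits", "dbcachetries"])
print(conn.entries[0])

# Per-backend entry cache statistics (userRoot assumed)
conn.search("cn=monitor,cn=userRoot,cn=ldbm database,cn=plugins,cn=config",
            "(objectClass=*)", search_scope=BASE,
            attributes=["entrycachehitratio", "currententrycachesize",
                        "maxentrycachesize"])
print(conn.entries[0])
conn.unbind()

If the hit ratios are already close to 100%, adding CPUs only helps the read side, not the writes.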

Writes are going to be heavily serialized, so assume no CPU scaling there. Fast I/O is what you need for write throughput.
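
As a crude way to see what the storage can sustain, you can measure fsync latency on the filesystem that holds the database, since sustained write throughput is roughly bounded by how fast synchronous commits complete. A rough Python sketch (the path is just an example; a real tool like fio will give you better numbers):

import os, time

# Illustrative fsync-latency probe: every durable LDAP write ends up
# waiting on a synchronous commit, so commit latency caps write throughput.
path = "/var/lib/dirsrv/fsync_probe.tmp"   # example path -- point it at the DB filesystem
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
samples = 200
start = time.perf_counter()
for _ in range(samples):
    os.pwrite(fd, b"x" * 512, 0)   # small write, same block each time
    os.fsync(fd)                   # force it to stable storage
elapsed = time.perf_counter() - start
os.close(fd)
os.remove(path)
print("avg fsync latency: %.2f ms (~%d synchronous commits/sec upper bound)"
      % (elapsed / samples * 1000, samples / elapsed))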

> Where am I going with this?
> We are faced with either adding more CPUs to the existing servers, adding more instances, or a combination of the two. The current servers have 10 CPUs, with the entire database fitting in RAM, but there is a regular flow of writes, sometimes fairly heavy thanks to batch updates. My gut feeling says to run more servers rather than a few huge ones, largely because of the writes/updates and lock contention, while also keeping server sprawl in check.

I'd look at whether I/O throughput (write IOPS in particular) can be upgraded as a first step. Then perhaps look at the system design to see whether the batch updates can be throttled or trickled in to reduce the cross-traffic interference. The write load is usually the limiting factor for scaling because every write has to be replayed on every server, regardless of that server's read workload.
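
For the batch updates, even a simple client-side throttle spreads the replication traffic out instead of dumping it all at once. A rough sketch of the idea in Python with ldap3 (host, bind DN, rate, and the changes themselves are all placeholders):

import time
from ldap3 import Server, Connection, MODIFY_REPLACE

# Illustrative throttled batch writer: trickle changes in at a fixed rate
# so replication and normal read traffic aren't starved during the batch.
MAX_WRITES_PER_SEC = 50   # placeholder rate -- tune to what your replicas absorb

server = Server("ldap://ds1.example.com")
conn = Connection(server, "cn=Directory Manager", "secret", auto_bind=True)

# Made-up example changes
changes = [
    ("uid=user%d,ou=People,dc=example,dc=com" % i,
     {"mail": [(MODIFY_REPLACE, ["user%[email protected]" % i])]})
    for i in range(1000)
]

interval = 1.0 / MAX_WRITES_PER_SEC
for dn, mods in changes:
    start = time.monotonic()
    conn.modify(dn, mods)
    # sleep out the remainder of this write's time slot
    time.sleep(max(0.0, interval - (time.monotonic() - start)))
conn.unbind()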
