On Monday, 28 October 2024 at 20:53:32 UTC, Salih Dincer wrote:
On Monday, 28 October 2024 at 19:57:41 UTC, Kyle Ingraham wrote:

- Polling too infrequently killed performance, and polling too often wrecked CPU usage.
- Using message passing reduced performance quite a bit.
- Batching reads was hard because it was tricky to balance performance for single requests against performance for streams of them.

Semaphore?

https://demirten-gitbooks-io.translate.goog/linux-sistem-programlama/content/semaphore/operations.html?_x_tr_sl=tr&_x_tr_tl=en&_x_tr_hl=tr&_x_tr_pto=wapp

SDB@79

I went back to try using a semaphore and ended up using a mutex, an event, and a lock-free queue. My aim was to limit the number of vibe.d events emitted, to hopefully reduce event loop overhead. It works as follows (rough sketch after the list):

- Requests come in on the Unit thread and are added to the lock-free queue.
- The Unit thread tries to obtain the mutex. If it cannot, it assumes request processing is in progress on the vibe.d thread and does not emit an event.
- The vibe.d thread waits on an event. Once it arrives, it obtains the mutex and pulls from the lock-free queue until it is empty.
- Once the queue is empty, the vibe.d thread releases the mutex and waits for another event.
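
Roughly, in D (only a sketch of the idea, not my actual code: `RequestQueue`, `emitWakeup`, `waitWakeup` and `handle` are placeholders for the real lock-free queue, the vibe.d event and the request handler, and I'm assuming the Unit thread releases the mutex before emitting when `tryLock` succeeds, which the list above doesn't spell out):

```d
import core.sync.mutex : Mutex;

struct Request { /* request payload elided */ }

// Placeholder for whatever lock-free queue library is actually used.
interface RequestQueue
{
    void push(Request req);
    bool tryPop(out Request req);
}

__gshared Mutex processingMutex;
__gshared RequestQueue requestQueue; // created from the real queue type elsewhere

shared static this() { processingMutex = new Mutex; }

// Placeholders for the vibe.d event used to wake the consumer task.
void emitWakeup();
void waitWakeup();
void handle(Request req);

// Producer side: runs on the Unit thread for every incoming request.
void onUnitRequest(Request req)
{
    requestQueue.push(req);

    // If the vibe.d thread already holds the mutex it is draining the
    // queue and will see this request, so skip emitting an event.
    if (processingMutex.tryLock())
    {
        processingMutex.unlock();
        emitWakeup();
    }
}

// Consumer side: runs on the vibe.d thread.
void processRequests()
{
    for (;;)
    {
        waitWakeup();

        processingMutex.lock();
        scope (exit) processingMutex.unlock();

        // Drain everything that accumulated since the last wakeup.
        Request req;
        while (requestQueue.tryPop(req))
            handle(req);
    }
}
```

The point of the `tryLock` check is that while the vibe.d thread is draining the queue, producers skip the emit entirely, which is what moved the requests-per-event ratio from 1:1 to roughly 10:1.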

This approach increased the ratio of requests processed to events emitted/waited from 1:1 to 10:1. It had no impact on event loop overhead, however. The entire program still spends ~50% of its runtime in this function: https://github.com/vibe-d/eventcore/blob/0cdddc475965824f32d32c9e4a1dfa58bd616cc9/source/eventcore/drivers/posix/cfrunloop.d#L38. I'll see if I can get images of my profiling posted here. I'm sure I'm missing something obvious.
