> On Sun, Apr 13, 2025 at 04:59:54AM GMT, Thomas Munro wrote:
> It's hard to know how to set io_workers=3. If it's too small,
> io_method=worker's small submission queue overflows and it silently
> falls back to synchronous IO. If it's too high, it generates a lot of
> pointless wakeups and scheduling overhead, which might be considered
> an independent problem or not, but having the right size pool
> certainly mitigates it. Here's a patch to replace that GUC with:
>
> io_min_workers=1
> io_max_workers=8
> io_worker_idle_timeout=60s
> io_worker_launch_interval=500ms
>
> It grows the pool when a backlog is detected (better ideas for this
> logic welcome), and lets idle workers time out.
I like the idea. In fact, I've been pondering something like a "smart" configuration for quite some time, and I'm convinced that a similar approach needs to be applied to many performance-related GUCs.

Idle timeout and launch interval serving as a measure of sensitivity makes sense to me, but growing the pool when a backlog is detected (queue_depth > nworkers, so even the slightest backlog?) seems somewhat arbitrary. From what I understand, the pool growing velocity is constant and does not depend on the worker demand (i.e. queue_depth)? It may sound fancy, but I've got the impression it should be possible to apply what's called a "low-pass filter" in control theory (a sort of transfer function with an exponential decay) to smooth out the demand and adjust the worker pool based on that; a toy sketch of what I mean is below.

As a side note, it might be far-fetched, but there are instruments in queueing theory to figure out how many workers are needed to guarantee a certain low queueing probability. For that one needs an average arrival rate (in our case, the average number of IO operations dispatched to workers) and an average service rate (the average number of IO operations performed by a worker).
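To show what I mean by the low-pass filter, here is a rough, self-contained sketch. It's purely my illustration, not code from the patch: alpha, the grow/shrink thresholds, the min/max bounds and the sample series are all invented, just to demonstrate the shape of the idea.

#include <stdio.h>

#define IO_MIN_WORKERS 1
#define IO_MAX_WORKERS 8

static double smoothed_depth = 0.0;     /* filtered demand */
static int    nworkers = IO_MIN_WORKERS;

static void
adjust_pool(int queue_depth)
{
    const double alpha = 0.2;   /* smoothing factor: higher reacts faster */

    /* first-order low-pass filter: y += alpha * (x - y) */
    smoothed_depth += alpha * ((double) queue_depth - smoothed_depth);

    if (smoothed_depth > nworkers && nworkers < IO_MAX_WORKERS)
        nworkers++;             /* sustained backlog: grow the pool */
    else if (smoothed_depth < nworkers / 2.0 && nworkers > IO_MIN_WORKERS)
        nworkers--;             /* sustained slack: shrink the pool */
}

int
main(void)
{
    /* a burst of demand followed by an idle period */
    int samples[] = {0, 1, 9, 12, 10, 11, 2, 0, 0, 0, 0, 0};
    int i;

    for (i = 0; i < (int) (sizeof(samples) / sizeof(samples[0])); i++)
    {
        adjust_pool(samples[i]);
        printf("depth=%2d  smoothed=%5.2f  workers=%d\n",
               samples[i], smoothed_depth, nworkers);
    }
    return 0;
}

The point is only that the pool size reacts to the smoothed demand, so a single spike doesn't launch a worker and a single idle tick doesn't retire one; the launch interval and idle timeout would then act on top of a much less noisy signal.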
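As for the queueing theory side note, the classic instrument would be the Erlang C formula, which gives the probability that a newly arrived request has to wait in an M/M/c system with c workers and offered load a = lambda / mu. A back-of-the-envelope calculation like the following (the rates are made up, and none of this is in the patch) prints the smallest pool size that keeps the queueing probability under a target:

#include <stdio.h>

/*
 * Probability that an arriving request has to queue in an M/M/c system
 * (Erlang C), with c servers and offered load a = lambda / mu.
 * Requires a < c for the system to be stable.
 */
static double
erlang_c(int c, double a)
{
    double sum = 0.0;
    double term = 1.0;          /* a^k / k!, starting at k = 0 */
    double last;
    int    k;

    for (k = 0; k < c; k++)
    {
        sum += term;
        term *= a / (k + 1);
    }
    /* term is now a^c / c! */
    last = term / (1.0 - a / c);

    return last / (sum + last);
}

int
main(void)
{
    double lambda = 4000.0;     /* IOs dispatched per second (made up) */
    double mu = 700.0;          /* IOs one worker completes per second (made up) */
    double target = 0.01;       /* acceptable probability of queueing */
    double a = lambda / mu;     /* offered load, in "workers worth" of IO */
    int    c;

    /* the smallest stable pool is the first c strictly greater than a */
    for (c = (int) a + 1; c <= 64; c++)
    {
        double p = erlang_c(c, a);

        printf("workers=%d  P(arriving IO must wait)=%.4f\n", c, p);
        if (p <= target)
            break;
    }
    return 0;
}

Of course this assumes Poisson arrivals and exponential service times, which is a stretch for disk IO, but it could at least serve as a sanity check for the default io_max_workers.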