On Mon, 2023-03-13 at 12:24 -0400, Harrison Borges wrote:
> I’m running into severe performance problems with Postgres as I increase
> the number of concurrent requests against my backend. I’ve identified
> that the bottleneck is Postgres, and to simplify the test case, I created
> an endpoint that only does a count query on a table with ~500k rows. At 5
> concurrent users, the response time was 33ms, at 10 users it was 60ms,
> and at 20 users it was 120ms.
> 
> As the number of concurrent users increases, the response time for the
> count query also increases significantly, indicating that Postgres may
> not be scaling well to handle the increasing load.
> 
> This manifests in essentially a server meltdown on production. As the
> concurrent requests stack up, our server is stuck waiting for more and
> more queries. Eventually requests begin timing out as they start taking
> over 30 seconds to respond.
> 
> Am I doing something obviously wrong? Does this sound like normal behavior?

That sounds like quite normal and expected behavior.

A query that counts the rows of a table with half a million rows is
quite expensive and keeps a CPU core busy for a while (provided everything is
cached). Once there are more such concurrent queries than CPU cores, they
have to share the cores, so each query slows down roughly in proportion to
the load. Your own numbers show that: going from 10 to 20 users doubles the
response time from 60ms to 120ms.
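
To see why each execution is expensive, you can look at the plan. This is
just a sketch with a hypothetical table name, not your exact plan:

```sql
-- Hypothetical table standing in for the ~500k-row table in the test case.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM my_table;
-- The plan is typically a Seq Scan (or Parallel Seq Scan) feeding an
-- Aggregate node: count(*) must visit every live row on every execution,
-- so it costs real CPU time even when all pages are cached.
```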

The thing you are doing wrong is that you are putting too much load on this
system.

Yours,
Laurenz Albe
