On Monday, March 26, 2018 at 7:26:03 PM UTC-7, Juan M. Cuello wrote:
>
> I'm using Postgres with around 4000 schemas. Each request is handled by a 
> forked Unicorn process. Each process uses a connection pool of just one 
> connection (max_connections = 1). I use a pool of one connection because 
> I need to be sure the same connection will be used throughout the request 
> (the first executed statement sets search_path to the schema that will be 
> used for subsequent statements).
>
> At certain moments, new schemas are created in the database. When this 
> happens, Unicorn workers seem to hang and are then killed by the master 
> Unicorn process for taking too long to handle the request (>30 seconds, 
> the configured timeout). At the same time, I see Postgres processes 
> consuming a lot of DB server CPU. As the Unicorn workers are killed, 
> their DB connections are closed. New Unicorn workers are then forked, new 
> DB connections are established, DB CPU returns to normal levels, and 
> requests are processed as usual.
>
> I wonder whether, due to the large number of schemas, the Sequel 
> connection could be having trouble when new schemas are added to the 
> database, leaving the application unable to keep using its only 
> available DB connection and causing the request to hang until the worker 
> is eventually killed.
>

It's hard to say what the problem actually is without a reproducible 
example.  I'm not sure why adding a schema would cause excessive query 
time, but the issue may be more related to PostgreSQL than Sequel.
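
One thing that might narrow it down (a diagnostic sketch, not something from this thread): while the workers appear hung, check pg_stat_activity from a separate session to see whether the stuck backends are waiting on a lock or actively running a query.

```sql
-- Run in a separate session during the spike.
-- wait_event / wait_event_type require PostgreSQL 9.6 or later.
SELECT pid, state, wait_event_type, wait_event,
       now() - query_start AS running_for,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY query_start;
```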

Thanks,
Jeremy 

-- 
You received this message because you are subscribed to the Google Groups 
"sequel-talk" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/sequel-talk.
For more options, visit https://groups.google.com/d/optout.
