Hi Jeremy,

Yes, we already have statement_timeout set; it just does not trigger
under extreme load because the problem is usually a pile-up of
papercuts rather than one big nasty query: 26 seconds in, you issue
yet another query and... boom. I guess we could prepend a custom exec
that times out early and re-sets the timeout per statement.
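
A minimal Ruby sketch of that per-statement re-set idea, assuming a fixed
per-request deadline; `exec_with_deadline` and `remaining_timeout_ms` are
hypothetical helper names, not pg gem API:

```ruby
# Milliseconds left before the request deadline (0 if already past it).
def remaining_timeout_ms(deadline, now = Time.now)
  ms = ((deadline - now) * 1000).to_i
  ms.positive? ? ms : 0
end

# Hypothetical wrapper: re-set statement_timeout before every query so
# the per-statement budget shrinks as the request deadline approaches.
def exec_with_deadline(conn, deadline, sql)
  ms = remaining_timeout_ms(deadline)
  raise "request deadline exceeded" if ms.zero?
  conn.exec("SET statement_timeout = #{ms}")
  conn.exec(sql)
end
```

Each query then gets only the time remaining in the request, instead of a
full fixed statement_timeout of its own.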

Thanks for the suggestion of the info file, I will consider adding
something like it.

Sam

On Mon, Jan 15, 2018 at 1:40 PM, Jeremy Evans <[email protected]> wrote:
> On Sun, Jan 14, 2018 at 6:18 PM, Sam Saffron <[email protected]> wrote:
>>
>> It is super likely this could be handled in the app if we had:
>>
>> db_connection.timeout_at Time.now + 29
>>
>> Then the connection could trigger the timeout and kill off the request
>> without needing to tear down the entire process and re-forking.
>>
>> Making this happen is a bit tricky cause it would require some hacking
>> on the pg gem.
>
>
> Sam,
>
> For this particular case, you can SET statement_timeout = 29000 on the
> PostgreSQL connection, but note that is a per-statement timeout, not a
> per-web-request timeout.
>
> What I end up doing for important production apps is have the worker log
> request information for each request to an open file descriptor
> (seek(0)/write/truncate), and have the master process use the contents of
> the worker-specific file to report errors in case of worker crashes.
>
> Thanks,
> Jeremy
>
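
The worker-info-file pattern Jeremy describes can be sketched like this;
`record_request` and the use of a Tempfile are assumptions for illustration,
not his actual implementation:

```ruby
require "tempfile"

# One descriptor stays open for the worker's lifetime; each request
# overwrites it in place so a supervising process can read what the
# worker was doing if it crashes.
info = Tempfile.new("worker-info")

def record_request(io, text)
  io.seek(0)           # rewind to the start of the file
  io.write(text)       # overwrite with the current request's info
  io.truncate(text.bytesize)  # drop any leftover bytes from a longer prior entry
  io.flush
end

record_request(info, "GET /slow started at #{Time.now}")
```

The truncate is the important part: without it, a short entry written over a
longer one would leave stale trailing bytes in the file.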
