"Tom Lane" <[EMAIL PROTECTED]> writes:

> Gregory Stark <[EMAIL PROTECTED]> writes:
>> "Andrew Dunstan" <[EMAIL PROTECTED]> writes:
>>> Interesting. Maybe forever is going a bit too far, but retrying for <n>
>>> seconds or so.
>
>> I think looping forever is the right thing. Having a fixed timeout just means
>> Postgres will break sometimes instead of all the time. And it introduces
>> non-deterministic behaviour too.
>
> Looping forever would be considered broken by a very large fraction of
> the community.

Really? I understood we were talking about having Postgres fail with an error if
any of its files are opened by another program such as backup software. With a
30s limit, Postgres might or might not fail depending on how long that other
software keeps the file open. That doesn't seem like an improvement.

>
> IIRC we have a 30-second timeout in rename() for Windows, and that seems
> to be working well enough, so I'd be inclined to copy the behavior for
> this case.

I thought it was unlink, and the worst case there is that we leak a file until
some later time. I wasn't following that case very closely, though.
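
For what it's worth, here's a minimal sketch of the kind of bounded retry loop
being discussed, i.e. retry for a while and then give up rather than looping
forever. The loop bounds, sleep interval, and errno values checked are
illustrative assumptions on my part, not the actual src/port/dirmod.c code:

    #include <errno.h>
    #include <unistd.h>

    /*
     * Illustrative sketch only: retry an unlink() that fails because some
     * other process (e.g. backup or antivirus software) still has the file
     * open, giving up after a fixed timeout.  The errno values and the
     * 30-second bound are assumptions for illustration.
     */
    static int
    unlink_with_retry(const char *path)
    {
        int     loops = 0;

        while (unlink(path) < 0)
        {
            if (errno != EACCES && errno != EBUSY)
                return -1;          /* unrelated error: fail immediately */

            if (++loops > 300)      /* 300 * 100ms = 30 seconds, then give up */
                return -1;

            usleep(100000);         /* wait 100ms and try again */
        }
        return 0;
    }

With a loop like that, whether the caller ever sees a failure depends entirely
on how long the other program holds the file open, which is the
non-determinism I was objecting to above.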

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!
