Andres Freund <and...@2ndquadrant.com> writes:
> On 2014-01-21 18:24:39 -0500, Tom Lane wrote:
>> Maybe we could get some mileage out of the fact that very approximate
>> techniques would be good enough.  For instance, I doubt anyone would bleat
>> if the system insisted on having 10MB or even 100MB of future WAL space
>> always available.  But I'm not sure exactly how to make use of that
>> flexibility.

> If we'd be more aggressive with preallocating WAL files and doing so in
> the WAL writer, we could stop accepting writes in some common codepaths
> (e.g. nodeModifyTable.c) as soon as preallocating failed but continue to
> accept writes in other locations (e.g. TRUNCATE, DROP TABLE). That'd
> still fail if you write a *very* large commit record using up all the
> reserve though...

> I personally think this isn't worth complicating the code for.

I too have got doubts about whether a completely bulletproof solution
is practical.  (And as you say, even if our internal logic were
bulletproof, a COW filesystem defeats all guarantees in this area
anyway.)  But perhaps a 99% solution would be a useful compromise.
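
To make that concrete, here is a very rough sketch of what I'm
imagining; PreallocateWalSegments() and CheckWalReserve() are made-up
names, and the flag would really have to live in shared memory rather
than a static, but it shows the shape of the thing:

#include "postgres.h"

#define WAL_RESERVE_SEGMENTS	8	/* ~128MB of reserve with 16MB segments */

/*
 * Sketch only: the WAL writer keeps a few segments preallocated and
 * publishes whether that succeeded.  (In reality this flag would have
 * to sit in shared memory so backends can see it.)
 */
static volatile bool wal_reserve_ok = true;

/* called from the WAL writer's main loop */
static void
WalWriterMaintainReserve(void)
{
	wal_reserve_ok = PreallocateWalSegments(WAL_RESERVE_SEGMENTS);
}

/* called from the common write paths, e.g. nodeModifyTable.c */
void
CheckWalReserve(void)
{
	if (!wal_reserve_ok)
		ereport(ERROR,
				(errcode(ERRCODE_DISK_FULL),
				 errmsg("insufficient preallocated WAL space"),
				 errhint("Free disk space, for example with TRUNCATE or DROP TABLE.")));
}

TRUNCATE, DROP TABLE and friends simply never call the check, so they
can still get through and free space; and as you say, a very large
commit record could still eat the whole reserve.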

Another thing to think about is whether we couldn't put a hard limit on
WAL record size somehow.  Multi-megabyte WAL records are an abuse of the
design anyway, when you get right down to it.  So for example maybe we
could split up commit records, with most of the bulky information dumped
into separate records that appear before the "real commit".  This would
complicate replay --- in particular, if we abort the transaction after
writing a few such records, how does the replayer realize that it can
forget about those records?  But that problem is probably surmountable.
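
Just to illustrate the shape of it (these record layouts are invented,
not a proposal of actual structs): the bulky payloads would go into
"part" records tagged with the xid, and the final commit record stays
small.  On replay, part records get stashed per-xid and thrown away
when an abort for that xid shows up, or at end of recovery if no
matching commit ever arrived.

#include "postgres.h"
#include "datatype/timestamp.h"		/* TimestampTz */

/* hypothetical: one of possibly many bulky records written before commit */
typedef struct xl_xact_commit_part
{
	TransactionId	xid;		/* transaction the payload belongs to
								 * (the record header's xl_xid could
								 * serve instead) */
	uint16			npayloads;	/* dropped rels, invalidation msgs, ... */
	/* variable-length payload data follows */
} xl_xact_commit_part;

/* hypothetical: the small "real commit" record written last */
typedef struct xl_xact_commit_final
{
	TimestampTz		xact_time;	/* commit timestamp */
	uint32			nparts;		/* part records the replayer should have seen */
	/* deliberately small; the bulk lives in the part records */
} xl_xact_commit_final;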

                        regards, tom lane

