On 2016-01-07 13:07:33 +0100, Fabien COELHO wrote:
> 
> >>Yes, I'm planning to try to do the minimum possible damage to the current
> >>API to fix the issue.
> >
> >What's your thought there? Afaics it's infeasible to do the flushing at
> >the fd.c level.
> 
> I thought of adding a pointer to the current flush structure at the vfd
> level, so that when a file with a flush in progress is closed, the flush
> can be completed and the structure properly cleaned up. Later the
> checkpointer would then see a clean entry and be able to skip it, instead
> of generating flushes on a closed file or on a different file...
> 
> Maybe I'm missing something, but that is the plan I had in mind.
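
A minimal sketch of the plan described above, assuming hypothetical names (FlushContext, Vfd, complete_pending_flush are illustrative, not actual PostgreSQL fd.c structures): the vfd carries a pointer to any in-progress flush, and closing the file completes and detaches it, so the checkpointer never operates on a closed file.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical flush-tracking state; names are illustrative,
 * not actual PostgreSQL fd.c structures. */
typedef struct FlushContext
{
    int fd;         /* file the pending flush targets */
    int npending;   /* dirty blocks queued for flushing */
} FlushContext;

typedef struct Vfd
{
    int           fd;
    FlushContext *flush;    /* NULL when no flush is in progress */
} Vfd;

/* Complete the pending flush (stubbed here) and detach it from the
 * vfd, so the checkpointer later sees a clean entry and skips it. */
static void
complete_pending_flush(Vfd *vfd)
{
    if (vfd->flush != NULL)
    {
        /* real code would call sync_file_range()/fsync() here */
        vfd->flush->npending = 0;
        vfd->flush = NULL;
    }
}

static void
close_vfd(Vfd *vfd)
{
    complete_pending_flush(vfd);    /* never flush a closed fd */
    vfd->fd = -1;
}
```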

That might work, although it'd not be pretty (not fatally so
though). But I'm inclined to go a different way: I think it's a mistake
to do flushing based on a single file. It seems better to track a fixed
number of outstanding 'block flushes', independent of the file. Whenever
the number of outstanding blocks is exceeded, sort that list, merge
neighbouring flushes, and issue all outstanding flush requests. Imo
that means we'd better track writes at the relfilenode + block number
level.
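
The scheme above might be sketched as follows, with hypothetical names (PendingFlush, record_write, flush_all_pending, and the MAX_PENDING limit are assumptions for illustration, not actual PostgreSQL code): writes are recorded per (relfilenode, block); once the fixed limit is hit, the list is sorted, runs of adjacent blocks in the same file are merged, and one flush is issued per merged range.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical pending-flush tracking; names do not correspond
 * to actual PostgreSQL code. */
#define MAX_PENDING 4

typedef struct PendingFlush
{
    unsigned relfilenode;   /* which relation file */
    unsigned blockno;       /* which block within it */
} PendingFlush;

static PendingFlush pending[MAX_PENDING];
static int npending = 0;

/* Order by file, then by block, so mergeable blocks are adjacent. */
static int
cmp_pending(const void *a, const void *b)
{
    const PendingFlush *pa = a, *pb = b;

    if (pa->relfilenode != pb->relfilenode)
        return pa->relfilenode < pb->relfilenode ? -1 : 1;
    if (pa->blockno != pb->blockno)
        return pa->blockno < pb->blockno ? -1 : 1;
    return 0;
}

/* Sort the outstanding requests, merge runs of consecutive blocks in
 * the same file into single ranges, and issue one flush per range.
 * Returns the number of merged ranges issued. */
static int
flush_all_pending(void)
{
    int ranges = 0;

    qsort(pending, npending, sizeof(PendingFlush), cmp_pending);
    for (int i = 0; i < npending; )
    {
        int j = i + 1;

        while (j < npending &&
               pending[j].relfilenode == pending[i].relfilenode &&
               pending[j].blockno == pending[j - 1].blockno + 1)
            j++;
        /* real code would sync_file_range() blocks i..j-1 here */
        ranges++;
        i = j;
    }
    npending = 0;
    return ranges;
}

/* Record a write; once the fixed limit is reached, flush everything. */
static void
record_write(unsigned relfilenode, unsigned blockno)
{
    pending[npending].relfilenode = relfilenode;
    pending[npending].blockno = blockno;
    if (++npending == MAX_PENDING)
        (void) flush_all_pending();
}
```

With this shape, the flush state is independent of any one open file, which is what avoids the closed-file problem in the first place.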

Andres


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers