Hello Andres,

> I can think of a number of relatively easy ways to address this:
> 1) Just zap (or issue?) all pending flush requests when getting an
> 2) Do 1), but filter for the closed relnode
> 3) Actually handle the case of the last open segment not being
>    RELSEG_SIZE properly in _mdfd_getseg() - mdnblocks() does so.
>
> I'm kind of inclined to do both 3) and 1).

Alas, this is a little outside my area of expertise, so the following are only suggestions and may be entirely off the mark (please accept my apologies if so). These are the ideas I had while thinking about these issues; some may be quite close to your suggestions above:

 - keep track of file/segment closings/reopenings (say with a counter), so
   that if a flush request refers to a file/segment which has since been
   closed or reopened, it is simply skipped. I'm not sure this is enough,
   because one process may do the file cleaning while another wants to
   flush, although I guess there are locking mechanisms or shared data
   structures to manage such interactions between pg processes.

 - because of the "empty the bucket when filled" behavior of the current
   implementation, a buffer to be flushed may be kept in the bucket for a
   very long time. I think that flush advice becomes stale after a while
   and should not be issued (the buffer may have been written again, ...),
   or the bucket should be flushed after a while even if not full.

Also, a detail in "pg_flush_data": there is a series of three #if/#endif blocks, and it is not obvious from reading the source whether they are mutually exclusive, so I was wondering whether the code could end up trying several flushing approaches one after the other... This is probably not the case, but I would be more at ease with an #if/#elif/#elif/#endif structure there, which would make the exclusion explicit.


Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)