On Sat, Jan 9, 2016 at 6:26 PM, Andres Freund <and...@anarazel.de> wrote:
> On 2016-01-09 18:24:01 +0530, Amit Kapila wrote:
> > Okay, but I think that is the reason why you are worried that it is
> > not safe to issue sync_file_range() on a closed file, is that right
> > or am I missing something?
> That's one potential issue. You can also fsync a different file, try to
> print an error message containing an unallocated filename (that's how I
> noticed the issue in the first place)...
> I don't think it's going to be acceptable to issue operations on more or
> less random fds, even if that operation is hopefully harmless.

Right, that won't be acceptable.  However, I think with your latest
proposal [1], we might not need to solve this problem, or do we still
need to address it?  I think that idea will also help mitigate the
problem of backend and bgwriter writes.  For that, can't we use the
existing infrastructure of *pendingOpsTable* and
*CheckpointerShmem->requests[]*?  As the flush requests are already
remembered in those structures, we can use them to apply your idea of
issuing flush requests.

"It seems better to track a fixed
number of outstanding 'block flushes', independent of the file. Whenever
the number of outstanding blocks is exceeded, sort that list, and flush
all outstanding flush requests after merging neighbouring flushes."

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
