Fix running out of file descriptors for spill files.

Currently, while decoding changes, if the number of changes exceeds a certain threshold, we spill them to disk, and this happens for each (sub)transaction. When reading these spill files back, none of them is closed until all of them have been read. If the number of such files exceeds the maximum number of open file descriptors, the operation errors out.
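To make the failure mode concrete, here is an illustrative sketch (the function and variable names are hypothetical, not the pre-patch reorderbuffer.c code) of reading spill files with raw kernel descriptors: each file gets its own descriptor, none is closed until the last file is read, so enough (sub)transactions push the backend past its per-process limit.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

static int
open_all_spill_files(char **paths, int npaths, int *fds)
{
	int			i;

	for (i = 0; i < npaths; i++)
	{
		/* one raw kernel FD per spill file, held until every file is read */
		fds[i] = open(paths[i], O_RDONLY);
		if (fds[i] < 0)
		{
			/* with many (sub)transactions this eventually fails with EMFILE */
			fprintf(stderr, "could not open \"%s\": %s\n",
					paths[i], strerror(errno));
			return -1;
		}
	}
	return 0;
}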
Use the PathNameOpenFile interface to open these files, as it internally has a mechanism to release kernel FDs as needed to keep us under the max_safe_fds limit.

Reported-by: Amit Khandekar
Author: Amit Khandekar
Reviewed-by: Amit Kapila
Backpatch-through: 9.4
Discussion: https://postgr.es/m/caj3gd9c-secen79zxw4ybnbdottacoe-6gayp0oy60nfs_s...@mail.gmail.com

Branch
------
REL9_4_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/1ad47e8757bb95058e70fd00d5e619833c83df40

Modified Files
--------------
src/backend/replication/logical/reorderbuffer.c | 40 +++++++++++++++----------
1 file changed, 25 insertions(+), 15 deletions(-)
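For context, here is a hedged sketch of the VFD-based pattern the commit message describes, assuming the 9.4-era fd.c signatures (PathNameOpenFile taking flags and a mode, FileRead without a wait-event argument); the helper names are hypothetical and this is not the committed patch in reorderbuffer.c.

#include "postgres.h"

#include <fcntl.h>

#include "storage/fd.h"

static File
spill_file_open(char *path)
{
	File		vfd;

	/* a virtual FD: fd.c may release the kernel FD behind it at any time */
	vfd = PathNameOpenFile(path, O_RDONLY | PG_BINARY, 0);
	if (vfd < 0)
		ereport(ERROR,
				(errcode_for_file_access(),
				 errmsg("could not open file \"%s\": %m", path)));
	return vfd;
}

static int
spill_file_read(File vfd, char *buf, int len)
{
	/*
	 * fd.c tracks the file position, so sequential reads keep working even
	 * if the kernel FD was closed and transparently reopened in between.
	 */
	return FileRead(vfd, buf, len);
}

/* when a spill file is fully consumed, release the VFD */
static void
spill_file_close(File vfd)
{
	FileClose(vfd);
}

The point of the switch is that the descriptor budget is managed centrally by fd.c, which closes least-recently-used kernel FDs and reopens them on the next access, so the reorder buffer can keep an arbitrary number of spill files logically open while staying under max_safe_fds.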
