On 2016-04-13 18:09:18 -0400, Tom Lane wrote:
> Andres Freund <and...@anarazel.de> writes:
> > On 2016-04-13 17:44:41 -0400, Tom Lane wrote:
> >> fd.c tracks seek position for open files. I'm not sure that that
> >> function can get called with amount == 0, but if it did, the caller
> >> would certainly not be expecting the file position to change.
>
> > Ok, fair enough. (And no, it should currently be never called that way)
>
> BTW, I just noticed another issue here, which is that FileWriteback
> and the corresponding smgr routines are declared with bogusly narrow
> "amount" arguments --- eg, it's silly that FileWriteback only takes
> an int amount.
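(For context, the fd.c prototypes in question look roughly like the
following; I'm going from memory, so take this as a sketch rather than an
exact quote of the tree:)

#include <sys/types.h>          /* off_t */

typedef int File;               /* fd.c's virtual-fd handle (an int) */

/* sketch of the relevant prototypes, not verbatim */
extern int  FileRead(File file, char *buffer, int amount);
extern int  FileWrite(File file, char *buffer, int amount);
extern void FileWriteback(File file, off_t offset, int amount);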
Well, I modeled it after the nearby routines (like FileRead), which all
take the amount as an int. There might be less reason to read a lot of
data at once than to flush large amounts, but it still didn't seem
necessary to break with the rest of the functions in the file.

> I think this code could be actively broken for relation segment sizes
> exceeding 2GB, and even if it isn't, we should define the functions
> correctly the first time.

I don't think it's actively problematic: ->max_pending (and thus
nr_pending) is limited by
    #define WRITEBACK_MAX_PENDING_FLUSHES 256
(although I've wondered whether we should increase that limit a bit
further). Even with a block size of 32768, that's pretty far away from
exceeding INT_MAX.

> Will fix the function definitions, but I'm kind of wondering exactly how
> many times the inner loop in IssuePendingWritebacks() could possibly
> iterate ...

At most WRITEBACK_MAX_PENDING_FLUSHES (i.e. 256) times, due to the
limitation mentioned above.

Greetings,

Andres Freund
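PS: to make the INT_MAX argument concrete, here's the worst case spelled
out (constants as I remember them from the tree; treat this as an
illustration, not a verbatim quote):

#include <limits.h>
#include <stdio.h>

/* Worst case: every pending writeback is for a distinct, consecutive
 * block of the same fork, so the inner loop coalesces all of them into
 * a single request. */
#define WRITEBACK_MAX_PENDING_FLUSHES 256
#define BLCKSZ 32768                    /* largest supported block size */

int
main(void)
{
    long        max_request = (long) WRITEBACK_MAX_PENDING_FLUSHES * BLCKSZ;

    printf("largest coalesced request: %ld bytes (8 MB)\n", max_request);
    printf("INT_MAX:                   %d\n", INT_MAX);
    return 0;
}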