On 3 January 2017 at 13:45, Amit Kapila <amit.kapil...@gmail.com> wrote:
> On Tue, Jan 3, 2017 at 6:41 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> On 2 January 2017 at 21:23, Jim Nasby <jim.na...@bluetreble.com> wrote:
>>
>>> It's not clear from the thread that there is consensus that this feature is
>>> desired. In particular, the performance aspects of changing the segment size
>>> from a C constant to a variable are in question. Someone with access to
>>> large hardware should test that. Andres and Robert did suggest that
>>> the option could be changed to a bitshift, which IMHO would also solve some
>>> sanity-checking issues.
>>
>> Overall, Robert has made a good case. The only discussion now is about
>> the knock-on effects it causes.
>>
>> One concern that has only barely been discussed is the effect of
>> zeroing new WAL files. That is a linear effect and will adversely
>> affect performance as WAL segment size increases.
>>
>
> Sorry, but I am not able to understand why this is a problem. The
> bigger the WAL segment, the fewer the files. So IIUC, it can only
> matter if zeroing two 16MB files is cheaper than zeroing one 32MB
> file. Is that your theory, or do you have something else in mind?
The issue I see is that at present no backend needs to do more than
16MB of zeroing at one time, so the impact on response time is
bounded. If we start zeroing in larger chunks, then the impact on
response times will increase: instead of regular small blips we get
one large blip, less often. I think the latter will be worse, but I
welcome measurements showing that performance stays smooth and regular
with larger segment sizes.

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (firstname.lastname@example.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers