Robert,

* Robert Haas (robertmh...@gmail.com) wrote:
> Meanwhile, we'll significantly help people who are currently
> generating painfully large but not totally insane numbers of WAL
> segments. Someone who is currently generating 32,768 WAL segments per
> day - about one every 2.6 seconds - will have a significantly easier
> time if they start generating 8,192 WAL segments per day - about one
> every 10.5 seconds - instead. It's just much easier for a reasonably
> simple archive command to keep up, "ls" doesn't have as many directory
> entries to sort, etc.
I'm generally on board with increasing the WAL segment size, and I can
see the point that we might want to make it more easily configurable,
since it's valuable to set it differently on a small database vs. a
large database. However, I take exception to the notion that a "simple
archive command" is ever appropriate. Heikki's excellent talk at PGCon
'15 (iirc) goes over why our archive_command example is about as
terrible as you can get, primarily because it's just a simple 'cp'.

archive_command needs to be doing things like fsync'ing the WAL file
after it's been copied away, probably fsync'ing the directory the WAL
file has been copied into, returning the correct exit code to PG, etc.
Thankfully, there are backup/WAL archive utilities which do this
correctly and are even built to handle a large rate of WAL files on
high-transaction systems (including keeping a long-running ssh/TCP
connection open to address the startup costs of both).

Switching to 64MB would still be nice simply to reduce the number of
files you have to deal with, and I'm all for it for that reason, but
the ssh/TCP startup costs aren't good reasons for the switch, as
people shouldn't be using a "simple" command anyway and the good tools
for WAL archiving have already addressed those issues.

Thanks!

Stephen
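P.S. To make the fsync requirements above concrete, here's a minimal
sketch of what a local archive script would need to do beyond a bare
'cp'. The function name (safe_archive) and ARCHIVE_DIR variable are
hypothetical, and it assumes GNU coreutils, where sync accepts file and
directory arguments; it's illustrative only, not a substitute for a
real archiving tool (which also handles remote copies, compression,
retries, etc.).

```shell
# safe_archive: hypothetical archive_command helper (sketch, not production).
# $1 = path to the WAL segment (%p), $2 = segment file name (%f).
# ARCHIVE_DIR must point at the archive destination directory.
safe_archive() {
    wal_path="$1"
    wal_name="$2"

    # Never overwrite an already-archived segment; a correct
    # archive_command must fail in that case rather than clobber it.
    [ -e "$ARCHIVE_DIR/$wal_name" ] && return 1

    # Copy under a temporary name so readers never see a partial file.
    cp "$wal_path" "$ARCHIVE_DIR/.$wal_name.tmp" || return 1

    # Flush the copy to stable storage (GNU coreutils sync takes file args).
    sync "$ARCHIVE_DIR/.$wal_name.tmp" || return 1

    # Atomically publish the segment, then flush the directory entry too,
    # so the rename itself survives a crash.
    mv "$ARCHIVE_DIR/.$wal_name.tmp" "$ARCHIVE_DIR/$wal_name" || return 1
    sync "$ARCHIVE_DIR" || return 1

    # Zero exit is what tells PG the segment is safely archived;
    # any nonzero exit makes PG keep the segment and retry.
    return 0
}
```

Something like archive_command = '/path/to/safe_archive.sh %p %f' would
invoke it, with ARCHIVE_DIR set in the script. Even this much is more
than the documented 'cp' example does, which is the point.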