Gaetano Mendola <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> It should work; dunno if anyone has tried it yet.
> I was thinking about it, but I soon realized that it is actually
> impossible to do; postgres replays the log only if the file
> recovery.conf is present in the $DATA directory at startup :-(

So you put one in ... what's the problem?

The way I'd envision this working is that

1. You set up WAL archiving on the master, and arrange to ship copies
of completed segment files to the slave.

2. You take an on-line backup (ie, tar dump) on the master, and
restore it on the slave.

3. You set up a recovery.conf file with the restore_command being some
kind of shell script that knows where to look for the shipped-over
segment files, and also has a provision for being signaled to stop
tracking the shipped-over segments and come alive.

4. You start the postmaster on the slave. It will try to recover.
Each time it asks the restore_command script for another segment
file, the script will sleep until that segment file is available,
then return it.

5. When the master dies, you signal the restore_command script that
it's time to come alive. It now returns "no such file" to the
patiently waiting postmaster, and within seconds you have a live
database on the slave.

Now, as sketched, you only get recovery up to the last WAL segment
boundary, which might not be good enough on a low-traffic system.
But you could combine this with some hack to ship over the latest
partial WAL segment periodically (once a minute maybe). The
restore_command script shouldn't use a partial segment --- until the
alarm comes, and then it should.

Somebody should hack this together and try it during beta. I don't
have time myself.

			regards, tom lane
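For the record, a minimal sketch of the restore_command script described in
steps 3-5 might look like the following. The archive directory, the trigger
file name, and the script name are all assumptions for illustration, not
anything agreed on:

```shell
#!/bin/sh
# Hypothetical wait_for_wal.sh: a restore_command that waits for shipped
# segments and stops returning them once failover is signaled.
# The slave's recovery.conf would invoke it as:
#   restore_command = 'wait_for_wal.sh %f %p'
# where %f is the requested segment file name and %p is the path the
# server wants it restored to.

wait_for_wal() {
    segment=$1                                       # WAL segment name (%f)
    dest=$2                                          # restore target path (%p)
    archive=${WAL_ARCHIVE:-/var/lib/pgsql/archive}   # where the master ships segments (assumption)
    trigger=${FAILOVER_TRIGGER:-/tmp/pg_failover}    # touch this to signal "come alive" (assumption)

    while :; do
        # Failover signaled: report "no such file" (nonzero exit) so
        # recovery ends and the slave becomes a live database.
        if [ -f "$trigger" ]; then
            return 1
        fi
        # Segment has arrived: hand it to the waiting postmaster.
        if [ -f "$archive/$segment" ]; then
            cp "$archive/$segment" "$dest"
            return 0
        fi
        sleep 1                                      # not shipped yet; wait and retry
    done
}

# As a real restore_command, the script would end with: wait_for_wal "$@"
```

On the master side, step 1 could be as simple as pointing archive_command at
the same directory, e.g. `archive_command = 'cp %p /var/lib/pgsql/archive/%f'`
followed by whatever rsync/scp job ships the files to the slave.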