You've either not read 23.4.4 or haven't understood it. If the text is
unclear, documentation additions/changes are always welcome.
I have read this:
PostgreSQL directly supports file-based log shipping as described above.
It is also possible to implement record-based log shipping, though this
requires custom development.
But that is not what I am looking for!
File-based log-shipping backups have a big problem when used for continuous
backup: you have to wait until the postmaster has finished writing a WAL
file before you can save it to the backup server. But a WAL file is 16 MB
by default! (That is a big data hole in your
Which is why that entire section is about copying just the changed parts
of WAL files.
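A minimal sketch of that record-based idea, assuming you poll the server for its current write position and copy only the newly written bytes of the active segment (the function name and the polling mechanism here are my own illustration, not a PostgreSQL API):

```python
def ship_tail(src_path, dst_path, last_offset, write_offset):
    """Append bytes [last_offset, write_offset) of the current WAL
    segment to the standby's copy and return the new offset.

    write_offset must come from asking the server how far it has
    written -- WAL segments are preallocated to their full 16 MB,
    so the file size alone tells you nothing."""
    if write_offset <= last_offset:
        return last_offset          # nothing new since the last poll
    with open(src_path, "rb") as src, open(dst_path, "ab") as dst:
        src.seek(last_offset)
        dst.write(src.read(write_offset - last_offset))
    return write_offset
```

Run in a loop every second or so, the standby then lags by at most one polling interval rather than one whole 16 MB segment.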
It makes no sense to reduce the file size. Even if WAL files were smaller
than 16 MB, you would still have the same problem: there are data losses,
and that is the core of the problem.
But in your original email you said:
> All users of huge databases (mission-critical and always online) must be
> able to bring up their databases with the same information as 1-5 seconds
> before the crash occurred!
That suggested to me that you didn't want per-transaction backup, just
one backup every second. OK, what you actually want is a continuous
backup with one copy made per transaction.
I want a solution that backs up my database without data loss, without
locking tables, without a shutdown, and without forcing any user to log
out (an online backup in production, with no data loss).
So, if I understand, you want one of:
1. External RAID array. If main machine dies, turn backup machine on.
Both share the same disks.
2. Something like DRBD to copy individual disk blocks between machines.
You could do this just for WAL.
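Option 2 might look something like the following DRBD resource definition, mirroring the partition that holds pg_xlog to the standby machine (hostnames, devices, and addresses are placeholders, a sketch rather than a tested configuration):

```
resource pg_wal {
  protocol C;              # synchronous: a write completes only after
                           # the peer also has it on disk
  on db1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;   # partition holding pg_xlog
    address   192.168.0.1:7788;
    meta-disk internal;
  }
  on db2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.0.2:7788;
    meta-disk internal;
  }
}
```

With protocol C every WAL write reaches the standby's disk before the transaction commit returns, which is what gives you the zero-data-loss property.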