Dear all,
One of my production Postgres databases is growing faster than
anticipated, so I am exploring the use of FIFO files instead of writing
the output of pg_dump to intermediate files under /tmp. All jobs
currently work fine with the intermediate files, but when I test with
FIFO files, no data is pulled from the pipes: all the files are properly
in place, yet ZERO bytes get backed up, and the pg_dump tasks are still
running after the backup job has completed "successfully". It looks as
though the FIFO files were never touched by bacula-fd after the script
ended, i.e. the "readfifo=yes" option has not been honored.
What am I missing? Is anyone who is successfully using FIFO files with
Postgres databases able to share some tips?
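
My understanding of the mechanism is that bacula-fd is supposed to play
the part of the reader, the same way cat does in a manual test like the
following (a minimal sketch; /tmp/test.fifo and testdb are placeholder
names, not my real setup):

mkfifo /tmp/test.fifo
# The writer blocks on open() until some process opens the read end...
pg_dump -Fc -Z0 -f /tmp/test.fifo testdb &
# ...and the reader drains it; this is the role readfifo=yes gives bacula-fd.
cat /tmp/test.fifo > /tmp/test.dump
wait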
Many thanks in advance.
Cheers,
Ismael
Here is the script used on the Bacula clients:
#!/bin/sh
# Database backups to temporary storage (FIFOs for Bacula to read).
export DUMPDIR=/tmp/bacula_pgdatadump

# Discard stdout; stderr from the dumps goes to per-database logs below.
exec > /dev/null

# Recreate the work area from scratch.
rm -rf "$DUMPDIR"
mkdir -p "$DUMPDIR/backups" "$DUMPDIR/logs"

# Global objects (roles, tablespaces) go to a regular file.
pg_dumpall -g -v -f "$DUMPDIR/backups/dumpallGlobal.bkp" \
    2> "$DUMPDIR/logs/dumpallGlobal.log"

# One FIFO per database; each pg_dump blocks on its pipe until a
# reader (bacula-fd, with readfifo=yes) opens the other end.
for dbname in $(psql -d template1 -q -t -A -c \
    "select datname from pg_database where datname not in ('template0') order by datname;")
do
    mkfifo "$DUMPDIR/backups/$dbname.bkp"
    nohup pg_dump -Fc -Z0 -v -f "$DUMPDIR/backups/$dbname.bkp" "$dbname" \
        2> "$DUMPDIR/logs/$dbname.log" &
done

# Exit immediately; the backgrounded pg_dump writers stay on the FIFOs.
exit
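
For completeness, the script is kicked off on the client before the job
runs; the wiring in the Job resource looks roughly like the sketch below
(the job, client, and path names here are illustrative, not my exact
config):

Job {
  Name = "FullBackup"        # illustrative name
  Type = Backup
  Client = pg-client-fd      # illustrative name
  FileSet = FullSet
  # Launch the dump script on the client before bacula-fd starts reading.
  ClientRunBeforeJob = "/usr/local/sbin/pg_fifo_dump.sh"
  ...
}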
Here is the relevant section of the bacula-dir.conf file on the server:
...
FileSet {
  Name = FullSet
  Include {
    File = /etc
    File = /home
    File = /opt
    File = /root
    File = /tmp/bacula_pgdatadump
    File = /usr/local/sbin
    File = /usr/sbin
    File = /usr/share
    File = /var
    ExcludeDirContaining = .baculaexclude
    Options {
      Compression = GZIP9
      Signature = MD5
      ReadFifo = yes
    }
  }
  Exclude {
    File = /var/cache
    ...
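
In case it helps with diagnosing, one way to check whether the FD even
considers the FIFOs is bconsole's estimate listing (the job name here is
a placeholder):

# Run against the director; "FullBackup" is a placeholder job name.
echo "estimate job=FullBackup listing" | bconsole | grep bacula_pgdatadump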