Hello, I'm new to Linux and looking for a general-purpose backup tool for a Linux
server.
My first question is: can BackupPC make a backup of the system it is running on
itself? Or do I always need a second backup host to make a backup of my
Linux server?
And my second question is: could I
Hello Matthias,
I have exactly the same problem...
I found my mistake:
The parameters for backup should be:
--numeric-ids --perms --owner --group -D --links --hard-links --times
--block-size=2048 --recursive --one-file-system
and for restore they should be:
--super
--numeric-ids --perms --owner
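In BackupPC terms, flag lists like these normally go into the host config. A minimal sketch, assuming the standard $Conf{RsyncArgs} / $Conf{RsyncRestoreArgs} settings; the backup flags are the ones above, while the restore list is cut off, so only the visible flags appear:

    $Conf{RsyncArgs} = [
        '--numeric-ids', '--perms', '--owner', '--group', '-D',
        '--links', '--hard-links', '--times', '--block-size=2048',
        '--recursive', '--one-file-system',
    ];
    $Conf{RsyncRestoreArgs} = [
        '--super', '--numeric-ids', '--perms', '--owner',
        # ... (the rest of the restore flags are missing above)
    ];

Note that --super matters on the receiving side: it tells rsync to attempt super-user activities (ownership, devices) even when it is not running as root.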
Glassfox wrote:
My first question is: can BackupPC make a backup of the system it is
running on itself? Or do I always need a second backup host
to make a backup of my Linux server?
A BackupPC server can back up itself just fine. Just make sure you
exclude the backup pool, or
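A minimal sketch of such an exclude, assuming the pool lives under /var/lib/backuppc (the Debian default; adjust to your actual TopDir):

    $Conf{BackupFilesExclude} = ['/proc', '/sys', '/var/lib/backuppc'];

With rsync, --one-file-system achieves the same effect if the pool sits on its own filesystem.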
Web applications are just static files, right? You can back them up
safely. Consult the docs for your database software for backups. We
let BackupPC run a script to create SQL dumps of all our databases
before starting the backup. Backing up the live database files usually
is not safe.
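A minimal sketch of such a pre-backup hook, assuming the standard $Conf{DumpPreUserCmd} setting and a hypothetical dump-databases script on the client:

    $Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host /usr/local/bin/dump-databases';

where dump-databases might, assuming MySQL, simply run:

    mysqldump --all-databases --single-transaction > /var/backups/sql/all.sql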
Hello,
I have understood/resolved the origin of the problem!
-- under a bash shell in Cygwin: 4294967295:513 = beuken:None
The Cygwin environment had been installed *before* the user beuken
was created in the Windows environment!
So beuken wasn't in /etc/passwd, and then rsyncd
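If that is the diagnosis, the usual Cygwin fix is to regenerate the passwd/group files so the Windows account gets an entry (mkpasswd and mkgroup are Cygwin's standard tools; -l lists local accounts):

    mkpasswd -l > /etc/passwd
    mkgroup -l > /etc/group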
Hello,
I have tried to use "pre-xfer exec" with rsyncd / Cygwin.
I have created a special module to restore into a single folder
(d:/BackupPC).
$ cat /etc/rsyncd.conf
[RESTORE]
path = /cygdrive/d/BackupPC
auth users = rbackuppc
secrets file =
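For reference, a complete module of this shape might look like the sketch below; the secrets path and the writable flag are illustrative, not taken from the original post:

    [RESTORE]
        path = /cygdrive/d/BackupPC
        auth users = rbackuppc
        secrets file = /etc/rsyncd.secrets
        # restores push files into the module, so it cannot be read-only
        read only = false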
I've Googled for this and scanned the BackupPC docs, and I don't see a
solution.
The issue is that my desktop is spending hours creating huge backups on the
server. The current incremental backup started at 7:00 this morning, is
still running now at 12:15, and has reached about 21G.
Hi,
To do this I recommend the following:
Make a copy of rsync in your BackupPC bin directory:
cp rsync /opt/Backuppc/bin
chown root:backuppc /opt/Backuppc/bin/rsync
chmod 750 /opt/Backuppc/bin/rsync    # only root and the backuppc group may run it
chmod u+s /opt/Backuppc/bin/rsync    # setuid root, so backups can read every file
Then in the host's BackupPC config change the line that
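Presumably the line in question is the rsync client path. A sketch, assuming the standard $Conf{RsyncClientPath} setting and the location used above:

    $Conf{RsyncClientPath} = '/opt/Backuppc/bin/rsync';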
Tino Schwarze wrote:
Web applications are just static files, right? You can back them up
safely. Consult the docs for your database software for backups. We
let BackupPC run a script to create SQL dumps of all our databases
before starting the backup. Backing up the live database
I have the following config line:
$Conf{BackupFilesExclude} = ['/proc', '/mnt', '/sys', '/home/users', ... , '+ /vz/dump', '/vz/*'];
But there is no /vz/dump in the backups. What am I doing wrong?
Thanks in advance,
James
Glassfox wrote:
I hope I understood this correctly: making database dumps helps (I
will not get an inconsistent state), but using BackupPC for them will
bloat the backup pool?
That depends on your definition of bloat, and probably also on the
size of your databases. We back up
You are listing /vz/dump in the BackupFilesExclude array. Just leave it
out of BackupFilesExclude completely and it will be included.
Trey Nolen
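A minimal sketch of the corrected setting. Note that if the '/vz/*' pattern stays, it will still match /vz/dump under rsync's exclude rules, so it has to go as well if that directory should be backed up:

    $Conf{BackupFilesExclude} = ['/proc', '/mnt', '/sys', '/home/users'];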
Jean-Michel Beuken wrote:
Hello,
I have tried to use "pre-xfer exec" with rsyncd / Cygwin.
I have created a special module to restore into a single folder
(d:/BackupPC).
$ cat /etc/rsyncd.conf
[RESTORE]
path = /cygdrive/d/BackupPC
auth users =
Hi,
Mark Adams wrote on 2008-12-10 12:18:04 -0700 [[BackupPC-users] Incremental
backups aren't.]:
The issue is that my desktop is spending hours creating huge backups on the
server. The current incremental backup started at 7:00 this morning,
is still running now at 12:15, and has reached
Hi,
James Ward wrote on 2008-12-10 14:27:24 -0700 [[BackupPC-users] Exclude
complexity in version 2.1.2pl1]:
I have the following config line:
$Conf{BackupFilesExclude} = ['/proc', '/mnt', '/sys', '/home/users', ... , '+ /vz/dump', '/vz/*'];
But there is no /vz/dump in the backups.
Hi,
I have one client with a 292GB /data2 disk and a 403GB /home disk. Is
there any way to schedule this client so the fulls are done separately,
on two different nights: /data2 one night and /home the next?
Mark
Mark Maciolek wrote:
Hi,
I have one client with a 292GB /data2 disk and a 403GB /home disk. Is
there any way to schedule this client so the fulls are done separately,
on two different nights: /data2 one night and /home the
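One common approach is to give the machine two BackupPC host entries that both resolve to it via $Conf{ClientNameAlias}, each backing up one share; the scheduler then treats them as independent hosts. A sketch with hypothetical host names:

    # per-host config for hypothetical host "client-data2":
    $Conf{ClientNameAlias} = 'realclient';
    $Conf{RsyncShareName}  = ['/data2'];

    # per-host config for hypothetical host "client-home":
    $Conf{ClientNameAlias} = 'realclient';
    $Conf{RsyncShareName}  = ['/home'];

Different blackout periods per host can then keep the two fulls on different nights.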
The problem with backing up a database incrementally is that, as the
database grows, the backups bloat: a changed database file gets stored in
full each time, so it gains nothing from the incremental backup.
With a database, it is usually better to replicate the database to another
host and then do periodic dumps.
web pages are very
dan wrote:
The problem with backing up a database incrementally is that, as the
database grows, the backups bloat: a changed database file gets stored
in full each time, so it gains nothing from the incremental backup.
With a database, it is usually better to replicate the database to
another host and then do periodic dumps.
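A hedged sketch of that periodic-dump idea, assuming MySQL and a hypothetical replica host named db-replica; the dump lands in a directory the regular backup already covers:

    # run nightly from cron; host name and paths are illustrative
    mysqldump -h db-replica --all-databases --single-transaction \
        > /var/backups/sql/all-databases-$(date +%F).sql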
Mark writes:
Just noticed the /var/log/backuppc/LOG file for BackupPC is pumping
these out mercilessly:
2008-12-10 06:49:02 BackupPC_link got error -4 when calling
MakeFileLink(/mnt/backup/pc/shuttle