Craig Barratt wrote:
Ralf Gross writes:
I schedule backups exclusively with cron, using this option and these
crontab entries:
$Conf{FullPeriod} = -1;
5 20 * * 5 /usr/share/backuppc/bin/BackupPC_serverMesg backup zorg zorg root 1 > /dev/null 2>&1
5 20 * * 1-4 /usr/share/backuppc/bin
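(As an aside, and as an assumption since the second crontab line is cut off
here: the serverMesg call takes the form

  BackupPC_serverMesg backup <hostIP> <host> <user> <doFull>

so the Monday-to-Thursday entry presumably runs the same command with the
last argument set to 0 to request incrementals instead of fulls.)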
Craig Barratt wrote:
Ralf Gross writes:
Craig Barratt wrote:
Ralf Gross writes:
Is there any side effect of setting $Conf{IncrPeriod} to a very high
value?
My first reaction was yes, but the answer is actually no. The normal
scheduler makes sure a full backup has
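(A minimal sketch of the cron-only scheduling being discussed; the IncrPeriod
value is an illustrative assumption, not taken from the thread:

  $Conf{FullPeriod} = -1;    # never let BackupPC schedule fulls itself
  $Conf{IncrPeriod} = 365;   # push automatic incrementals far into the future

With both periods out of the way, the cron entries above are the only thing
that triggers backups.)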
Hi,
today I noticed a very high load (15) on one backuppc server. The reason
was the BackupPC_tarExtract process. As a result of the high load, other
applications running on this host had problems and timed out. This is not
our main backuppc host; it is mainly used as our monitoring system
Hi,
since last weekend I have a problem with the rsync backup of one of our
servers. I'm not sure if this is related to an update of the Solaris rsyncd
(sunfreeware 2.6.6 -> 2.6.8) last Friday.
BackupPC: Debian GNU/Linux 3.1 (sarge/stable), rsync version 2.6.4
protocol version 29, backuppc
Ralf Gross said:
since last weekend I have a problem with the rsync backup of one of our
servers. I'm not sure if this is related to an update of the Solaris rsyncd
(sunfreeware 2.6.6 -> 2.6.8) last Friday.
This is definitely a problem with the 2.6.8 (sunfreeware) rsync. I
switched back to 2.6.6
Ralf Gross said:
since last weekend I have a problem with the rsync backup of one of our
servers. I'm not sure if this is related to an update of the Solaris
rsyncd
(sunfreeware 2.6.6 -> 2.6.8) last Friday.
This is definitely a problem with the 2.6.8 (sunfreeware) rsync. I
switched back
Hi,
I was looking at the backuppc home page and the sourceforge project page
for the bug tracker that BackupPC is using. I couldn't find any info on
how to file a bug. What is the recommended way to do this?
Ralf
don Paolo Benvenuto said:
I installed backuppc on an Ubuntu server, and I back up all the other
Ubuntu PCs on my LAN.
I have a problem backing up from one of the Ubuntu clients, specifically
from an Edubuntu PC. The Edubuntu developers assure me that Edubuntu is
identical to Ubuntu in all that
Casper Thomsen said:
Maybe this is a feature request, and maybe it just shows how dumb
I am---let's see.
What would be really great to have is the ability to ensure that I
keep the last n revisions of files, no matter how many fulls or
incrementals. I guess this is not the main
Hi,
I successfully tested a patched rsync version that supports ACLs. Because
backuppc uses File-RsyncP and not rsync directly, it doesn't benefit from
that.
What is the state of File-RsyncP's ACL support?
Ralf
Eric Snyder said:
I just installed BackupPC 3.0 on a new installation of Debian. I had a
previous installation and have a SCSI drive with backup data from the
old install, so I am hoping to reconfigure it to use those backups. The
current problem is that I am getting an Authorization
Eric Snyder said:
OK. I have done this. For now I have commented out the
security/authorization section. When I request http://debian/backuppc/ I
get a file not found.
Did you create the symlink index.cgi -> BackupPC_Admin?
When I request http://debian/backuppc/BackupPC_Admin I get the
Eric Snyder said:
No, my BackupPC_Admin is already in my cgi-bin directory. Do both the
symlink and the BackupPC_Admin need to be in the cgi-bin directory?
If you want to use http://debian/backuppc/ as your BackupPC link you need
an index file that Apache knows about (index.cgi).
Thanks for the
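(A sketch of the symlink mentioned above; the cgi-bin path is an assumption
and will differ between installations:

  cd /usr/lib/cgi-bin/backuppc
  ln -s BackupPC_Admin index.cgi

With that in place, a request for http://debian/backuppc/ can be answered via
the index.cgi that Apache already knows how to execute.)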
Craig Barratt wrote:
Bradley writes:
Running into a problem with my larger machines doing backups. 90% of
the time, the backup ends with the following message:
backup failed (Tar exited with error 256 () status)
I believe I read somewhere that it was due to a file changing during
Craig Barratt wrote:
What version of tar are you using? Torsten reported that the
newest version has changed the exit status in the case of
certain relatively benign warnings that are therefore considered
fatal by BackupPC.
Could you also look through the XferLOG file and confirm that
it
Holger Parplies wrote:
Craig Barratt wrote:
What version of tar are you using? Torsten reported that the
newest version has changed the exit status in the case of
certain relatively benign warnings that are therefore considered
fatal by BackupPC.
[...]
Is there a
Les Mikesell said:
I've used **3** different computers with wildly different hardware. On
the host side, I've used **4** different computers (and most of them are
high-end server hardware) with wildly different hardware. It's not
related to a specific brand or type of hardware.
But lots of
Arch Willingham wrote:
Beats me, I don't even know what it does. It's the way it is set in
the default config.pl file. I just copied the default config.pl to
machine2.pl and ran with it.
Is there something I should change?
No, the -x is okay. I didn't notice that it's a lowercase -x,
Arch Willingham wrote:
Wooohh...I hate to be a dummy but that's the sound of
this all going way over my head :) !!! If the -x is ok, what do I
need to change to have BackupPC back up itself?
I've no idea. -x disables X11 forwarding, so I don't know why it's
complaining about
Bruno Sampayo wrote:
I tried to make a backup with backuppc on Debian Sarge; the
client is a Debian Etch machine, kernel 2.6.8-2-386.
When I start the full backup for this machine, I get an error with
the following message:
[EMAIL PROTECTED]:~$ /usr/bin/perl
Arch Willingham wrote:
The backuppc server is machine2, i.e. it is trying to back up itself.
I ran that first command you gave me and it gave a weird message:
The authenticity of machine2 can't be established. RSA key
fingerprint is blah, blah, blah. Are you sure you want to continue
Bruno Sampayo wrote:
I did the tar reinstall with version 1.4, but I still have the
problem; the latest status that I got from backuppc is:
Contents of file /var/lib/backuppc/pc/portal/XferLOG.0.z, modified
2007-02-28 13:35:58 (Extracting only Errors)
Ok, but you are
Daniel Haas wrote:
But now I have the problem that we need fine-grained rights management
on our Samba server, so I have to implement ACLs.
As I read on the list, ACLs are not supported by backuppc. But I
read that star works with ACLs and the rsync command normally
works with ACLs,
Hi,
at the moment I'm taking backups of Linux clients only. We have a
couple of Windows servers that use Ghost (or similar software) to
dump a backup image to a Linux share, which then gets backed up by
backuppc.
This is working fine, but it is a waste of space. We keep 2 images (2
weeks) which
Hi,
I want to upgrade the backuppc data space on one of my backuppc
servers. /var/lib/backuppc (reiserfs) is at the moment a plain LVM volume
(1 TB, 4x250 GB, 740 GB used) and I want to move to raid5/lvm (1.5 TB,
4x500 GB).
I upgraded another server, which had no LVM volume, a few weeks
ago. This was
Adam Goryachev wrote:
Ralf Gross wrote:
Hi,
I want to upgrade the backuppc data space on one of my backuppc
servers. /var/lib/backuppc (reiserfs) is at the moment a plain LVM volume
(1 TB, 4x250 GB, 740 GB used) and I want to move to raid5/lvm (1.5 TB,
4x500 GB).
I did upgrade another
Holger Parplies wrote:
Hi,
Adam Goryachev wrote on 15.06.2007 at 11:28:13 [Re: [BackupPC-users] How to
move backuppc data from lvm to bigger disks/lvm?]:
Ralf Gross wrote:
[...]
dd if=/dev/mapper/VolGroup00-LogVol00 bs=8192 of=backuppc.dump
[...]
I think the dd data includes
CORNU Frédéric wrote:
As stated in the README.Debian, I tried to move my DATADIR to
a remote location: a Windows machine which sees its content
backed up daily by a corporate backup system. I created a
backuppc folder on the Samba share and made a symbolic link in
Stefan Degen wrote:
Is it possible to start BackupPC_nightly by hand?
As user backuppc:
/usr/share/backuppc/bin/BackupPC_nightly 0 255
Ralf
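(A related sketch, under the assumption that you'd rather have the running
daemon coordinate the nightly run than start it directly; I believe the
server accepts this message, but check your version's documentation:

  /usr/share/backuppc/bin/BackupPC_serverMesg BackupPC_nightly run

Asking the server this way lets it schedule the nightly pass so it doesn't
overlap with BackupPC_link activity.)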
Peter Carlsson wrote:
I know there are identical files on the three hosts although they are
not located in identical directory trees. How do I configure BackupPC
to find these repeated/identical files?
BackupPC finds these files automatically during backup. Only one copy
exists in the pool.
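(You can see the pooling at work yourself; a sketch, assuming the Debian
default pool location under /var/lib/backuppc -- adjust the path for your
install:

  # a pooled file gains an extra hard link for every backup file that
  # references it, so anything with a link count > 1 is being shared
  find /var/lib/backuppc/cpool -type f -links +1 | head
  stat -c '%h links: %n' /var/lib/backuppc/cpool/0/0/0/* | head
)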
Krsnendu dasa wrote:
I have a dedicated disk for BackupPC; it is using LVM. Can I use dd to
clone this to a newer hard drive?
I did this a few weeks ago and it worked.
Ralf
Hi,
I had to change the disks in our backuppc server to expand /var/lib/backuppc.
So I unmounted it, ran fsck.reiserfs, copied the partition with dd and netcat
to another system as a file, added the disks, copied the disk image back with
dd/nc, and ran resize_reiserfs and fsck.reiserfs. Everything was ok, no
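(For anyone wanting to repeat the dd/netcat copy, a minimal sketch; the
device, host name and port are assumptions, and the filesystem must be
unmounted first:

  # on the receiving system
  nc -l -p 7000 > backuppc.img
  # on the backuppc server
  dd if=/dev/mapper/vg0-backuppc bs=1M | nc receiving-host 7000

Copying the image back is the same pipeline in the other direction, followed
by resize_reiserfs and a final fsck.reiserfs as described above.)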
Ralf Gross wrote:
I had to change the disks in our backuppc server to expand /var/lib/backuppc.
So I unmounted it, ran fsck.reiserfs, copied the partition with dd and netcat
to
another system as a file, added disks, copied the disk image back with
dd/nc, ran resize_reiserfs, fsck.reiserfs
Nils Breunese (Lemonbit) wrote:
Arch Willingham wrote:
I have been looking at (and installed) both packages. I have tried
to find a comparison of the advantages and disadvantages of each as
compared to the other, but found nothing very informative. Any ideas or
thoughts from anyone
Hi,
there is a regular discussion on how to backup/move/copy the backuppc
pool. Did anyone try to back up the pool with Bacula?
I need to expand the raid volume where the pool is stored (Areca RAID
controller). Doing this without a backup is a bit frightening (I didn't
use LVM for the filesystem).
Hi,
I have been using BackupPC for many years without hassle, but something
seems to be broken now.
BackupPC 3.1 (source)
Debian Etch
xfs fs
Recently the pool was running full and I added some additional disks to the
raid volume. Backuppc had already been showing the pool size as 0.00GB
before, but I didn't realize that
Bernhard Ott wrote:
I have been using BackupPC for many years without hassle, but something
seems to be broken now.
BackupPC 3.1 (source)
Debian Etch
xfs fs
Hi Ralf,
look for the thread "no cpool info shown on web interface" (2008-04) in
the archives; Tino Schwarze found a solution for
Craig Barratt wrote:
Ralf writes:
thanks, this seems to solve the problem:
Sounds like you have the IO::Dirent + xfs problem. It's fixed
in 3.2.0 beta0.
Hm, BackupPC_Nightly is working again. But the status page still shows
0.00GB as pool size (after applying Tino's patch).
# Pool is
Ralf Gross wrote:
Craig Barratt wrote:
Ralf writes:
thanks, this seems to solve the problem:
Sounds like you have the IO::Dirent + xfs problem. It's fixed
in 3.2.0 beta0.
Hm, BackupPC_Nightly is working again. But the status page still shows
0.00GB as pool size (after
Jim Leonard wrote:
Tino Schwarze wrote:
I'm using bacula to back up the generated tar files and have them deleted
afterwards.
This is off-topic, I apologize, but if you are using Bacula, then why do
you have a BackupPC installation?
I also use bacula and backuppc to back up some TB of
Hi,
I've been using backuppc and bacula together for a long time. The amount of
data to back up is growing massively lately (mostly large video files).
At the moment I'm using backup-to-tape for the large raid arrays. Next
year I may have to back up 300-400 TB. Backuppc is used for a small
amount of data,
Robin Lee Powell wrote:
On Tue, Dec 15, 2009 at 02:33:06PM +0100, Holger Parplies wrote:
Robin Lee Powell wrote on 2009-12-15 00:22:41 -0800:
Oh, I agree; in an ideal world, it wouldn't be an issue. I'm
afraid I don't live there. :)
none of us do, but you're having problems. We
Robin Lee Powell wrote:
RedHat GFS *really* doesn't like directories with large numbers of
files. It's not a big fan of stat() calls, either.
Well, a network cluster filesystem is no fun to back up and might very
well be the bottleneck.
Ralf
Hi,
I'm faced with the growing storage demands in my department. In the
near future we will need several hundred TB, mostly large files. At the
moment we already have 80 TB of data which gets backed up to tape.
Providing the primary storage is not the big problem. My biggest
concern is the backup of the
Chris Robertson wrote:
Ralf Gross wrote:
Hi,
I'm faced with the growing storage demands in my department. In the
near future we will need several hundred TB, mostly large files. At the
moment we already have 80 TB of data which gets backed up to tape.
Providing the primary storage
Gerald Brandt wrote:
I think I have to look for a different solution; I just can't imagine a
pool with 10 TB.
* I have recently taken my DRBD mirror off-line and copied the BackupPC
directory structure to both XFS-without-DRBD and an EXT4 file system for
testing.
Les Mikesell wrote:
Ralf Gross wrote:
I think I have to look for a different solution; I just can't imagine a
pool with 10 TB.
Backuppc's usual scaling issues are with the number of files/links more than
total size, so the problems may be different when you work with huge files.
I
Les Mikesell wrote:
On 2/19/2010 9:42 AM, Ralf Gross wrote:
Les Mikesell wrote:
Ralf Gross wrote:
I think I have to look for a different solution; I just can't imagine a
pool with 10 TB.
Backuppc's usual scaling issues are with the number of files/links more
than
total size
Timothy J Massey wrote:
Ralf Gross ralf-li...@ralfgross.de wrote on 02/19/2010 10:42:35 AM:
A bit off topic:
Right now I'm looking for a cheap storage solution that is based
on Supermicro chassis with 36 drive bays (server) or 45 drive bays
(expansion unit) in 4U. Frightening
dan wrote:
You would need to move up to 15K rpm drives to have a very large array, and
the cost will grow exponentially trying to get such a large array.
As Les said, look at a ZFS array with block-level dedup. I have a 3 TB setup
right now and I have been running a backup against a
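(For reference, a sketch of turning on the block-level dedup mentioned here;
the pool/dataset name is an assumption, and dedup needs a lot of RAM:

  zfs set dedup=on tank/backuppc
  zpool list tank        # the DEDUP column shows the achieved ratio
)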
Chris Robertson wrote:
Chris Robertson wrote:
Ralf Gross wrote:
Gerald Brandt wrote:
You may want to look at this thread
http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg17234.html
I've seen this thread, but the pool sizes
Hi,
I've been using BackupPC without major problems for a few years now. Our
main fileserver has now reached 3.3 TB, and a full backup with the tar
method takes 2 days (18 MB/s).
I'd like to find out if there is something I can do to speed up the full
backups without changing the hardware.
The
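(Sanity check of those figures, as rough arithmetic: 3.3 TB is about
3,300,000 MB and 2 days is 172,800 seconds, so the average rate works out to
roughly 3,300,000 / 172,800 = 19 MB/s, in line with the reported 18 MB/s.)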
Pedro M. S. Oliveira wrote:
Have you tried the rsync method? It should be way faster than tar.
I think rsync is most useful with servers that have a slow network
connection. But the network speed is not the problem; more precisely, I
don't exactly know what the real bottleneck is.
Ralf
Tyler J. Wagner wrote:
On Wednesday 26 May 2010 14:34:40 Sorin Srbu wrote:
-Original Message-
From: Les Mikesell [mailto:lesmikes...@gmail.com]
Sent: Wednesday, May 26, 2010 2:55 PM
To: General list for user discussion, questions and support
Subject: Re: [BackupPC-users]
Ralf Gross wrote:
Ralf Gross wrote:
Tyler J. Wagner wrote:
On Wednesday 26 May 2010 14:34:40 Sorin Srbu wrote:
-Original Message-
From: Les Mikesell [mailto:lesmikes...@gmail.com]
Sent: Wednesday, May 26, 2010 2:55 PM
To: General list for user discussion
Ralf Gross wrote:
write(1, "N\2\0\7\5\3lvs\r\0\0\0\r\0\0\0lvmiopversion8\5"..., 594) = 594
select(1, [0], [], NULL, {60, 0}) = 0 (Timeout)
select(1, [0], [], NULL, {60, 0}) = 0 (Timeout)
select(1, [0], [], NULL, {60, 0}) = 0 (Timeout)
select(1, [0], [], NULL, {60, 0
Les Mikesell wrote:
On 5/26/2010 3:41 PM, Ralf Gross wrote:
Ralf Gross wrote:
write(1, "N\2\0\7\5\3lvs\r\0\0\0\r\0\0\0lvmiopversion8\5"..., 594) = 594
select(1, [0], [], NULL, {60, 0}) = 0 (Timeout)
select(1, [0], [], NULL, {60, 0}) = 0 (Timeout)
select(1, [0], [], NULL
Ralf Gross wrote:
Les Mikesell wrote:
On 5/26/2010 3:41 PM, Ralf Gross wrote:
Ralf Gross wrote:
write(1, "N\2\0\7\5\3lvs\r\0\0\0\r\0\0\0lvmiopversion8\5"..., 594) = 594
select(1, [0], [], NULL, {60, 0}) = 0 (Timeout)
select(1, [0], [], NULL, {60, 0}) = 0 (Timeout
Les Mikesell wrote:
Ralf Gross wrote:
Ok, the first rsync full backup (488) completed. It took 500 min longer than
the last tar full backup (482).
Backup#  Type  Filled  Level  Start Date  Duration/mins  Age/days
482      full  yes     0      5/19 02:05  3223.2
Les Mikesell wrote:
Ralf Gross wrote:
the RsyncP man page tells me this:
http://search.cpan.org/~cbarratt/File-RsyncP-0.68/lib/File/RsyncP.pm
File::RsyncP does not compute file deltas (ie: it behaves as though
--whole-file is specified) or implement exclude or include options
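(For comparison, a sketch with native rsync -- the paths are made up: delta
transfer is rsync's default for remote copies, and --whole-file turns it off,
e.g.

  rsync -av --whole-file /data/ backup-host:/restore/

File::RsyncP always behaves as if that option were set, so changed files are
sent in full rather than as deltas.)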
Hi,
I switched from tar to rsync a few weeks ago. Now I had to restore the
first file from an older backup (made before the tar -> rsync switch).
I get the following messages in the xfer log:
Running: /usr/bin/ssh -c blowfish -q -x -l root vu0em003-1
/usr/bin/rsync --server --numeric-ids --perms
Holger Parplies wrote:
Hi,
Carl Wilhelm Soderstrom wrote on 2011-05-26 06:05:48 -0500 [Re:
[BackupPC-users] Best FS for BackupPC]:
On 05/26 12:20, Adam Goryachev wrote:
BTW, specifically related to backuppc, many years ago, reiserfsck was
perfect as it doesn't have any concept or
SSzretter wrote:
It's set to:
$Conf{BackupFilesOnly} = {};
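(For comparison -- a sketch with made-up paths, not taken from this thread --
a populated setting that restricts a share to a few directories looks like:

  $Conf{BackupFilesOnly} = {
      '/' => ['/home', '/etc'],   # only these trees from the '/' share
  };

An empty hash, as above, means no restriction at all, so every file in the
share is considered.)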
My file count / reuse summary in the web admin for the machine's latest
backup is basically unchanged (I would expect some numbers to drop).
In the latest xfer log from this morning, just a SMALL sampling:
create d