Hi all,
This evening I tracked down a misconfiguration of BackupPC (v2.1.2) that
was causing a bandwidth spike. I had set IncrPeriod to 0.00, thinking
that no incrementals would be run. Boy, was that wrong! Instead, it ran
incrementals one after another during
off-peak
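For anyone else who hits this: a sketch of the setting I should have used instead (my reading of the config.pl comments, so double-check against your version):

```perl
# $Conf{IncrPeriod} is the minimum age (in days) of the last backup
# before a new incremental becomes due; 0.00 therefore means "an
# incremental is always due". To effectively disable incrementals,
# make the period longer than you will ever wait:
$Conf{IncrPeriod} = 10000;   # ~27 years; no incremental ever becomes due
```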
William,
Is the guest machine multi-homed, i.e. with multiple network interface cards?
Linux binds an IP address to the entire OS rather than to a specific
interface, unlike some other UNIX flavors that bind it only to the
interface. If it is really multi-homed, you might get a clue by looking at
Remote backups of a win2003 server keep failing after a certain amount of
time (almost every time about 20h later / 11 GB of data already done).
Backup method: Rsyncd
Server client: Rsync via DeltaCopy (a great piece of easy-to-configure
software, thanks to someone on this list; works perfectly for
Hi, I have the same problem, but not with Windows XP. I am trying to back up
the same server where I have installed BackupPC.
My configuration is the following:
$Conf{FullKeepCnt} = [
4,
0,
6
];
$Conf{IncrKeepCnt} = 28;
$Conf{TarShareName} = [
'/etc'
];
$Conf{XferMethod} = 'tar';
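For reference, my understanding of what that FullKeepCnt does under BackupPC's exponential-keep scheme (worth verifying against the config.pl comments for your version):

```perl
# Entry N keeps full backups spaced 2^N x $Conf{FullPeriod} apart:
$Conf{FullKeepCnt} = [
    4,   # 4 fulls at 1 x FullPeriod spacing (the most recent ones)
    0,   # none kept at 2 x FullPeriod spacing
    6    # 6 older fulls at 4 x FullPeriod spacing
];
```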
William McKee wrote:
I use VMware on a co-lo server which has 3 guests that all get backed
up by BackupPC. I could see that the host was transmitting massive
amounts of data (130 GB) which appeared to be coming from one of the
three guests.
I forgot to add the windows logbook errors:
1)
The description for Event ID ( 0 ) in Source ( rsyncd ) cannot be found. The
local computer may not have the necessary registry information or message
DLL files to display messages from a remote computer. You may be able to use
the /AUXSOURCE= flag
hello,
aahhaah, the world is not infinite, and neither are my resources
(hardware/brain) :(
Can we implement such a feature?
I'm thinking of putting quotas on my users, telling them how much data they
can back up. To do this, the backup would proceed like this:
1 - first do a disk usage (DU) with the exclude
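Step 1 could be sketched like this (a rough sketch only; the quota value and exclude patterns are made up for illustration):

```shell
#!/bin/sh
# Pre-backup quota check sketch: measure what would be backed up,
# honoring the same excludes as the backup, and refuse if over quota.
# QUOTA_MB and the exclude patterns are illustrative, not real config.
TARGET=${1:-.}
QUOTA_MB=${2:-1000}
USED_MB=$(du -sm --exclude='*.iso' --exclude='*.tmp' "$TARGET" | cut -f1)
if [ "$USED_MB" -gt "$QUOTA_MB" ]; then
    echo "over quota: ${USED_MB}MB used, ${QUOTA_MB}MB allowed"
    exit 1
fi
echo "within quota: ${USED_MB}MB used, ${QUOTA_MB}MB allowed"
```

A later step would then feed the result into the host's disable flag or an email to the user.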
Hi all, recently I've installed BackupPC 3.1.0 on my CentOS 5.2 server.
When I access the BackupPC management web interface I see "Error: Wrong user:
my userid is 48, instead of 150(backuppc)", where userid 48 is the apache
user and 150 is backuppc.
Following the BackupPC documentation I've checked the
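In case it helps while you dig through the docs: the usual cause is that Apache (uid 48) runs the CGI instead of the backuppc user (uid 150). A sketch of the common fix, assuming a setuid-capable BackupPC_Admin at a Red Hat-style path (both assumptions; adjust for your install):

```shell
#!/bin/sh
# Make the CGI run as the backuppc user by setting setuid on it.
# The path below is an assumption; RPM installs may differ.
CGI=${1:-/usr/share/backuppc/sbin/BackupPC_Admin}
chown backuppc:backuppc "$CGI" 2>/dev/null \
  || echo "chown failed: run as root on a host with a backuppc user"
chmod u+s "$CGI"
ls -l "$CGI"
```

The alternative is to make Apache itself run as backuppc in the BackupPC virtual host config.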
Hi,
Koen Linders wrote on 2009-01-07 10:17:20 - [[BackupPC-users] Remote
backups of a win2003 server keeps failing after a certain amount of time.]:
Remote backups of a win2003 server keeps failing after a certain amount of
time (almost every time about 20h later / data 11 GB already
Thanks for the reply.
I changed the value to 144000 for this client and will wait another day (or
two).
Maybe it hangs on a specific file? I hope changing the value worked.
Anyway, thx :)
Koen Linders
-----Original Message-----
From: Holger Parplies [mailto:wb...@parplies.de]
Sent:
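For archive readers: the value being raised here is presumably $Conf{ClientTimeout}; its default of 72000 seconds is exactly 20 hours, which matches the point at which the backups fail. A config.pl fragment:

```perl
# Default is 72000 s (20 h); long 11+ GB transfers can exceed it.
$Conf{ClientTimeout} = 144000;   # 40 hours
```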
Renke Brausse wrote:
the directory is hard-coded; you can only change it at compile time.
The easiest solution is to bind-mount /var/lib/backuppc to a directory
of your choice.
If you've installed with RPMs, say on CentOS or RHEL, then it's not
hard-coded. Or if you've installed it with
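The bind-mount suggestion above, sketched with example paths (assuming the pool should really live under /data/backuppc):

```shell
# One-shot, as root (example paths):
mount --bind /data/backuppc /var/lib/backuppc

# Persistent, via an /etc/fstab entry:
# /data/backuppc  /var/lib/backuppc  none  bind  0 0
```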
cedric briner wrote:
hello,
aahhaah, the world is not infinite, and neither are my resources
(hardware/brain) :(
Can we implement such a feature?
I'm thinking of putting quotas on my users, telling them how much data they
can back up. To do this, the backup would proceed like this:
1 - first do a
the directory is hard-coded; you can only change it at compile time.
The easiest solution is to bind-mount /var/lib/backuppc to a directory
of your choice.
Isn't this set in the config.pl file using $Conf{TopDir}?
good point - maybe I'm just outdated and stuck on 2.x :)
Les Mikesell wrote:
Matthias Meyer wrote:
I use rsyncd to back up both Windows and Linux clients.
Is it possible to examine or calculate the progress of an actual running
backup?
No. GNU tar has a way to get an estimate of the size of an incremental
run that Amanda uses to help
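For what it's worth, the GNU tar trick Les mentions can be sketched like this (directory and date are placeholders; Amanda's real estimate logic is more involved):

```shell
#!/bin/sh
# Estimate the size of an incremental run by writing the archive to
# /dev/null (which GNU tar special-cases to skip most of the I/O) and
# reading the --totals summary.
DIR=${1:-/etc}
SINCE=${2:-'2009-01-01 00:00:00'}
tar -cf /dev/null --newer="$SINCE" --totals "$DIR" 2>&1 | grep 'Total bytes'
```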
On Sun, Jan 04, 2009 at 01:38:19PM +0100, Bernhard Schneck wrote:
I've been using BackupPC (3.0.0) on Ubuntu (8.x) for a while
and am quite happy with it ... thanks a lot for the effort
to all BackupPC developers and contributors!
I've started to look at the Archive functions.
What I
On Wed, Dec 31, 2008 at 02:08:27AM +1000, jed wrote:
Is this app purely for backup across networks to servers, or is it also
perfectly fine for local backups of stipulated dirs?
You could set up a server and have it only back up itself - no problem.
For starters I'm just wanting to regularly
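For a self-backup of the server, a config.pl sketch along the lines of what the documentation suggests for localhost (the share names here are examples):

```perl
$Conf{XferMethod}   = 'tar';
$Conf{TarShareName} = ['/etc', '/home'];
# No ssh wrapper needed when the client is the server itself:
$Conf{TarClientCmd} = '/usr/bin/env LC_ALL=C $tarPath -c -v -f - -C $shareName+ --totals';
```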
Hi Andy
Hi Cedric,
Unless I'm missing something, why wouldn't you implement
quotas on the users' data before backups? Most systems have
this capability already. It'd be much simpler than trying to
get users to prioritize which things they want backed up.
I'm not really sure to very well
Koen Linders wrote at about 14:05:56 + on Wednesday, January 7, 2009:
Thanks for the reply.
I changed the value to 144000 for this client and will wait another day (or
two).
Maybe it hangs on a specific file? I hope changing the value worked.
That is very likely with
$Conf{TopDir} appears to be a hard-coded variable. My understanding
is that, in your configuration file, you can set it to any value you
want, but that the result you intuitively expect will only be achieved
as long as you stay within the file system that contains the BackupPC
configuration
Juergen Harms wrote:
$Conf{TopDir} appears to be a hard-coded variable. My understanding
is that, in your configuration file, you can set it to any value you
want, but that the result you intuitively expect will only be achieved
as long as you stay within the file system that contains
Hi all:
I am having an issue with backup growth. I have approx. 100 hosts that
should be in a steady state: all have 9 full backups and 14 incrementals,
which are the maximum numbers of retained backups.
The amount of data being backed up shouldn't be varying much, but I
have been continually losing
cedric briner wrote:
Hi Andy
Hi Cedric,
Unless I'm missing something, why wouldn't you implement
quotas on the users' data before backups? Most systems have
this capability already. It'd be much simpler than trying to
get users to prioritize which things they want backed up.
I'm not really
hi,
Running 3.1.0 on CentOS 5.2 with 40 clients so far, mostly Linux systems.
Added my first Vista system today. Used DeltaCopy to install rsync.
The backup worked but still had 13446 transfer errors, mainly "file name
too long".
Remote[1]: rsync: readlink_stat(All Users/Application
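In case it is the usual Vista pitfall: "Application Data" and similar entries are junction points that loop back into themselves, so rsync recurses until the path length explodes. Excluding them is a common workaround; the patterns below are illustrative guesses, not tested config:

```perl
$Conf{BackupFilesExclude} = {
    '*' => [
        '/Users/*/AppData/Local/Application Data',  # self-referencing junction
        '/ProgramData/Application Data',
        '/Documents and Settings',                  # junction to /Users
    ],
};
```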
Hi!
Thank you for answers.
Tagore
+--
|This was sent by hirleve...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--
Hi
I got a strange problem doing incrementals with tar over ssh using
--newer=$incrDate+. It seems to be an escaping problem with part of the time
reference for the incremental. The date part of --newer is parsed
correctly, but the hour part isn't: it gets changed to
00:00:00, and tar interprets
Simone Marzona wrote at about 22:38:42 +0100 on Wednesday, January 7, 2009:
Hi
I got a strange problem doing incrementals with tar over ssh using
--newer=$incrDate+. It seems to be an escaping problem with part of the time
reference for the incremental. The date part of --newer is parsed
Hi Mark,
Mark Maciolek wrote:
hi,
Running 3.1.0 on CentOS 5.2 with 40 clients so far, mostly Linux systems.
Added my first Vista system today. Used DeltaCopy to install rsync.
The backup worked but still had 13446 transfer errors, mainly "file name
too long".
Remote[1]: rsync:
Simone writes:
I got a strange problem doing incrementals with tar over ssh using
--newer=$incrDate+. It seems to be an escaping problem with part of the time
reference for the incremental.
Yes, the escaping isn't happening. The $incrDate+ form means
to escape the value, so that is what you should use
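Concretely, the form shipped in config.pl (if memory serves) is:

```perl
# The trailing '+' tells BackupPC to shell-escape the substituted
# value, so the hour part of $incrDate survives the trip through ssh:
$Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
```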
Omar writes:
$Conf{TarClientCmd} = 'env LC_ALL=C /usr/bin/sudo $tarPath -c -v -f - -C $shareName+'
                    . ' --totals';
$Conf{TarClientRestoreCmd} = 'env LC_ALL=C /usr/bin/sudo $tarPath -x -p --numeric-owner --same-owner'
                    . ' -v -f - -C $shareName+';
Hi Andy
Hi Cedric,
Unless I'm missing something, why wouldn't you implement
quotas on the users' data before backups? Most systems have
this capability already. It'd be much simpler than trying to
get users to prioritize which things they want backed up.
I'm not really sure to very well