On doing the following:
su - backuppc -c /usr/share/backuppc/bin/BackupPC_tarCreate -h ball -n 555 -s
ball2 \"Games/MoOII\" > /tmp/crap.tar
I get:
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory
'ball2/Games/MoOII'
But:
$ find /var/lib/backuppc/pc/ball/*/fball2/fGames/ |
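For comparison, here is the plain (non-su) shape of the same call, which can
help rule out shell-quoting problems introduced by su -c; the host, backup
number, share, and path are taken from the error above, and sudo is only an
assumption about how the backuppc user is reachable:

sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate \
    -h ball -n 555 -s ball2 /Games/MoOII > /tmp/crap.tar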
On Tue, Dec 20, 2005 at 03:39:02AM -0800, Craig Barratt wrote:
Robin Lee Powell writes:
On doing the following:
su - backuppc -c /usr/share/backuppc/bin/BackupPC_tarCreate -h ball -n 555
-s ball2 \"Games/MoOII\" > /tmp/crap.tar
I get:
/usr/share/backuppc/bin/BackupPC_tarCreate
On Tue, Dec 20, 2005 at 07:33:42AM -0600, Carl Wilhelm Soderstrom
wrote:
On 12/20 12:42 , Robin Lee Powell wrote:
su - backuppc -c /usr/share/backuppc/bin/BackupPC_tarCreate -h ball -n 555
-s ball2 \"Games/MoOII\" > /tmp/crap.tar
I get:
/usr/share/backuppc/bin/BackupPC_tarCreate: bad
A feature I'd really like, and would be willing to give gifts in
return for, would be something like this:
User touches a file named .donotbackup in a directory. Backuppc
notices this and does not back up that directory. The sysadmin
doesn't have to alter the system include list.
-Robin
--
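GNU tar (1.20 and later) has an option that behaves much like the feature
requested above: --exclude-tag-all skips any directory containing a named
marker file. A minimal sketch of the idea outside of BackupPC (the paths are
made up; wiring the option into the tar XferMethod arguments is left as an
exercise):

# the user marks a directory they don't want backed up
touch /home/alice/scratch/.donotbackup
# any directory containing .donotbackup is then skipped entirely
tar -cf /tmp/example.tar --exclude-tag-all=.donotbackup /home/alice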
I've been using my own scripts
http://digitalkingdom.org/~rlpowell/hobbies/backups.html to remotely
mirror backuppc's data in an encrypted fashion.
The problem is, the time rsync takes seems to keep growing. I
expect this to continue more-or-less without bound, and it's already
pretty onerous.
On Wed, Feb 06, 2008 at 10:20:46AM +0100, Nils Breunese (Lemonbit)
wrote:
It is generally believed on this list (I believe) that it's not
feasible to use something as 'high-level' as rsync to replicate
BackupPC's pool. The amount of memory needed by rsync will just
explode because of all the
On Wed, Feb 06, 2008 at 11:22:10AM -0600, Les Mikesell wrote:
Robin Lee Powell wrote:
On Wed, Feb 06, 2008 at 10:20:46AM +0100, Nils Breunese
(Lemonbit) wrote:
It is generally believed on this list (I believe) that it's not
feasible to use something as 'high-level' as rsync to replicate
On Wed, Feb 06, 2008 at 09:33:47AM -0800, Robin Lee Powell wrote:
On Wed, Feb 06, 2008 at 11:22:10AM -0600, Les Mikesell wrote:
Robin Lee Powell wrote:
On Wed, Feb 06, 2008 at 10:20:46AM +0100, Nils Breunese
(Lemonbit) wrote:
It is generally believed on this list (I believe) that it's
On Wed, Feb 06, 2008 at 12:03:03PM -0600, Carl Wilhelm Soderstrom
wrote:
On 02/06 09:39 , Robin Lee Powell wrote:
My backuppc pool and pc directories together have 2442024
files, and 10325584 KiB of data.
If I'm reading that correctly, that's only about 10GB of data.
Yes, but lots
On Wed, Feb 06, 2008 at 09:33:47AM -0800, Robin Lee Powell wrote:
This reminds me: is there some fundamental reason backuppc can't
use symlinks? It would make so many things like this *so* much
easier. It's such a great package otherwise; this is the only thing
that's given me cause
On Wed, Feb 06, 2008 at 02:20:09PM -0500, Paul Fox wrote:
Robin Lee Powell [EMAIL PROTECTED] wrote:
On Wed, Feb 06, 2008 at 09:33:47AM -0800, Robin Lee Powell
wrote:
This reminds me: is there some fundamental reason backuppc
can't use symlinks? It would make so many things like
On Wed, Feb 06, 2008 at 02:03:04PM -0600, Les Mikesell wrote:
Robin Lee Powell wrote:
This reminds me: is there some fundamental reason backuppc
can't use symlinks? It would make so many things like
this *so* much easier. It's such a great package otherwise;
this is the only
On Wed, Feb 06, 2008 at 02:11:16PM -0600, Carl Wilhelm Soderstrom
wrote:
On 02/06 11:14 , Robin Lee Powell wrote:
On Wed, Feb 06, 2008 at 12:03:03PM -0600, Carl Wilhelm
Soderstrom wrote:
On 02/06 09:39 , Robin Lee Powell wrote:
My backuppc pool and pc directories together have 2442024
In the past, I've done my remote mirrors of my backuppc backups one
of two ways:
1. Run tarCreate or whatever, and create giant tarballs of the
things I've backed up. In the past, this has been totally
inappropriate for remote mirroring, because encrypting the file
would kill rsync's ability
On Wed, Feb 06, 2008 at 02:35:21PM -0600, Carl Wilhelm Soderstrom
wrote:
On 02/06 12:20 , Robin Lee Powell wrote:
Yes, but are you trying to maintain a remote sync over a DSL
line? :D
no; because I have all those files. :)
If I have an offsite backup server; I just have it do backups
On Wed, Feb 06, 2008 at 03:19:32PM -0600, Les Mikesell wrote:
Robin Lee Powell wrote:
Again: not being able to reasonably mirror the backup system is a
Real Problem; do you have any other ideas as to how to fix it?
I do it locally with raid1 mirroring and physically rotate the
drives
On Wed, Feb 06, 2008 at 03:19:32PM -0600, Les Mikesell wrote:
Or, if you want a local copy too and don't want to burden the
target with 2 runs, just do a straight uncompressed rsync copy
locally, then let your remote backuppc run against that to save
your compressed history on an encrypted
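A minimal sketch of that two-step approach, with made-up paths: keep a plain
uncompressed local mirror with rsync, and point the offsite BackupPC at the
mirror rather than at the live data:

# local, uncompressed copy (preserves hard links, permissions, and times)
rsync -aH --delete /srv/data/ /srv/mirror/data/
# the remote BackupPC host then backs up /srv/mirror/data/ over ssh+rsync as usual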
On Thu, Feb 07, 2008 at 03:04:40PM -0500, Joe Krahn wrote:
Robin Lee Powell wrote:
Y'all have made it clear that rsync -H doesn't work too well
with backuppc archives; what about cpio? Does it do a decent
job of preserving hard links without consuming all your RAM?
-Robin
Y'all have made it clear that rsync -H doesn't work too well with
backuppc archives; what about cpio? Does it do a decent job of
preserving hard links without consuming all your RAM?
-Robin
--
Lojban Reason #17: http://en.wikipedia.org/wiki/Buffalo_buffalo
Proud Supporter of the Singularity
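For reference, the usual copy-pass form of a cpio replication, with made-up
paths; whether its hard-link table stays manageable on a pool with millions of
linked files is exactly the open question above:

# run as root so ownership is preserved
cd /var/lib/backuppc
find . -depth -print0 | cpio -0 -pdm --sparse /mnt/offsite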
On Thu, Feb 07, 2008 at 08:40:29AM -0600, Carl Wilhelm Soderstrom
wrote:
On 02/06 05:38 , Les Mikesell wrote:
Instead of running backuppc locally to your source data, just
have one machine that has copy of everything.
Look into tools like rsnapshot or rdiff-backup. It sounds like the
Looking at the code (sub BackupFailCleanup in BackupPC_dump), it
would appear that partial backups are only made for fulls, not
incrementals. Is there a reason for this?
-Robin
--
They say: The first AIs will be built by the military as weapons.
And I'm thinking: Does it even occur to you
It seems to me that rsync's memory bloat issues, which have been
discussed here many times, would be basically fixed by making
File::RsyncP and backuppc itself support rsync 3.0's incremental
file transfer stuff. Is anyone working on that?
-Robin
--
They say: The first AIs will be built by
On Thu, Dec 03, 2009 at 08:13:47PM +0000, Tyler J. Wagner wrote:
Are you sure this isn't a ClientTimeout problem? Try increasing
it and see if the backup runs for longer.
Just as a general comment (I've been reviewing all the SIGPIPE mails
and people keep saying that), no. SIGPIPE means the
On Sun, Dec 13, 2009 at 03:46:50PM -0500, Jeffrey J. Kosowsky wrote:
Robin Lee Powell wrote at about 15:07:04 -0800 on Saturday,
December 12, 2009:
It seems to me that rsync's memory bloat issues, which have
been discussed here many times, would be basically fixed by
making File
I've only looked at the code briefly, but I believe this *should* be
possible. I don't know if I'll be implementing it, at least not
right away, but it shouldn't actually be that hard, so I wanted to
throw it out so someone else could run with it if ey wants.
It's an idea I had about rsync
On Sun, Dec 13, 2009 at 11:56:59PM -0500, Jeffrey J. Kosowsky wrote:
Unfortunately, I don't think it is that simple. If it were, then
rsync would have been written that way back in version .001. I
mean there is a reason that rsync memory usage increases as the
number of files increases (even
On Mon, Dec 14, 2009 at 07:57:10AM -0600, Les Mikesell wrote:
Robin Lee Powell wrote:
I've only looked at the code briefly, but I believe this
*should* be possible. I don't know if I'll be implementing it,
at least not right away, but it shouldn't actually be that hard,
so I wanted
On Mon, Dec 14, 2009 at 02:17:01PM -0500, Jeffrey J. Kosowsky wrote:
Robin Lee Powell wrote at about 10:12:28 -0800 on Monday, December 14, 2009:
On Mon, Dec 14, 2009 at 07:57:10AM -0600, Les Mikesell wrote:
You can, however, explicitly break the runs at top-level
directory
On Mon, Dec 14, 2009 at 02:08:31PM -0500, Jeffrey J. Kosowsky wrote:
Robin Lee Powell wrote at about 10:10:17 -0800 on Monday, December 14, 2009:
Do you actually see a *problem* with it, or are you just
assuming it won't work because it seems too easy?
The problem I see is that backuppc
On Mon, Dec 14, 2009 at 08:20:07PM -0600, Les Mikesell wrote:
Robin Lee Powell wrote:
Asking rsync, and ssh, and a pair of firewalls and load
balancers (it's complicated) to stay perfectly fine for almost a
full day is really asking a whole hell of a lot.
I don't think that should
On Tue, Dec 15, 2009 at 02:33:06PM +0100, Holger Parplies wrote:
Robin Lee Powell wrote on 2009-12-15 00:22:41 -0800:
Oh, I agree; in an ideal world, it wouldn't be an issue. I'm
afraid I don't live there. :)
none of us do, but you're having problems. We aren't.
How many of you
On Tue, Dec 15, 2009 at 06:27:41AM -0600, Carl Wilhelm Soderstrom
wrote:
On 12/14 04:25 , Robin Lee Powell wrote:
Not with large trees it isn't. I have 3.5 million files, and
more than 300GiB of data, in one file system. The last
incremental took *twenty one hours*. I have another backup
On Tue, Dec 15, 2009 at 05:42:55PM -0800, Robin Lee Powell wrote:
Just to give a sense of scale here:
# date ; find /pictures -xdev -type f -printf '%h\n' > /tmp/dirs ; date
Tue Dec 15 12:50:57 PST 2009
Tue Dec 15 17:26:44 PST 2009
(something I ran to try to figure out how to partition
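A natural next step with that output (assuming the find above wrote one parent
directory per file into /tmp/dirs) is to count files per directory and see
where the bulk sits:

sort /tmp/dirs | uniq -c | sort -rn | head -20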
I want to start by saying that I appreciate all the help and
suggestions y'all have given on something that's obviously not your
problem. :) Unfortunately, it looks like this problem is (1) far
more interesting than I thought and (2) might be in BackupPC itself.
On Tue, Dec 15, 2009 at
Sorry, a couple of things I forgot.
On Wed, Dec 16, 2009 at 11:15:37AM -0800, Robin Lee Powell wrote:
Anyways, the one that has the problem consistently *also* always has
it in exactly the same place; I was watching it in basically every
way possible, so here comes the debugging stuff
One last thing: Stop/Dequeue Backup did the right thing: all parts of
the backup, on both server and client, were torn down correctly.
So, clearly, there was nothing wrong with the communication between
parts as such.
-Robin
--
They say: The first AIs will be built by the military as weapons.
On Wed, Dec 16, 2009 at 03:13:23PM -0600, Les Mikesell wrote:
Robin Lee Powell wrote:
They're just not *doing* anything. Nothing has errored out; BackupPC
thinks everything is fine.
Some of the places this is happening are very small backups that
usually take a matter of minutes
On Wed, Dec 16, 2009 at 01:46:50PM -0800, Robin Lee Powell wrote:
On Wed, Dec 16, 2009 at 03:13:23PM -0600, Les Mikesell wrote:
Robin Lee Powell wrote:
They're just not *doing* anything. Nothing has errored out;
BackupPC thinks everything is fine.
Some of the places
Once again, the issue of the giant pre-3.0 rsyncs that backuppc
causes to form has come up, but this time I'm being paid to fix it,
more or less.
If I'm backing up data that I'm sure doesn't have internal hard
links (or I'm sure I don't care), is there any problem with having
backuppc simply
A customer we're backing up has a directory with ~500 subdirs and
hundreds of GiB of data. We're using BackupPC in rsync+ssh mode.
As a first pass at breaking that up, I made a bunch of separate host
entries like /A/*0, /A/*1, ... (all the dirs have numeric names).
That seems to select the
, Robin Lee Powell
rlpow...@digitalkingdom.org wrote:
A customer we're backing up has a directory with ~500 subdirs and
hundreds of GiB of data. We're using BackupPC in rsync+ssh mode.
As a first pass at breaking that up, I made a bunch of separate host
entries like /A/*0, /A/*1
On Tue, May 18, 2010 at 09:30:43PM +0000, John Rouillard wrote:
On Tue, May 18, 2010 at 02:04:46PM -0700, Robin Lee Powell wrote:
A customer we're backing up has a directory with ~500 subdirs
and hundreds of GiB of data. We're using BackupPC in rsync+ssh
mode.
As a first pass
On Wed, May 19, 2010 at 11:14:48AM -0700, Robin Lee Powell wrote:
On Tue, May 18, 2010 at 09:30:43PM +0000, John Rouillard wrote:
On Tue, May 18, 2010 at 02:04:46PM -0700, Robin Lee Powell
wrote:
A customer we're backing up has a directory with ~500 subdirs
and hundreds of GiB of data
On Tue, May 25, 2010 at 11:13:44AM -0700, Robin Lee Powell wrote:
On Wed, May 19, 2010 at 11:14:48AM -0700, Robin Lee Powell wrote:
On Tue, May 18, 2010 at 09:30:43PM +0000, John Rouillard wrote:
On Tue, May 18, 2010 at 02:04:46PM -0700, Robin Lee Powell
wrote:
A customer we're
On Tue, May 25, 2010 at 01:28:19PM -0500, Les Mikesell wrote:
On 5/25/2010 1:13 PM, Robin Lee Powell wrote:
That's a fantastic idea! I don't even need to do anything
complicated; just use lockfile /tmp/backuppc OSLT, since I
only care about not overloading single hosts
On Tue, May 25, 2010 at 01:51:30PM -0500, Les Mikesell wrote:
On 5/25/2010 1:29 PM, Robin Lee Powell wrote:
That's a fantastic idea! I don't even need to do anything
complicated; just use lockfile /tmp/backuppc OSLT, since I
only care about not overloading single hosts
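A sketch of the locking idea mentioned above, using util-linux flock(1) rather
than procmail's lockfile: a hypothetical wrapper placed in front of the
per-host ssh/rsync command (for instance ahead of the ssh in
$Conf{RsyncClientCmd}) so that several pseudo-hosts pointing at the same real
machine never run at once:

#!/bin/bash
# usage: serialize-host <real-host> <command> [args...]
real_host="$1"; shift
# wait for an exclusive lock on the per-client lock file, then run the command
exec flock "/tmp/backuppc-${real_host}.lock" "$@"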
On Tue, May 25, 2010 at 02:08:49PM -0500, Les Mikesell wrote:
On 5/25/2010 1:55 PM, Robin Lee Powell wrote:
I don't know, but regardless there are always at least 5 of
those, is what I'm saying.
But are they the same 5? If you simply can't transfer the
amount of data you have there's
On Tue, May 25, 2010 at 02:45:00PM -0500, Les Mikesell wrote:
On 5/25/2010 2:19 PM, Robin Lee Powell wrote:
It should take about 4 days to do all the jobs. There's one
job that is 10 days old. Therefore I conclude that it's not
doing the right thing. :)
OK, agreed, but did you
On Tue, May 25, 2010 at 03:07:13PM -0500, Les Mikesell wrote:
On 5/25/2010 2:49 PM, Robin Lee Powell wrote:
Without looking at the code, I'd guess that it would go through
the rest of the list first before retrying failed jobs - but
that's just a guess. Maybe it would help to lower
On Thu, May 27, 2010 at 04:36:46AM -0400, mox wrote:
Hello
I have a box with Suse 11.1 and a service that (I don't know why)
stops at the same time every day. I'm trying to find out why it
comes to a halt, but in the meantime I would like to restart it
automatically.
You might want to look at various init.d
I have a fairly large (171 hosts) backup environment that seems to
be using rather more disk than it should.
GUI says: Pool is 3055.59GB comprising 7361233 files and 4369
directories (as of 9/16 01:33),
df says:
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/local-backups
On Fri, Sep 17, 2010 at 07:15:49PM +0800, Chris Hoy Poy wrote:
Hi Robin,
- disk fragmentation coupled with minimum file size allocation
issues (?) ie lots of the same large file updated in place
might lead to this scenario, I think?
I guess? Don't know how I'd check that.
-
On Fri, Sep 17, 2010 at 01:48:55PM +0200, Sorin Srbu wrote:
Wasn't there a gotcha' running the nightly cleanup manually?
Was there? I'd very much like to hear about it.
What's the right way to do that?
-Robin
--
http://singinst.org/ : Our last, best hope for a fantastic future.
Lojban
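For reference, asking the running daemon to kick off a nightly pass (the same
BackupPC_serverMesg form that turns up elsewhere in these messages), rather
than running BackupPC_nightly by hand against a live pool, looks like:

sudo -u backuppc BackupPC_serverMesg BackupPC_nightly run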
We've got 3 machines with 3.2T, 4.2T, and 851G of backups, all
gathered via rsync over ssh *across the network between distant data
centers* (the backups are in a totally different location than the
servers), each server with 150+ machines to backup every day... and
it's actually working.
I
On Mon, Sep 20, 2010 at 09:56:33PM -0700, Robin Lee Powell wrote:
We've got 3 machines with 3.2T, 4.2T, and 851G of backups, all
gathered via rsync over ssh *across the network between distant
data centers* (the backups are in a totally different location
than the servers), each server
On Mon, Sep 20, 2010 at 11:08:17AM +0200, Sorin Srbu wrote:
-Original Message- From: Robin Lee Powell
[mailto:rlpow...@digitalkingdom.org] Sent: Monday, September 20,
2010 3:47 AM To: sorin.s...@orgfarm.uu.se; General list for user
discussion, questions and support Subject: How to run
There is something *very* wrong with either the tar used to make the
archive, or the tar used to restore. I wouldn't trust anything it
outputs at all.
What version of tar on both ends?
Have you tried getting a zip archive from the GUI instead? Or using
BackupPC_zipCreate on the CLI?
-Robin
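Two quick checks along those lines, with made-up host, share, and paths;
BackupPC_zipCreate takes the same host/dump/share style of arguments as
BackupPC_tarCreate and writes the zip to stdout (check the usage text on your
install):

# compare tar versions on the server and the restore target
tar --version | head -1
ssh restore-target 'tar --version | head -1'
# pull the same files as a zip instead, to rule out the tar path entirely
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_zipCreate \
    -h somehost -n -1 -s /share /some/dir > /tmp/check.zip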
On Wed, Sep 22, 2010 at 02:31:29AM -0400, Jeffrey J. Kosowsky wrote:
Honestly, I never really looked into whether it enforces the
constraints you mention above. But looking at the code, it seems
that these constraints are enforced only by the BackupPC main
routine (i.e. daemon) itself (which
On Thu, Sep 23, 2010 at 05:52:26PM +0200, Marcus Hardt wrote:
[..]
I think it does the basic permissions that map to unix
equivalents. It doesn't preserve acls, nor does it have any way
to work around the existing ones - so you may have files that
you can read in the backups but can't
Another question: The script, running with -c, failed eventually
at this line:
my $err = $bpc->ServerConnect($Conf{ServerHost}, $Conf{ServerPort});
My serverport is set to -1; does it need to be set for this to work?
Could you not just call BackupPC_ServerMesg instead? Perhaps I
should
We add a lot of stuff automatically to our backuppc configs, and
manually going into the UI and doing the config reload is easy to
forget. Can it be done on the command line without breaking any
backups (i.e. without restarting)?
-Robin
--
http://singinst.org/ : Our last, best hope for a
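A sketch of what that might look like non-interactively, which should be
equivalent to the reload link in the web UI:

sudo -u backuppc BackupPC_serverMesg server reload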
On Tue, Sep 28, 2010 at 11:16:51PM -0700, Craig Barratt wrote:
Robin writes:
We add a lot of stuff automatically to our backuppc configs, and
manually going into the UI and doing the config reload is easy
to forget. Can it be done on the command line without breaking
any backups (i.e.
rsync --fuzzy seems to break BackupPC; fileListReceived breaks.
Could that be fixed easily?
-Robin
--
http://singinst.org/ : Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is
On Wed, Sep 29, 2010 at 03:47:25PM -0700, Craig Barratt wrote:
Robin writes:
rsync --fuzzy seems to break BackupPC; fileListReceived
breaks. Could that be fixed easily?
Sorry, but no. In 4.x I hope to implement a solution that will be
better than even --fuzzy, although it might not be
On Thu, Sep 30, 2010 at 06:55:18PM +0200, Leif Gunnar Einmo wrote:
I have searched the forum for a solution, but couldn't find it :(
I have a running BackupPC that has around 400 GB of data, 90%
loaded. Now I need to expand the storage with a NAS as the server
is full. I have mounted
On Fri, Oct 01, 2010 at 11:52:18AM +0200, Boniforti Flavio wrote:
Hello list.
I tried to put back on a remote server, via SSH tunnel and rsync, some
files. What I got was this:
Can we get the complete config for this host? And any global rsync
or ssh options? And the contents of the tunnel
On Mon, Oct 04, 2010 at 08:56:49AM -0400, Chris Purves wrote:
I recently copied the pool to a new hard disk following the
Copying the pool instructions from the main documentation. The
documentation says to copy the 'cpool', 'log', and 'conf'
directories using any technique and the 'pc'
OK, so, BackupPC says it's using 5590.63GB. This number is rapidly
growing. (it's all wrong; we're actually using 5848.6GB, but that's
not what this mail is about).
How do I find out which backups are using a lot of disk? We'd like
to see if there's a problem with our retention policy,
On Mon, Oct 04, 2010 at 03:25:03PM -0400, Timothy J Massey wrote:
Robin Lee Powell rlpow...@digitalkingdom.org wrote on 10/04/2010
03:15:29 PM:
How do I find out which backups are using a lot of disk? We'd
like to see if there's a problem with our retention policy,
especially
On Mon, Oct 04, 2010 at 03:32:23PM -0400, Timothy J Massey wrote:
Robin Lee Powell rlpow...@digitalkingdom.org wrote on 10/04/2010
03:28:23 PM:
On Mon, Oct 04, 2010 at 03:25:03PM -0400, Timothy J Massey
wrote:
Robin Lee Powell rlpow...@digitalkingdom.org wrote on
10/04/2010 03:15:29
On Sun, Dec 06, 2009 at 01:13:57AM +0100, Matthias Meyer wrote:
Hi,
I have a new release of the BackupPC_deleteBackup script.
Unfortunately I can't put it into the wiki
(http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=How_to_delete_backups).
Jeffrey would do that but I didn't
I couldn't find anything on the wiki describing the fields of that
file, so:
If I want to know the total on-disk space used by a particular host,
do I want to add up the total column in the backups file (i.e.
field 6, counting from 1) or the total column minus the existing
column (column 8), or
Is the format written up anywhere?
-Robin
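A small sketch computing both candidate numbers per host, using the column
positions discussed above (field 6 = total size in bytes, field 8 = size of
files that already existed in the pool); the host name and TopDir path are
assumptions:

awk '{ total += $6; new += $6 - $8 }
     END { printf "total %.1f GiB, new-to-pool %.1f GiB\n", total/2^30, new/2^30 }' \
    /var/lib/backuppc/pc/somehost/backups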
On Tue, Oct 05, 2010 at 11:36:08AM -0700, Robin Lee Powell wrote:
I couldn't find anything on the wiki describing the fields of that
file, so:
If I want to know the total on-disk space used by a particular host,
do I want to add up the total
On Thu, Oct 07, 2010 at 10:45:42AM -0700, Craig Barratt wrote:
Robin,
Is the format written up anywhere?
Yes, it's in the documentation:
http://backuppc.sourceforge.net/faq/BackupPC.html#storage_layout
Scroll down to backups.
*blink*
I *swear* I looked. -_-
Unfortunately, it
#this script contributed by Matthias Meyer
#note that if your $Topdir has been changed, the script will ask you
#the new location.
#
# Significant modifications by Robin Lee Powell, aka
# rlpow...@digitalkingdom.org, all of which are placed into the public domain.
#
usage=\
Usage: $0 -c client [-d backupnumber -b before data [-f
Whoops. More testing, new script. :)
-Robin
#! /bin/bash
#this script contributed by Matthias Meyer
#note that if your $Topdir has been changed, the script will ask you
#the new location.
#
# Significant modifications by Robin Lee Powell, aka
# rlpow...@digitalkingdom.org, all of which
On Sun, Oct 10, 2010 at 11:36:02AM -0400, Carl T. Miller wrote:
I found what looks like an excellent tool for managing
backuppc pools. It allows the deletion of files or
directories from archives.
http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=BackupPC_DeleteFile
The
I have four hosts with identical configuration, as far as I know.
All of them have:
$Conf{MaxBackupPCNightlyJobs} = 8;
On one, and only one as far as I can tell, running:
sudo -u backuppc BackupPC_serverMesg BackupPC_nightly run
results in:
$ ps -aef | grep -i nigh
backuppc 6375
On Thu, Oct 14, 2010 at 12:06:40PM -0700, Robin Lee Powell wrote:
I have four hosts with identical configuration, as far as I know.
All of them have:
$Conf{MaxBackupPCNightlyJobs} = 8;
On one, and only one as far as I can tell, running:
sudo -u backuppc BackupPC_serverMesg
On Thu, Oct 14, 2010 at 03:19:31PM -0500, Les Mikesell wrote:
On 10/14/2010 2:06 PM, Robin Lee Powell wrote:
I have four hosts with identical configuration, as far as I know.
All of them have:
$Conf{MaxBackupPCNightlyJobs} = 8;
On one, and only one as far as I can tell, running
On Thu, Oct 14, 2010 at 10:04:27PM -0700, Craig Barratt wrote:
Robin writes:
I have four hosts with identical configuration, as far as I know.
All of them have:
$Conf{MaxBackupPCNightlyJobs} = 8;
On one, and only one as far as I can tell, running:
sudo -u backuppc
On Thu, Oct 07, 2010 at 11:55:41AM -0400, Dan Pritts wrote:
I agree with your general praise, BackupPC works very well for us
in our environment, which is maybe half your size. Due to your
large size, I'll leave you with one thought:
One concern I've always had with backuppc is what would
On Fri, Oct 15, 2010 at 08:57:52AM +0100, James Wells wrote:
I find that BackupPC is good for general data retention, but bad
for bare metal restores where you have to reinstall the OS first
and then get data back on there.
Fortunately, I have no need for that. :)
-Robin
--
It happened again last night during the actual run:
2010-10-15 01:00:00 Running 8 BackupPC_nightly jobs from 0..15 (out of 0..15)
2010-10-15 01:00:01 Running BackupPC_nightly -m 0 31 (pid=30718)
2010-10-15 07:56:33 BackupPC_nightly now running BackupPC_sendEmail
2010-10-15 08:00:58 Finished
On Sun, Oct 24, 2010 at 09:59:16PM -0400, southasia wrote:
Finally installed BackupPC successfully. I can access the web
interface inside the office. If I would like to access the office
BackupPC from home over the Internet, how should I configure it?
Please advise me, or if there is a link for
On Fri, Oct 15, 2010 at 12:48:07PM -0700, Robin Lee Powell wrote:
It happened again last night during the actual run:
2010-10-15 01:00:00 Running 8 BackupPC_nightly jobs from 0..15 (out of 0..15)
2010-10-15 01:00:01 Running BackupPC_nightly -m 0 31 (pid=30718)
2010-10-15 07:56:33
On Tue, Nov 02, 2010 at 09:39:22AM -0500, Les Mikesell wrote:
Has anyone run across 'brackup'
(http://search.cpan.org/~bradfitz/Brackup-1.10/lib/Brackup.pm)? It is
just a command line tool, not much like backuppc, but it appears to have
some very interesting concepts for the backend
On Sun, Nov 07, 2010 at 08:41:10AM -0800, auto316...@hushmail.com
wrote:
I am new to backuppc and would appreciate some pointers on how to
get started. I don't see an executable file in the distribution
-- just a zip file. And I would like to know what is the first
step that I need to do so
On Tue, Nov 09, 2010 at 04:11:15PM +0100, Boniforti Flavio wrote:
Hello Pavel.
for huge dirs with millions of files we got almost an order of
magnitude faster runs with the tar mode instead of rsync (which
eventually consumed all the memory anyways :) )
How would I be able to use tar
On Tue, Nov 09, 2010 at 09:37:01AM -0600, Richard Shaw wrote:
On Tue, Nov 9, 2010 at 9:27 AM, Robin Lee Powell
rlpow...@digitalkingdom.org wrote:
On Tue, Nov 09, 2010 at 04:11:15PM +0100, Boniforti Flavio
wrote:
Hello Pavel.
for huge dirs with millions of files we got almost an order
On Tue, Nov 09, 2010 at 05:13:58PM +0100, Boniforti Flavio wrote:
Well, by hand you'd do:
ssh host 'tar -czvf - /dir' > /backups/foo.tgz
But wouldn't this create *one huge tarball*??? That's not what I'd
like to get...
It was an example for your benefit in future; it has nothing to do
Figured it out. The problem was that I have BackupPC set to run 8
nightlies at once (which usually takes 12 or more hours), but it was
ending up in a state where only one was running at a time.
This may be the longest, most detailed debugging writeup I've ever
done in 15 years of being a
On Thu, Nov 25, 2010 at 09:29:34AM +0000, Tyler J. Wagner wrote:
Robin,
Thank you for the awesome write-up.
On Wed, 2010-11-24 at 18:11 -0800, Robin Lee Powell wrote:
1. Don't run a non-nightly job from the CmdQueue when there are
nightly jobs running, *EVER*.
Unfortunately, that's
On Sun, Nov 28, 2010 at 07:01:45PM -0800, Craig Barratt wrote:
Robin,
Thanks for the detailed analysis. I agree - it's pretty broken.
Preventing queuing a second nightly request when one is running
will at least avoid the problem. However, I don't recommend
killing the current running
On Sun, Nov 28, 2010 at 10:35:20PM -0800, Robin Lee Powell wrote:
On Sun, Nov 28, 2010 at 07:01:45PM -0800, Craig Barratt wrote:
Robin,
Thanks for the detailed analysis. I agree - it's pretty broken.
Preventing queuing a second nightly request when one is running
will at least
On Tue, Nov 30, 2010 at 03:18:46PM -0500, Timothy J Massey wrote:
Robin Lee Powell rlpow...@digitalkingdom.org wrote on 11/25/2010
01:12:50 PM:
The problem is that calling BackupPC_serverMesg
BackupPC_nightly run when the regular nightlies are already
running, or calling it twice
On Thu, Dec 02, 2010 at 01:27:17PM +0100, Oliver Freyd wrote:
Hello,
I've been a happy user of BackupPC for a few years,
running an old installation that was created on some version of
SuSE Linux and then ported over to Debian lenny.
The pool is a reiserfs3 on LVM, about 300GB size,
It would be really nice to be able to tell the backuppc server to
finish all current backups without queuing any others, and then
stop/exit completely.
I know I can sort-of do this by disabling backups for each host, but
that's a really big pain, and from the reading of the queuing system
I did
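A rough approximation of that drain-and-stop behaviour as things stand today,
with assumed Debian-style paths and per-host override file layout; the only
BackupPC-specific pieces are $Conf{BackupsDisable} and the server reload
message:

# stop queuing new backups for every host (adjust the per-host config path to your layout)
for h in $(awk '!/^#/ && NF && $1 != "host" {print $1}' /etc/backuppc/hosts); do
    echo '$Conf{BackupsDisable} = 1;' >> "/etc/backuppc/pc/${h}.pl"
done
sudo -u backuppc BackupPC_serverMesg server reload
# once the status page shows no running dumps, stop the daemon
service backuppc stop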