On Tue, Sep 28, 2010 at 11:16:51PM -0700, Craig Barratt wrote:
> Robin writes:
>
> > We add a lot of stuff automatically to our backuppc configs, and
> > manually going into the UI and doing the config reload is easy
> > to forget. Can it be done on the command line without breaking
> > any backups (i.e. without restarting)?
We add a lot of stuff automatically to our backuppc configs, and
manually going into the UI and doing the config reload is easy to
forget. Can it be done on the command line without breaking any
backups (i.e. without restarting)?
-Robin
--
http://singinst.org/ : Our last, best hope for a fant
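For reference, the daemon can be asked to reload its config from the
command line via BackupPC_serverMesg; a minimal sketch, assuming a
standard install with the binaries under /usr/share/backuppc/bin:

  su - backuppc -c "/usr/share/backuppc/bin/BackupPC_serverMesg server reload"

The running daemon re-reads the config without restarting, so
in-flight backups should be unaffected.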
Another question: The script, running with "-c", failed eventually
at this line:
my $err = $bpc->ServerConnect($Conf{ServerHost}, $Conf{ServerPort});
My ServerPort is set to -1; does it need to be set for this to work?
Could you not just call BackupPC_serverMesg instead? Perhaps I
should h
On Thu, Sep 23, 2010 at 05:52:26PM +0200, Marcus Hardt wrote:
> [..]
>
> > I think it does the basic permissions that map to unix
> > equivalents. It doesn't preserve acls, nor does it have any way
> > to work around the existing ones - so you may have files that
> > you can read in the backups b
On Wed, Sep 22, 2010 at 02:31:29AM -0400, Jeffrey J. Kosowsky wrote:
> Honestly, I never really looked into whether it enforces the
> constraints you mention above. But looking at the code, it seems
> that these constraints are enforced only by the BackupPC main
> routine (i.e. daemon) itself (whi
There is something *very* wrong with either the tar used to make the
archive, or the tar used to restore. I wouldn't trust anything it
outputs at all.
What version of tar on both ends?
Have you tried getting a zip archive from the GUI instead? Or using
BackupPC_zipCreate on the CLI?
-Robin
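If it helps, a minimal BackupPC_zipCreate sketch, with hypothetical
host, share, and directory names (-n -1 selects the most recent
backup):

  su - backuppc -c "/usr/share/backuppc/bin/BackupPC_zipCreate -h somehost -n -1 -s /someshare somedir" > /tmp/restore.zip

If the zip extracts cleanly where the tar did not, the pool data is
fine and the tar side is the problem.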
On
On Mon, Sep 20, 2010 at 11:08:17AM +0200, Sorin Srbu wrote:
> >-----Original Message-----
> >From: Robin Lee Powell [mailto:rlpow...@digitalkingdom.org]
> >Sent: Monday, September 20, 2010 3:47 AM
> >To: sorin.s...@orgfarm.uu.se; General list for user discussion, questions
On Mon, Sep 20, 2010 at 09:56:33PM -0700, Robin Lee Powell wrote:
>
> We've got 3 machines with 3.2T, 4.2T, and 851G of backups, all
> gathered via rsync over ssh *across the network between distant
> data centers* (the backups are in a totally different location
> than the s
We've got 3 machines with 3.2T, 4.2T, and 851G of backups, all
gathered via rsync over ssh *across the network between distant data
centers* (the backups are in a totally different location than the
servers), each server with 150+ machines to back up every day... and
it's actually working.
I wasn'
On Fri, Sep 17, 2010 at 01:48:55PM +0200, Sorin Srbu wrote:
> Wasn't there a gotcha running the nightly cleanup manually?
Was there? I'd very much like to hear about it.
What's the right way to do that?
-Robin
--
http://singinst.org/ : Our last, best hope for a fantastic future.
Lojban (htt
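The safest route I know of is to queue it through the daemon rather
than running BackupPC_nightly by hand, so it cannot collide with
running dumps or BackupPC_link; a sketch, assuming a standard
install:

  su - backuppc -c "/usr/share/backuppc/bin/BackupPC_serverMesg BackupPC_nightly run"

Running the script directly while the daemon is also writing to the
pool is presumably the gotcha in question.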
On Fri, Sep 17, 2010 at 07:15:49PM +0800, Chris Hoy Poy wrote:
> Hi Robin,
>
>- disk fragmentation coupled with minimum file size allocation
>issues (?) ie lots of the same large file updated in place
>might lead to this scenario, I think?
I guess? Don't know how I'd check that.
>
I have a fairly large (171 hosts) backup environment that seems to
be using rather more disk than it should.
GUI says: Pool is 3055.59GB comprising 7361233 files and 4369
directories (as of 9/16 01:33),
df says:
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/local-backups
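A quick cross-check of where df and the GUI disagree, assuming the
default top directory of /var/lib/backuppc:

  du -sk /var/lib/backuppc/pool /var/lib/backuppc/cpool /var/lib/backuppc/pc

du counts each hardlinked file only once per invocation, so the pc
number is roughly the space *not* shared with the pool (e.g. new
backups that BackupPC_link hasn't processed yet); that, plus logs and
filesystem overhead, shows up in df but not in the GUI's pool figure.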
On Thu, May 27, 2010 at 04:36:46AM -0400, mox wrote:
>
> Hello
>
> I have a Suse 11.1 box with a service that (I don't know why)
> stops every day at the same time. I'm trying to find out why it
> comes to a halt, but in the meantime I would like to restart it
> automatically.
You might want to look at various i
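One blunt option is a cron watchdog; a sketch with a hypothetical
service name, assuming SysV init scripts as on Suse 11.1:

  # /etc/crontab: restart mydaemon if it has died
  */5 * * * * root pgrep -x mydaemon >/dev/null || /etc/init.d/mydaemon start

pgrep only fails when no such process exists, so a healthy daemon is
left alone.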
On Tue, May 25, 2010 at 03:07:13PM -0500, Les Mikesell wrote:
> On 5/25/2010 2:49 PM, Robin Lee Powell wrote:
> >
> >>
> >> Without looking at the code, I'd guess that it would go through
> >> the rest of the list first before retrying failed jobs - but
On Tue, May 25, 2010 at 02:45:00PM -0500, Les Mikesell wrote:
> On 5/25/2010 2:19 PM, Robin Lee Powell wrote:
> >
> >>> It should take about 4 days to do all the jobs. There's one
> >>> job that is 10 days old. Therefore I conclude that it's not
>
On Tue, May 25, 2010 at 02:08:49PM -0500, Les Mikesell wrote:
> On 5/25/2010 1:55 PM, Robin Lee Powell wrote:
> >
> >>> I don't know, but regardless there are always at least 5 of
> >>> those, is what I'm saying.
> >>
> >> But are they
On Tue, May 25, 2010 at 01:51:30PM -0500, Les Mikesell wrote:
> On 5/25/2010 1:29 PM, Robin Lee Powell wrote:
> >
> >>>> That's a fantastic idea! I don't even need to do anything
> >>>> complicated; just use "lockfile /tmp/backuppc"
On Tue, May 25, 2010 at 01:28:19PM -0500, Les Mikesell wrote:
> On 5/25/2010 1:13 PM, Robin Lee Powell wrote:
> >
> >>
> >> That's a fantastic idea! I don't even need to do anything
> >> complicated; just use "lockfile /tmp/backuppc" OSLT, s
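For the record, the lockfile trick as a sketch (lockfile ships with
procmail; the path and job name are placeholders):

  lockfile /tmp/backuppc.lock   # blocks, retrying, until the lock is free
  run-the-serialized-job        # hypothetical: whatever must not overlap
  rm -f /tmp/backuppc.lock      # release for the next run

Since lockfile retries until it can create the file, two jobs wrapped
this way can never run concurrently.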
On Tue, May 25, 2010 at 11:13:44AM -0700, Robin Lee Powell wrote:
> On Wed, May 19, 2010 at 11:14:48AM -0700, Robin Lee Powell wrote:
> > On Tue, May 18, 2010 at 09:30:43PM +0000, John Rouillard wrote:
> > > On Tue, May 18, 2010 at 02:04:46PM -0700, Robin Lee Powell
> > >
On Wed, May 19, 2010 at 11:14:48AM -0700, Robin Lee Powell wrote:
> On Tue, May 18, 2010 at 09:30:43PM +0000, John Rouillard wrote:
> > On Tue, May 18, 2010 at 02:04:46PM -0700, Robin Lee Powell
> > wrote:
> > > A customer we're backing up has a directory with ~500
On Tue, May 18, 2010 at 09:30:43PM +0000, John Rouillard wrote:
> On Tue, May 18, 2010 at 02:04:46PM -0700, Robin Lee Powell wrote:
> > A customer we're backing up has a directory with ~500 subdirs
> > and hundreds of GiB of data. We're using BackupPC in rsync+ssh
>
it to.
>
> On Tue, May 18, 2010 at 4:04 PM, Robin Lee Powell <
> rlpow...@digitalkingdom.org> wrote:
>
> >
> > A customer we're backing up has a directory with ~500 subdirs and
> > hundreds of GiB of data. We're using BackupPC in rsync+ssh m
A customer we're backing up has a directory with ~500 subdirs and
hundreds of GiB of data. We're using BackupPC in rsync+ssh mode.
As a first pass at breaking that up, I made a bunch of separate host
entries like /A/*0, /A/*1, ... (all the dirs have numeric names).
That seems to select the righ
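The usual shape of that setup is one physical client behind several
host entries via ClientNameAlias; a sketch of one per-host config
with hypothetical names (BackupPC host configs are plain Perl):

  # hypothetical per-host config, e.g. custA-0.pl
  $Conf{ClientNameAlias} = 'custA';     # all entries point at the same box
  $Conf{RsyncShareName}  = ['/A/*0'];   # this entry only takes the *0 dirs

Each entry is then scheduled as its own backup, so the ~500 subdirs
get spread across several smaller rsync runs (this relies on the
remote shell expanding the glob).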
Once again, the issue of the giant pre-3.0 rsyncs that backuppc
causes to form has come up, but this time I'm being paid to fix it,
more or less.
If I'm backing up data that I'm sure doesn't have internal hard
links (or I'm sure I don't care), is there any problem with having
backuppc simply spaw
On Wed, Dec 16, 2009 at 01:46:50PM -0800, Robin Lee Powell wrote:
> On Wed, Dec 16, 2009 at 03:13:23PM -0600, Les Mikesell wrote:
> > Robin Lee Powell wrote:
> > >
> > > They're just not *doing* anything. Nothing has errored out;
> > > BackupPC thinks eve
On Wed, Dec 16, 2009 at 03:13:23PM -0600, Les Mikesell wrote:
> Robin Lee Powell wrote:
> >
> > They're just not *doing* anything. Nothing has errored out; BackupPC
> > thinks everything is fine.
> >
> > Some of the places this is happening are very small
One last thing: Stop/Dequeue Backup did the right thing: all parts of
the backup, on both server and client, were torn down correctly.
So, clearly, there was nothing wrong with the communication between
parts as such.
-Robin
--
They say: "The first AIs will be built by the military as weapons.
Sorry, a couple of things I forgot.
On Wed, Dec 16, 2009 at 11:15:37AM -0800, Robin Lee Powell wrote:
> Anyways, the one that has the problem consistently *also* always has
> it in exactly the same place; I was watching it in basically every
> way possible, so here comes the debugging s
I want to start by saying that I appreciate all the help and
suggestions y'all have given on something that's obviously not your
problem. :) Unfortunately, it looks like this problem is (1) far
more interesting than I thought and (2) might be in BackupPC itself.
On Tue, Dec 15, 2009 at 11:29:46A
On Tue, Dec 15, 2009 at 05:42:55PM -0800, Robin Lee Powell wrote:
> Just to give a sense of scale here:
>
> # date ; find /pictures -xdev -type f -printf "%h\n" >/tmp/dirs ; date
> Tue Dec 15 12:50:57 PST 2009
> Tue Dec 15 17:26:44 PST 2009
>
> (something
On Tue, Dec 15, 2009 at 06:27:41AM -0600, Carl Wilhelm Soderstrom
wrote:
> On 12/14 04:25 , Robin Lee Powell wrote:
> > Not with large trees it isn't. I have 3.5 million files, and
> > more than 300GiB of data, in one file system. The last
> > incremental took *twenty o
On Tue, Dec 15, 2009 at 02:33:06PM +0100, Holger Parplies wrote:
> Robin Lee Powell wrote on 2009-12-15 00:22:41 -0800:
> > Oh, I agree; in an ideal world, it wouldn't be an issue. I'm
> > afraid I don't live there. :)
>
> none of us do, but you're havi
On Mon, Dec 14, 2009 at 08:20:07PM -0600, Les Mikesell wrote:
> Robin Lee Powell wrote:
> >
> > Asking rsync, and ssh, and a pair of firewalls and load
> > balancers (it's complicated) to stay perfectly fine for almost a
> > full day is really asking a whole hell o
On Mon, Dec 14, 2009 at 02:08:31PM -0500, Jeffrey J. Kosowsky wrote:
> Robin Lee Powell wrote at about 10:10:17 -0800 on Monday, December 14, 2009:
> > Do you actually see a *problem* with it, or are you just
> > assuming it won't work because it seems too easy?
>
>
On Mon, Dec 14, 2009 at 02:17:01PM -0500, Jeffrey J. Kosowsky wrote:
> Robin Lee Powell wrote at about 10:12:28 -0800 on Monday, December 14, 2009:
> > On Mon, Dec 14, 2009 at 07:57:10AM -0600, Les Mikesell wrote:
> > >
> > > You can, however, explicitly
On Mon, Dec 14, 2009 at 07:57:10AM -0600, Les Mikesell wrote:
> Robin Lee Powell wrote:
> > I've only looked at the code briefly, but I believe this
> > *should* be possible. I don't know if I'll be implementing it,
> > at least not right away, but it shoul
On Sun, Dec 13, 2009 at 11:56:59PM -0500, Jeffrey J. Kosowsky wrote:
> Unfortunately, I don't think it is that simple. If it were, then
> rsync would have been written that way back in version .001. I
> mean there is a reason that rsync memory usage increases as the
> number of files increases (eve
I've only looked at the code briefly, but I believe this *should* be
possible. I don't know if I'll be implementing it, at least not
right away, but it shouldn't actually be that hard, so I wanted to
throw it out so someone else could run with it if ey wants.
It's an idea I had about rsync resum
On Sun, Dec 13, 2009 at 03:46:50PM -0500, Jeffrey J. Kosowsky wrote:
> Robin Lee Powell wrote at about 15:07:04 -0800 on Saturday,
> December 12, 2009:
> >
> > It seems to me that rsync's memory bloat issues, which have
> > been discussed here many times
On Thu, Dec 03, 2009 at 08:13:47PM +, Tyler J. Wagner wrote:
> Are you sure this isn't a ClientTimeout problem? Try increasing
> it and see if the backup runs for longer.
Just as a general comment (I've been reviewing all the SIGPIPE mails
and people keep saying that), no. SIGPIPE means the
It seems to me that rsync's memory bloat issues, which have been
discussed here many times, would be basically fixed by making
File::RsyncP and backuppc itself support rsync 3.0's incremental
file transfer stuff. Is anyone working on that?
-Robin
--
They say: "The first AIs will be built by t
Looking at the code ("sub BackupFailCleanup" in BackupPC_dump), it
would appear that partial backups are only made for fulls, not
incrementals. Is there a reason for this?
-Robin
--
They say: "The first AIs will be built by the military as weapons."
And I'm thinking: "Does it even occur to
On Thu, Feb 07, 2008 at 03:04:40PM -0500, Joe Krahn wrote:
> Robin Lee Powell wrote:
> > Y'all have made it clear that rsync -H doesn't work too well
> > with backuppc archives; what about cpio? Does it do a decent
> > job of preserving hard links without consumin
Y'all have made it clear that rsync -H doesn't work too well with
backuppc archives; what about cpio? Does it do a decent job of
preserving hard links without consuming all your RAM?
-Robin
--
Lojban Reason #17: http://en.wikipedia.org/wiki/Buffalo_buffalo
Proud Supporter of the Singularity In
On Thu, Feb 07, 2008 at 08:40:29AM -0600, Carl Wilhelm Soderstrom
wrote:
> On 02/06 05:38 , Les Mikesell wrote:
> > Instead of running backuppc locally to your source data, just
> > have one machine that has copy of everything.
>
> Look into tools like rsnapshot or rdiff-backup. It sounds like t
On Wed, Feb 06, 2008 at 03:19:32PM -0600, Les Mikesell wrote:
> Or, if you want a local copy too and don't want to burden the
> target with 2 runs, just do a straight uncompressed rsync copy
> locally, then let your remote backuppc run against that to save
> your compressed history on an encrypted
On Wed, Feb 06, 2008 at 03:19:32PM -0600, Les Mikesell wrote:
> Robin Lee Powell wrote:
> >
> > Again: not being able to reasonably mirror the backup system is a
> > Real Problem; do you have any other ideas as to how to fix it?
>
> I do it locally with raid1 mirrorin
On Wed, Feb 06, 2008 at 02:35:21PM -0600, Carl Wilhelm Soderstrom
wrote:
> On 02/06 12:20 , Robin Lee Powell wrote:
> > Yes, but are you trying to maintain a remote sync over a DSL
> > line? :D
>
> no; because I have all those files. :)
>
> If I have an offsite backu
In the past, I've done my remote mirrors of my backuppc backups one
of two ways:
1. Run tarCreate or whatever, and create giant tarballs of the
things I've backed up. In the past, this has been totally
inappropriate for remote mirroring, because encrypting the file
would kill rsync's ability to
On Wed, Feb 06, 2008 at 02:11:16PM -0600, Carl Wilhelm Soderstrom
wrote:
> On 02/06 11:14 , Robin Lee Powell wrote:
> > On Wed, Feb 06, 2008 at 12:03:03PM -0600, Carl Wilhelm
> > Soderstrom wrote:
> > > On 02/06 09:39 , Robin Lee Powell wrote:
> > > > > My ba
On Wed, Feb 06, 2008 at 02:03:04PM -0600, Les Mikesell wrote:
> Robin Lee Powell wrote:
>
> >> > > This reminds me: is there some fundamental reason backuppc
> >> > > can't use symlinks? It would make so many things like
> >> > >
On Wed, Feb 06, 2008 at 02:20:09PM -0500, Paul Fox wrote:
> Robin Lee Powell <[EMAIL PROTECTED]> wrote:
> > On Wed, Feb 06, 2008 at 09:33:47AM -0800, Robin Lee Powell
> > wrote:
> > > This reminds me: is there some fundamental reason backuppc
> > > can
On Wed, Feb 06, 2008 at 09:33:47AM -0800, Robin Lee Powell wrote:
> This reminds me: is there some fundamental reason backuppc can't
> use symlinks? It would make so many things like this *so* much
> easier. It's such a great package otherwise; this is the only thing
> that'
On Wed, Feb 06, 2008 at 12:03:03PM -0600, Carl Wilhelm Soderstrom
wrote:
> On 02/06 09:39 , Robin Lee Powell wrote:
> > > My backuppc pool and pc directories together have 2442024
> > > files, and 10325584 KiB of data.
>
> If I'm reading that correctly, that'
On Wed, Feb 06, 2008 at 09:33:47AM -0800, Robin Lee Powell wrote:
> On Wed, Feb 06, 2008 at 11:22:10AM -0600, Les Mikesell wrote:
> > Robin Lee Powell wrote:
> > > On Wed, Feb 06, 2008 at 10:20:46AM +0100, Nils Breunese
> > > (Lemonbit) wrote:
> > >> I
On Wed, Feb 06, 2008 at 11:22:10AM -0600, Les Mikesell wrote:
> Robin Lee Powell wrote:
> > On Wed, Feb 06, 2008 at 10:20:46AM +0100, Nils Breunese
> > (Lemonbit) wrote:
> >> It is generally believed on this list (I believe) that it's not
> >> feasible to use
On Wed, Feb 06, 2008 at 10:20:46AM +0100, Nils Breunese (Lemonbit)
wrote:
> It is generally believed on this list (I believe) that it's not
> feasible to use something as 'high-level' as rsync to replicate
> BackupPC's pool. The amount of memory needed by rsync will just
> explode because of all th
I've been using my own scripts
http://digitalkingdom.org/~rlpowell/hobbies/backups.html to remotely
mirror backuppc's data in an encrypted fashion.
The problem is, the time rsync takes seems to keep growing. I
expect this to continue more-or-less without bound, and it's already
pretty onerous.
A feature I'd really like, and would be willing to give gifts in
return for, would be something like this:
User touches a file named ".donotbackup" in a directory. Backuppc
notices this and does not backup that directory. The sysadmin
doesn't have to alter the system include list.
-Robin
--
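For comparison, the closest existing mechanism is the static exclude
list in the per-host config, which is exactly what the sysadmin has
to keep editing today; a sketch with hypothetical paths (the hash key
is the share name, or '*' for all shares):

  $Conf{BackupFilesExclude} = {
      '*' => ['/home/*/scratch', '/tmp'],   # hypothetical excludes
  };

A .donotbackup marker would let users opt directories out without
anyone touching this list.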
On Tue, Dec 20, 2005 at 07:33:42AM -0600, Carl Wilhelm Soderstrom
wrote:
> On 12/20 12:42 , Robin Lee Powell wrote:
> > su - backuppc -c "/usr/share/backuppc/bin/BackupPC_tarCreate -h ball -n 555 -s ball2 \"Games/MoOII\"" >\!/tmp/crap.tar
> >
On Tue, Dec 20, 2005 at 03:39:02AM -0800, Craig Barratt wrote:
> Robin Lee Powell writes:
>
> > On doing the following:
> >
> > su - backuppc -c "/usr/share/backuppc/bin/BackupPC_tarCreate -h ball -n 555 -s ball2 \"Games/MoOII\"" >\!/tmp/
On doing the following:
su - backuppc -c "/usr/share/backuppc/bin/BackupPC_tarCreate -h ball -n 555 -s ball2 \"Games/MoOII\"" >\!/tmp/crap.tar
I get:
/usr/share/backuppc/bin/BackupPC_tarCreate: bad share or directory
'ball2/Games/MoOII'
But:
$ find /var/lib/backuppc/pc/ball/*/fball2/fGames/
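A hedged debugging step, assuming the default layout under
/var/lib/backuppc: list what backup 555 actually contains, since
share names are stored f-mangled on disk:

  ls /var/lib/backuppc/pc/ball/555/

If there is no fball2 entry there, share ball2 simply isn't present
in dump 555, which would explain the "bad share or directory" error.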