On Wed, Dec 16, 2009 at 03:13:23PM -0600, Les Mikesell wrote:
> Robin Lee Powell wrote:
> >
> > They're just not *doing* anything. Nothing has errored out; BackupPC
> > thinks everything is fine.
> >
> > Some of the places this is happening are very small backups that
> > usually take a matter of minutes. [...]
Robin Lee Powell wrote:
>
> They're just not *doing* anything. Nothing has errored out; BackupPC
> thinks everything is fine.
>
> Some of the places this is happening are very small backups that
> usually take a matter of minutes.
>
> Suddenly this isn't looking like a networking problem anymore. [...]
One last thing: Stop/Dequeue Backup did the right thing: all parts of
the backup, on both server and client, were torn down correctly.
So, clearly, there was nothing wrong with the communication between
parts as such.
-Robin
Sorry, a couple of things I forgot.
On Wed, Dec 16, 2009 at 11:15:37AM -0800, Robin Lee Powell wrote:
> Anyways, the one that has the problem consistently *also* always has
> it in exactly the same place; I was watching it in basically every
> way possible, so here comes the debugging stuff. [...]
I want to start by saying that I appreciate all the help and
suggestions y'all have given on something that's obviously not your
problem. :) Unfortunately, it looks like this problem is (1) far
more interesting than I thought and (2) might be in BackupPC itself.
On Tue, Dec 15, 2009 at 11:29:46AM [...]
Robin Lee Powell wrote:
> RedHat GFS *really* doesn't like directories with large numbers of
> files. It's not a big fan of stat() calls, either.
Well, a networked cluster filesystem is no fun to back up and might very
well be the bottleneck.
Ralf
---
Robin Lee Powell wrote:
> On Tue, Dec 15, 2009 at 05:42:55PM -0800, Robin Lee Powell wrote:
>> Just to give a sense of scale here:
>>
>> # date ; find /pictures -xdev -type f -printf "%h\n" >/tmp/dirs ; date
>> Tue Dec 15 12:50:57 PST 2009
>> Tue Dec 15 17:26:44 PST 2009
>>
>> (something I ran to try to figure out how to partition [...]
On 12/16 06:38 , Carl Wilhelm Soderstrom wrote:
> On 12/15 05:42 , Robin Lee Powell wrote:
> > The other one isn't even close to finishing, as far as I can tell.
> > In the face of it taking nigh-on 5 hours just to *walk the tree*,
> > from the local host, I haven't been focusing on little things like ssh encryption choices too much. :)
On 12/15 05:42 , Robin Lee Powell wrote:
> The other one isn't even close to finishing, as far as I can tell.
> In the face of it taking nigh-on 5 hours just to *walk the tree*,
> from the local host, I haven't been focusing on little things like
> ssh encryption choices too much. :)
So you're [...]
On Tue, Dec 15, 2009 at 05:42:55PM -0800, Robin Lee Powell wrote:
> Just to give a sense of scale here:
>
> # date ; find /pictures -xdev -type f -printf "%h\n" >/tmp/dirs ; date
> Tue Dec 15 12:50:57 PST 2009
> Tue Dec 15 17:26:44 PST 2009
>
> (something I ran to try to figure out how to partition [...]
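The date/find/date bracketing above can be done a bit more directly with `time`, and the same /tmp/dirs output can show which directories carry the most files, which matters given the earlier complaint that GFS dislikes large directories and stat() calls. A minimal sketch, using the /pictures path from the quoted command (substitute your own mount point):

```shell
# Time the tree walk and capture one line per file's parent directory
# (replaces the date ; find ... ; date bracketing, and also reports
# CPU vs. wall time).
time find /pictures -xdev -type f -printf '%h\n' > /tmp/dirs

# Rank directories by file count; huge counts at the top are the
# likely stat()-heavy offenders on a filesystem like GFS.
sort /tmp/dirs | uniq -c | sort -rn | head
```

The ranking step costs only a sort over the already-collected list, so it adds nothing to the (five-hour) walk itself.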
On Tue, Dec 15, 2009 at 06:27:41AM -0600, Carl Wilhelm Soderstrom
wrote:
> On 12/14 04:25 , Robin Lee Powell wrote:
> > Not with large trees it isn't. I have 3.5 million files, and
> > more than 300GiB of data, in one file system. The last
> > incremental took *twenty one hours*. I have another backup that's 4.5 million files, also more than 300 GiB of data, also in one file system. [...]
Robin Lee Powell wrote:
> On Tue, Dec 15, 2009 at 02:33:06PM +0100, Holger Parplies wrote:
> > Robin Lee Powell wrote on 2009-12-15 00:22:41 -0800:
> > > Oh, I agree; in an ideal world, it wouldn't be an issue. I'm
> > > afraid I don't live there. :)
> >
> > none of us do, but you're having problems. We aren't. [...]
Robin Lee Powell wrote:
> On Tue, Dec 15, 2009 at 02:33:06PM +0100, Holger Parplies wrote:
>
>> Robin Lee Powell wrote on 2009-12-15 00:22:41 -0800:
>>
>>> Oh, I agree; in an ideal world, it wouldn't be an issue. I'm
>>> afraid I don't live there. :)
>>>
>> none of us do, but you're having problems. We aren't. [...]
Robin Lee Powell wrote:
> On Tue, Dec 15, 2009 at 02:33:06PM +0100, Holger Parplies wrote:
>> Robin Lee Powell wrote on 2009-12-15 00:22:41 -0800:
>>> Oh, I agree; in an ideal world, it wouldn't be an issue. I'm
>>> afraid I don't live there. :)
>> none of us do, but you're having problems. We aren't. [...]
On Tue, Dec 15, 2009 at 02:33:06PM +0100, Holger Parplies wrote:
> Robin Lee Powell wrote on 2009-12-15 00:22:41 -0800:
> > Oh, I agree; in an ideal world, it wouldn't be an issue. I'm
> > afraid I don't live there. :)
>
> none of us do, but you're having problems. We aren't.
How many of you [...]
Hi,
Robin Lee Powell wrote on 2009-12-15 00:22:41 -0800 [Re: [BackupPC-users] An
idea to fix both SIGPIPE and memory issues with rsync]:
> On Mon, Dec 14, 2009 at 08:20:07PM -0600, Les Mikesell wrote:
> > Robin Lee Powell wrote:
> > >
> > > Asking rsync, and ssh, and a pair of firewalls and load balancers (it's complicated) to stay perfectly fine for almost a full day is really asking a whole hell of a lot. [...]
On 12/14 04:25 , Robin Lee Powell wrote:
> Not with large trees it isn't. I have 3.5 million files, and more
> than 300GiB of data, in one file system. The last incremental took
> *twenty one hours*. I have another backup that's 4.5 million files,
> also more than 300 GiB of data, also in one file system. [...]
On Mon, Dec 14, 2009 at 08:20:07PM -0600, Les Mikesell wrote:
> Robin Lee Powell wrote:
> >
> > Asking rsync, and ssh, and a pair of firewalls and load
> > balancers (it's complicated) to stay perfectly fine for almost a
> > full day is really asking a whole hell of a lot.
>
> I don't think that should be true. [...]
Robin Lee Powell wrote:
>
> Asking rsync, and ssh, and a pair of firewalls and load balancers
> (it's complicated) to stay perfectly fine for almost a full day is
> really asking a whole hell of a lot.
I don't think that should be true. There's no reason for a program to quit
just because it [...]
Robin Lee Powell wrote at about 16:28:43 -0800 on Monday, December 14, 2009:
> On Mon, Dec 14, 2009 at 02:17:01PM -0500, Jeffrey J. Kosowsky wrote:
> > Robin Lee Powell wrote at about 10:12:28 -0800 on Monday, December 14,
> > 2009:
> > > On Mon, Dec 14, 2009 at 07:57:10AM -0600, Les Mikesell wrote: [...]
On Mon, Dec 14, 2009 at 02:08:31PM -0500, Jeffrey J. Kosowsky wrote:
> Robin Lee Powell wrote at about 10:10:17 -0800 on Monday, December 14, 2009:
> > Do you actually see a *problem* with it, or are you just
> > assuming it won't work because it seems too easy?
>
> The problem I see is that backuppc won't be able to backup hard links on any interrupted or sub-divided [...]
On Mon, Dec 14, 2009 at 02:17:01PM -0500, Jeffrey J. Kosowsky wrote:
> Robin Lee Powell wrote at about 10:12:28 -0800 on Monday, December 14, 2009:
> > On Mon, Dec 14, 2009 at 07:57:10AM -0600, Les Mikesell wrote:
> > >
> > > You can, however, explicitly break the runs at top-level
> > > directories [...]
Shawn Perry wrote at about 23:42:33 -0700 on Sunday, December 13, 2009:
> You can always run some sort of disk de-duplicator after you copy without -H
How does the disk de-duplicator know which duplications are
intentional vs. which ones are not?
Plus a de-duplicator will have similar memory scaling [...]
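To make the "which duplications are intentional?" question concrete: the existing hard links can at least be inventoried before the copy by grouping paths on device and inode, e.g. with GNU find. The /pictures path is a placeholder:

```shell
# List every multiply-linked file keyed by device:inode; lines that
# share a key are the same underlying file (an intentional hard
# link), which a naive post-copy de-duplicator cannot tell apart
# from files that merely happen to have identical content.
find /pictures -xdev -type f -links +1 -printf '%D:%i\t%p\n' | sort
```

With that inventory saved before a copy made without -H, the links could in principle be re-created afterwards; without it, the de-duplicator is guessing.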
Robin Lee Powell wrote at about 10:12:28 -0800 on Monday, December 14, 2009:
> On Mon, Dec 14, 2009 at 07:57:10AM -0600, Les Mikesell wrote:
> > Robin Lee Powell wrote:
> > > I've only looked at the code briefly, but I believe this
> > > *should* be possible. I don't know if I'll be implementing it [...]
Robin Lee Powell wrote at about 10:10:17 -0800 on Monday, December 14, 2009:
> Do you actually see a *problem* with it, or are you just assuming it
> won't work because it seems too easy?
The problem I see is that backuppc won't be able to backup hard links
on any interrupted or sub-divided [...]
On Mon, Dec 14, 2009 at 07:57:10AM -0600, Les Mikesell wrote:
> Robin Lee Powell wrote:
> > I've only looked at the code briefly, but I believe this
> > *should* be possible. I don't know if I'll be implementing it,
> > at least not right away, but it shouldn't actually be that hard,
> > so I wanted to throw it out so someone else could run with it if ey wants. [...]
On Sun, Dec 13, 2009 at 11:56:59PM -0500, Jeffrey J. Kosowsky wrote:
> Unfortunately, I don't think it is that simple. If it were, then
> rsync would have been written that way back in version .001. I
> mean there is a reason that rsync memory usage increases as the
> number of files increases (even [...]
Robin Lee Powell wrote:
> I've only looked at the code briefly, but I believe this *should* be
> possible. I don't know if I'll be implementing it, at least not
> right away, but it shouldn't actually be that hard, so I wanted to
> throw it out so someone else could run with it if ey wants.
>
> [...]
You can always run some sort of disk de-duplicator after you copy without -H
On Sun, Dec 13, 2009 at 9:56 PM, Jeffrey J. Kosowsky
wrote:
> Robin Lee Powell wrote at about 20:18:55 -0800 on Sunday, December 13, 2009:
> >
> > I've only looked at the code briefly, but I believe this *should* be possible. [...]
Robin Lee Powell wrote at about 20:18:55 -0800 on Sunday, December 13, 2009:
>
> I've only looked at the code briefly, but I believe this *should* be
> possible. I don't know if I'll be implementing it, at least not
> right away, but it shouldn't actually be that hard, so I wanted to
> throw it out so someone else could run with it if ey wants. [...]
I've only looked at the code briefly, but I believe this *should* be
possible. I don't know if I'll be implementing it, at least not
right away, but it shouldn't actually be that hard, so I wanted to
throw it out so someone else could run with it if ey wants.
It's an idea I had about rsync resum[...]