Wow!
I was just going to write about this. I have just turned on gnutar
backups, and I noticed that the estimates are basically full tar runs with
all of the output going to /dev/null. That is what's taking the time:
not getting the files' sizes, but moving all of the data through that
pipe into /dev/null.
Might a ball-park but reliable estimate of the filesystem size be
acquired by using find with its -ls option? A shell script could
translate the excludes file into what find needs (-prune), capture the
size field with cut or awk, and then sum the sizes for a total.
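A minimal sketch of the idea, assuming GNU find and awk; the directory
and the pruned path are placeholders, and a real script would generate
the -prune expressions from the excludes file:

```shell
#!/bin/sh
# Rough backup-size estimate using find instead of a throwaway tar run.
# $1 is the directory to estimate; defaults to /home for illustration.
DIR=${1:-/home}

# Each "-path ... -prune" pair stands in for one entry from the excludes
# file.  In GNU find's -ls output, field 7 is the file size in bytes.
find "$DIR" -path "$DIR/tmp" -prune -o -type f -ls |
    awk '{ total += $7 } END { printf "%d\n", total }'
```

This walks the tree once and never reads file contents, which is where
the tar-to-/dev/null estimate spends its time.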
I think I would be willing to help with this, if it's a viable thing and
all...
I expect there would be idiosyncrasies between the estimates of
tar and the estimates of find, but find would probably be faster...
what do you all think?
--jason
On Fri, Jul 20, 2001 at 09:36:35PM +0100, Colin Smith wrote:
>
> I'm running backups on 3 Linux systems, one of the systems is a Cobalt
> Qube. All the backups are done using GNU tar. It works OK but the
> estimation time on the backups is nasty. I think I'll turn off the
> estimation and just run full dumps every day. The Qube is the slow system
> with the problem being related to the Linux filesystem code which reads
> directory entries; The system is a news server and has a couple of
> directories which have a lot of files.
>
> My question is this. Why run a separate estimate at all? Why not just
> monitor the last couple of backups and extrapolate?
>
> i.e.
>
> Day 1 incremental /home 100mb
> Day 2 incremental /home 110mb
> Day 3 incremental /home 120mb
> Day 4 incremental /home ???mb
>
> --
> Colin Smith
>
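Colin's extrapolation idea above could be sketched in a few lines of
awk, assuming a simple linear trend over the recorded sizes (the three
input values are just his hypothetical Day 1-3 numbers):

```shell
#!/bin/sh
# Predict the next incremental's size by linear extrapolation from the
# last few recorded sizes, one size (in MB) per input line.
printf '100\n110\n120\n' |
    awk 'NR == 1 { first = $1 }            # remember the oldest size
         { last = $1; n = NR }             # track the newest size and count
         END { print last + (last - first) / (n - 1) }'  # add the average daily growth
```

For the 100/110/120 series this predicts 130, matching the table above;
a real implementation would read the sizes from the backup history
instead of a printf.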
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Jason Brooks ~ (503) 641-3440 x1861
Direct ~ (503) 924-1861
System / Network Administrator
Wind River Systems
8905 SW Nimbus ~ Suite 255
Beaverton, Or 97008