>...  The tape
>device is a SUN L9 (DLT8000 with autochanger) 40GB (/dev/rmt/1n).  ...

We'll come back to this later ...

>I have managed to backup Helios and parts of Apollo using dump uncompressed.
>I want to backup the user partitions compressed with gtar.  ...

Why do you want to switch to gtar?  It seems to be the root of your
problems.

And adding software compression (GNU tar or not) is going to kill you.
A single typical 40 GByte full dump here takes 6+ hours just to do the
software compression (on an E4500 or better).  With the amount of data
you have, I'd stick with hardware (tape drive) compression.

And make sure you don't accidentally do both -- writing software-compressed
data to a hardware-compressing drive actually uses more space than either
one by itself.
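You can see the effect without a tape drive.  Data that has already been
software compressed looks random, so a second (hardware) compression pass
can only add framing overhead.  A small simulation using gzip as a stand-in
for the drive's compressor (file names are made up):

```shell
# swcomp.dat stands in for already software-compressed dump output:
# random bytes, i.e. effectively incompressible.
head -c 100000 /dev/urandom > swcomp.dat

# Simulate the "hardware" pass on top of it.
gzip -c swcomp.dat > both.dat

# both.dat comes out *larger* than swcomp.dat.
wc -c swcomp.dat both.dat
```

The same thing happens inside the drive, just with its own compression
algorithm instead of gzip.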

>The Apollo fails to return an estimate and fails to backup. The estimates
>are taking about 6hrs to complete.  ...

Ick.  I'm sure that's because GNU tar does not perform estimates as fast
as ufsdump can.

One possibility, if you're bound and determined to switch :-), would be
to get GNU tar 1.13.19 from alpha.gnu.org.  That seems to be a stable
version and may perform the estimates better.

There is also the calcsize approach, but let's leave that for the moment.

>... Helios partitions that don't fit on the
>holding disk take forever to put on tape.  ...

Using GNU tar or ufsdump?

Could you find the amdump.<nn> file that goes along with the other
files you sent and pass that along, too?  It contains all the timing
information.

>...  There is also an Index Tee [broken pipe] error on Apollo.

That probably corresponds to a tape error indicating you hit EOT.
When Amanda hits EOT and it is running direct to tape (not writing a
file from the holding disk, but getting it directly from the client), it
shuts down the client connection, which triggers the broken pipe message.
Essentially it can be ignored as a side effect.

I noticed the following line in the sendsize*debug file you sent for
the full dump estimate of apollo:/export:

  Total bytes written: 78385356800

Unless you get some amazing compression, you're not going to get this on
a single tape, and since Amanda doesn't yet do tape overflow, it's going
to be a problem.  This is the one reason you might have to go to GNU tar
(although you'd only have to do it for this one file system).
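If you do end up switching that one file system, the usual trick is to
split it into several disklist entries, one per subdirectory, so that each
chunk fits on a tape.  Something along these lines (host, directory names
and the dumptype are hypothetical -- adjust to your configuration):

```
# disklist: split apollo:/export into separate GNU tar entries.
# "user-gtar" would be a dumptype in amanda.conf with program "GNUTAR".
apollo  /export/home1   user-gtar
apollo  /export/home2   user-gtar
apollo  /export/home3   user-gtar
```

Amanda then schedules and dumps each entry independently, so no single
image has to be larger than a tape.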

>How do I speed the size estimation up ...

Don't use GNU tar.

>... without adding holding disk how do I speed up the dumping to tape?  ...

Not sure about that one, yet.  The amdump.<nn> file will explain better
when Amanda is "fast" and when it is "slow".

>...  How long should a backup of this size take (see
>helios_partition and apollo_partition)?

It depends on what level backups Amanda picks (i.e. how much data is
actually backed up).  Here are last night's numbers from one of our
large configurations (Sun E4500 (or better) class machine, 400 MHz CPU's,
DLT7000 drives, large -- 100 GByte -- holding disk, ufsdump/vxdump and
tape/hardware compression):

  Estimate Time (hrs:min)    2:06
  Run Time (hrs:min)         9:10
  Avg Dump Rate (k/s)      4164.5     6446.9     3687.6
  Tape Time (hrs:min)        6:54       1:51       5:03
  Tape Size (meg)         148419.5    39705.7    108713.8
  Avg Tp Write Rate (k/s)  6122.1     6105.8     6128.1

The next two "smaller" (although not by much) configurations get
roughly the same performance.
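As a sanity check, tape size divided by tape time should reproduce the
reported write rate.  For the first column above (the small discrepancy
against 6122.1 is just rounding in the reported times):

```shell
# 148419.5 MB written in 6:54 of tape time -> average rate in k/s.
awk 'BEGIN {
    mb   = 148419.5         # Tape Size (meg), first column
    secs = 6*3600 + 54*60   # Tape Time 6:54, in seconds
    printf "%.1f k/s\n", (mb * 1024) / secs   # -> 6118.4 k/s
}'
```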

>Sheldon Knight

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]