Poor backup system design and implementation can compound two problems at
once: excessive tape usage and excessive backup time.
This merits explaining:

Streaming tape drives (virtually all modern drives including Exabyte,
DDS-DAT, AIT, VXA, DLT, LTO etc) have a real-time requirement for
data flow. If the backup system is not able to keep the tape drive buffer
above its low water mark, the drive will start writing extended record
gaps (tape with no user data; the lexicon and semantics vary from
vendor to vendor) in order to avoid stopping the tape motion while
waiting for data. Obviously this consumes more tape than if all data
records were written contiguously without record gaps. The drive firmware
will, eventually, stop the tape motion after a predetermined (or programmable)
delay to conserve tape, while waiting for enough data to arrive in the buffer
before starting to record on tape again. Unfortunately, before starting to record
again, the drive must rewind as small segment of tape and start moving forward
again so the the trailing portion of the recorded data (drive data, not user data) 
can be located and over-written to "splice" the new records onto
the existing records...or words to that effect. If this all happens frequently,
then overall drive performance drops through the floor. Second generation
cartridge drives may have adaptive firmware, large buffers, variable speed
tape transports etc. to reduce the effect of "shoe shining" but it is a
fundamental limitation of streaming tape technology.
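A toy model makes the cost concrete. This is only a sketch of the buffer
dynamics described above; every number in it (buffer size, drive rate,
low-water mark, reposition penalty) is an illustrative assumption, not a
spec for any real drive:

```python
# Toy simulation of a streaming tape drive fed at a fixed source rate.
# All figures below are illustrative assumptions, not drive specifications.

def simulate(source_mb_s, drive_mb_s=6.0, buffer_mb=8.0,
             low_water_mb=1.0, reposition_s=2.0, data_mb=600.0):
    """Return (elapsed_seconds, back_hitch_count) for writing data_mb."""
    buffered = buffer_mb          # assume the drive starts with a full buffer
    written = 0.0
    elapsed = 0.0
    hitches = 0
    step = 0.1                    # simulation time step, in seconds
    while written < data_mb:
        buffered += source_mb_s * step
        if buffered >= low_water_mb:
            # enough data buffered: the drive streams at full speed
            out = min(drive_mb_s * step, buffered)
            buffered -= out
            written += out
        else:
            # buffer underrun: the drive stops, rewinds, and repositions
            # before it can resume writing (the "back-hitch")
            hitches += 1
            elapsed += reposition_s
            # source data keeps arriving while the drive repositions
            buffered += source_mb_s * reposition_s
        buffered = min(buffered, buffer_mb)
        elapsed += step
    return elapsed, hitches
```

With a source faster than the drive (say 8 MB/s into a 6 MB/s drive) there
are zero back-hitches; starve it at 2 MB/s and the hitch count climbs while
elapsed time roughly triples, which is the "through the floor" effect.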

The common sources of data starvation, which cause this shoe shining
(aka back-hitching), are:
        * slow or overloaded network links to data source (remote system)
        * overloaded or inadequate CPU resources on backup system
        * overloaded or inadequate CPU resources on remote systems

The very best backup system has its hard disks attached to the same CPU as
the tape drive(s), an adequately scaled backup CPU with no other applications
running and no network activity which would cause the backup system to become
unresponsive for periods of time. As much as practical, the backup system should
be a dedicated, standalone system. Anything less is a compromise and the administrator
must be very aware of the degree to which the system is being degraded.
Certainly there will be complaints that this is unrealistic. Perhaps, but it
is an undisputed guideline for backup system design.

Degraded backup system performance, whether related to CPU, memory, network,
storage or other applications hogging resources, will have two primary effects,
both at the same time:

        * excessive tape usage
        * very poor backup performance (slow)

Staging data from disk on the remote system to disk on the backup system
can overcome these problems, but it is an extra step (or several, depending
on the situation).
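The staging idea can be sketched in two phases: let the slow, bursty network
copies land on local disk first, then feed the drive in one contiguous pass
at full local-disk speed. A minimal sketch, where an ordinary file stands in
for the tape device (a real setup would write to something like /dev/nst0,
and the paths are purely hypothetical):

```python
# Two-phase "staging" backup sketch. A plain file stands in for the tape
# device; paths and the tape stand-in are illustrative assumptions.
import shutil
from pathlib import Path

def stage_and_write(sources, spool_dir, tape_path):
    spool = Path(spool_dir)
    # Phase 1: slow or bursty copies from the remote system land on
    # local disk first, at whatever rate the network manages.
    staged = []
    for src in sources:
        dst = spool / Path(src).name
        shutil.copyfile(src, dst)
        staged.append(dst)
    # Phase 2: stream the staged data to "tape" in one contiguous pass,
    # at full local-disk speed, so the drive buffer never runs dry.
    with open(tape_path, "wb") as tape:
        for f in staged:
            tape.write(f.read_bytes())
    return len(staged)
```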

Before impugning Retrospect or even the tape drive for poor performance,
it is often worthwhile to test making the same backup (same data, same drives)
with the data locally attached to the backup server.

...more of a speech than anyone wanted to hear.
I'm going to yield back the remainder of my time to the list...

Houston TX

>Hi Backup People,
>I have set up one of my clients with an Ecrix VXA tape drive using the above
>media. It was my understanding that with the hardware compression built into
>the drive turned on I would get the full 66 Gig capacity. My client has just
>rung me to say that Retrospect is requesting a new tape & a quick inspection
>(over the phone via my client) of the original tape in the scheduled backup
>set revealed that it has used 33 Gig. I will have to go & investigate this
>but could someone who knows please tell me if I need to have software
>compression turned on to get 66 Gig out of the tape or is there something
>else making Retrospect ask for another tape.
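On the quoted question: the doubled capacity figure assumes roughly 2:1
hardware compression, which only materializes if the data actually
compresses 2:1; already-compressed data (JPEG, MP3, zip archives) stays
near 1:1 and yields only the native capacity. Back-of-envelope (the 33 GB
native figure matches the quoted tape; the ratios are assumptions):

```python
# Effective tape capacity is native capacity times the compression ratio
# actually achieved on the data -- not the vendor's 2:1 marketing figure.
def effective_capacity_gb(native_gb, compression_ratio):
    return native_gb * compression_ratio

print(effective_capacity_gb(33, 2.0))  # vendor's assumed 2:1 -> 66.0
print(effective_capacity_gb(33, 1.0))  # incompressible data  -> 33.0
```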
