Del has given you the same analysis that I would have; I would arrive at
the same numbers.  Remember, though, that Del is talking about actual data
backed up at 50 MB/sec.  So if the actual data in your database is only 60%
of the 2.8 TB, the total time would scale down accordingly from 16+ hours.
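A quick sketch of that adjustment (assuming "60%" means the database files are only 60% full, so only that fraction of the 2.8 TB actually gets backed up; the 16.3-hour figure is Del's estimate below):

```python
# Scale the full-backup estimate by the fraction of real data.
full_backup_hours = 16.3   # Del's estimate for a full 2.8 TB backup
data_fraction = 0.60       # assumed: only 60% of the database is actual data

adjusted_hours = full_backup_hours * data_fraction
print(f"Adjusted backup time: {adjusted_hours:.1f} hours")  # ~9.8 hours
```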

-----Original Message-----
From: Del Hoobler [mailto:[EMAIL PROTECTED]]
Sent: Friday, March 15, 2002 12:37 PM
To: [EMAIL PROTECTED]
Subject: Re: 2.8TB SQL Server Database Backup


> I have a potential customer running a 2.8 TB SQL Server database on an
> 8-way NT server.  What can I realistically expect to achieve in
> maximum backup throughput using TDP for SQL Server?
>
> Assume the 4-way Solaris TSM server has 8 AIT-2 tape drives dedicated
> to getting this backup performed without any competing resource
> constraints.  FYI AIT-2 run at 6 MBps native and the cartridges are 50
> GB each.

Joshua,

Some of my thoughts on the subject...

In some performance tests that were run, TDP for SQL achieved over 50 MB/sec
with 4 stripes...with plenty of CPU capacity still left on the SQL server.
We believe we were being limited by the I/O subsystem on Windows.

In any case, if the tape drives in this case have a throughput rate of only
6 MB/sec, I think it is possible that we can drive all 8 at that rate for an
aggregate throughput of 48 MB/sec (using 8 stripes). This would translate to
approx 172 GB/hr, which would require 16.3 hrs to complete a full backup of
a 2.8 TB database. If these drives have good hardware compression, then
perhaps the overall throughput can be improved...but it seems that in our
scenario, the I/O subsystem on Windows was the limiting factor.
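The arithmetic above can be sketched as follows (a back-of-envelope estimate
only, using 1 TB = 1000 GB; the native AIT-2 rate and drive count come from
the original question):

```python
# Aggregate-throughput and backup-time estimate for striped tape backup.
drives = 8
rate_per_drive_mb_s = 6.0        # AIT-2 native throughput, MB/sec
db_size_gb = 2800.0              # 2.8 TB database

aggregate_mb_s = drives * rate_per_drive_mb_s       # 48 MB/sec with 8 stripes
gb_per_hour = aggregate_mb_s * 3600 / 1000          # ~172.8 GB/hr
hours = db_size_gb / gb_per_hour                    # ~16.2 hrs

print(f"{aggregate_mb_s:.0f} MB/sec aggregate, "
      f"{gb_per_hour:.0f} GB/hr, {hours:.1f} hrs for a full backup")
```

Hardware compression, as noted, would raise the effective per-drive rate and
shorten the estimate proportionally.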

*** Disclaimer: I am not guaranteeing anything with these numbers... They
are simply a statement of what we saw during performance testing.

I hope they help.

Thanks,

Del

----------------------------------------------------

Del Hoobler
IBM Corporation
[EMAIL PROTECTED]

- Leave everything a little better than you found it.
- Smile a lot: it costs nothing and is beyond price.
