You might consider (a) image backup (in concert with incremental), or (b)
journaled incremental or -INCRBYDATE during the week (in concert with image
and/or full progressive incremental on the weekend).  Some folks like doing
a monthly image backup (on a weekend) for mission-critical file servers, then
daily journaled incremental and weekly full progressive incremental.
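A rough sketch of that weekly cycle, expressed as TSM administrative schedules run from a dsmadmc macro (the domain name, schedule names, and file-space path are made-up placeholders, and exact syntax varies by TSM level; journal-based backup also assumes the journal daemon is configured on the client):

```
/* Monthly image backup of a mission-critical volume (hypothetical names) */
define schedule SERVERS MONTHLY_IMAGE action=imagebackup -
  objects="/export/home" starttime=23:00 dayofweek=saturday

/* Weekday incremental -- runs journal-based if the client journal is active */
define schedule SERVERS DAILY_INCR action=incremental -
  starttime=22:00 dayofweek=weekday

/* Weekend full progressive incremental */
define schedule SERVERS WEEKLY_INCR action=incremental -
  starttime=21:00 dayofweek=sunday
```

You'd then associate the file-server nodes with each schedule via DEFINE ASSOCIATION as usual.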

You should get 5-10 GB/hr on a large file server with lots of files; I've
done 12 GB/hr on a benchmark-configured system (that was on NT, before Win2K
-- which some report should be faster).  The key issues are (a) TSM server
speed in handling large quantities of files -- set your aggregate larger
(they recently increased the max. transaction size to 2 GB), and (b) file
server capability in processing through its directories (Unix is generally
faster than Win2K).  Limiting each file system to under 1 million
files/directories (and under 200 GB total size) helps... smaller becomes
faster.
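For the aggregate/transaction tuning, the usual knobs are the client TXNBYTELIMIT option (in KB, so 2097152 is the 2 GB ceiling) and the server TXNGROUPMAX option; the values below are illustrative starting points, not tested recommendations:

```
* dsm.opt (client) -- raise the per-transaction byte limit toward the 2 GB max
TXNBYTELIMIT 2097152

* dsmserv.opt (server) -- allow more files to be grouped per transaction
TXNGROUPMAX 256
```

Larger transactions mean fewer database commits per thousand small files, which is where most of the small-file overhead goes.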

Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Dallas Gill
Sent: Thursday, June 20, 2002 9:27 PM
To: [EMAIL PROTECTED]
Subject: Small files V's Large files


Can anybody share with me the secret to getting good performance with small
files like I get with big files? I know that I will not get the same
performance, but I would like to think that I would get at least half the
throughput that I get with large files. I am getting approx. 1 GB per
minute for large files (20 MB and bigger) and about 1 GB per 10 minutes for
small files. Can anyone help? Thanks, Dallas
