1) Is there any particular reason to set a max file size for a disk stgpool? (Assuming a setup where disk stgpool will migrate to tape stgpool)
>>> The rationale for setting a max file size goes something like this: suppose you have mostly small files backing up, and just a few humongous ones. Say you have a 50 GB disk pool, and one database that backs up 30 GB per night. When that 30 GB DB hits your disk pool, it immediately makes your pool 60% full. When migration starts, it will either displace a LOT of other files, or migrate the big thing out first anyway. So people who want most of their daily backups to stay available in the disk pool set a max file size to keep that big chunk from making too big a splash in the pond. Or, another way to look at it: if you know that one piece is so big it's going to migrate out first anyway, why spend the cycles to move it twice? Just go direct to tape. It's just another option you can use, if needed, to manage your particular situation.

2) Should the TSM server have its own stgpool for backing up itself?

>>> No need.

6) To *SM, all backups are incrementals (except for the first backup of a new client), is my general understanding. Is there a way to force a full backup of a particular client as a one-time operation? I'm guessing maybe not, but thought I might try asking, anyway. :)

>>> Yes. If you are using the GUI, look at the box at the top of the window. INCREMENTAL (in most cases use "incremental complete") just backs up changed files. Pull down ALWAYS BACKUP instead to force a backup whether files have changed or not. If you are using the command line, it's "dsmc incremental" vs. "dsmc selective". Remember, though: if your management class/backup copy group sets a limit on the number of versions you retain for files, doing forced backups affects that plan. Some of your retained versions will now be identical (and therefore useless...).

7) The biggest single question... I don't have a real good understanding of the purpose of copy stgpools. I've read a lot of documentation -- hundreds of pages of multiple docs, re-read, read old adsm-l mail, Google searches, etc...
but still just don't quite 'get it'. I can set up HACMP clusters and debug really obscure things, but this eludes me. ;)

>>> Two reasons for creating copy pool tapes:

1) Protection from physical tape damage. NEVER TRUST YOUR TAPE DRIVES completely. On VERY RARE occasions, even the highest-quality, enterprise-class tape drives will eat a tape. Chomp. The less expensive your tape drive, the more frequently it happens; the higher-capacity the tape, the more data lost when it does. As a storage manager, a cardinal rule is to NEVER RELY on having just one copy of a tape. Period. So even people who aren't doing offsite vaulting for disaster recovery (yet) should create a copy pool.

2) Off-site vaulting. People who want to be able to recover their data at a disaster site have to send copies of the data somewhere that is not the same physical location as the primary tape pool. MANY people copy their stuff to copy pool tapes daily or weekly, pull the copy pool tapes out of the robot, and send them to an offsite vault. That way your data still exists, at least in the vault, after a facility disaster.
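
For what it's worth, the max file size cutoff from answer 1 is the MAXSIZE parameter on the disk pool. A rough sketch from the admin command line (the pool names DISKPOOL and TAPEPOOL are hypothetical; adjust the size to your own "humongous file" threshold):

```
/* Files larger than 5 GB skip DISKPOOL and go straight to the     */
/* next pool in the hierarchy (the tape pool), instead of landing  */
/* on disk and immediately being migrated back out.                */
update stgpool DISKPOOL maxsize=5G
update stgpool DISKPOOL nextstgpool=TAPEPOOL
```

Run `query stgpool f=d` afterward to confirm the new Maximum Size Threshold took effect.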
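
And to make the answer to 6 concrete, the two client commands look something like this (the filespec is just an example; point it at whatever you want forced):

```
# Normal nightly behavior: back up only new/changed files.
dsmc incremental

# Forced "full": selective backup sends files whether changed or not.
dsmc selective "/home/*" -subdir=yes
```

Keep in mind the version-limit caveat above: every selective backup burns a retained version slot, identical data or not.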
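
A minimal sketch of the copy pool mechanics, assuming a sequential device class named LTOCLASS and pool names of my own invention:

```
/* One-time setup: a copy-type pool alongside your primary tape pool. */
define stgpool COPYPOOL LTOCLASS pooltype=copy maxscratch=50

/* Daily/weekly: copy primary pool data into the copy pool.           */
/* This is incremental -- only files not already in COPYPOOL move.    */
backup stgpool TAPEPOOL COPYPOOL
```

After the `backup stgpool` completes, you check the copy pool volumes out of the library and ship them to the vault; that's the offsite copy described in reason 2.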
