Lindsay offers good advice on skipped files and the part they play in judging how a backup went. Too often, sites try to evaluate backups from the standpoint of the TSM server. That's unhealthy, and can result in many missed files - which come dramatically to light when a crunch occurs and a restore doesn't return everything that was expected.
It's natural to want a quick, binary answer to the question, "Was node X's backup successful?" But the volume of data involved in a backup does not lend itself to a yes/no answer. Many things can be problematic in a backup: performance problems in retries of busy files; files left out of Linux backups because their character sets differ from the client's locale setting; unintended excludes; or an overlooked need to add DOMain specs for new file systems.

If you don't scan the backup and dsmerror.log (via locally written utilities or commercial packages) and compare the results to the realities of your file systems, you won't see such issues, and may be lulled into a false sense of security. Which is to say: the TSM server is not the place to evaluate backups. That needs to be done on the client, by the client administrator, who is responsible for the data.

Richard Sims at Boston University
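The kind of "locally written utility" mentioned above might start as something like the sketch below: pull error and warning entries out of dsmerror.log so skipped or failed files surface instead of being buried. This is only illustrative - the exact dsmerror.log message layout varies by client version, and the sample lines and ANS codes here are assumptions, not an authoritative format.

```python
import re

# Assumed shape of a dsmerror.log entry: a timestamp followed by an
# ANSnnnnE/W message code and text. Adjust the pattern to match the
# actual log format produced by your client version.
ERROR_LINE = re.compile(r"(ANS\d{4}[EW])\s+(.*)")

def scan_log(lines):
    """Return (message_code, message_text) pairs for error/warning entries."""
    hits = []
    for line in lines:
        m = ERROR_LINE.search(line)
        if m:
            hits.append((m.group(1), m.group(2)))
    return hits

# Hypothetical sample entries for demonstration only.
sample = [
    "04/12/2023 21:15:02 ANS4005E Error processing '/data/app.lock': file not found",
    "04/12/2023 21:15:03 routine informational entry",
    "04/12/2023 21:16:10 ANS1228E Sending of object '/data/busy.db' failed",
]

for code, msg in scan_log(sample):
    print(code, msg)
```

A real utility would then compare the flagged paths (plus the list of objects actually backed up) against a walk of the live file systems, which is the comparison step that catches unintended excludes and missing DOMain entries.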
