On 10 nov 2009, at 19:03, Kelly Lipp wrote:
> And most importantly, if you get a room full of TSM gurus, IBM types and folks like us, nobody will really agree on a hard number.
Actually, there is a hard number: 530 GB. :) Now as for recommendations... that's a totally different story.
What I have seen over 10 years and hundreds of TSM environments is that the number has gradually increased. When I first got into this business, 50 GB was huge. Now that's nothing. The norm these days is more like 100-150 GB, and things seem to be working just fine.
Well, as you've been teaching, we used to keep in mind that we might need to do a database audit. Quality has improved a lot over the last few major releases, so audits are not so much of an issue any more. But if you do need to audit, 2 to maybe 3 GB an hour is about the best you can expect, depending... So depending on how long you can take things off-line, you may want to keep your database well below that hard limit of 530 GB.
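To make that concrete, here's a back-of-envelope sketch (assuming the rough 2-3 GB/hour audit rate quoted above; the exact rate varies by hardware and database layout):

```python
# Rough estimate of off-line time for a TSM database audit,
# assuming an audit throughput of 2-3 GB per hour (from the post).

def audit_hours(db_gb, rate_gb_per_hour):
    """Hours needed to audit a database of db_gb at the given rate."""
    return db_gb / rate_gb_per_hour

for size in (50, 150, 530):
    best = audit_hours(size, 3)   # optimistic: 3 GB/hour
    worst = audit_hours(size, 2)  # pessimistic: 2 GB/hour
    print(f"{size} GB database: {best:.0f}-{worst:.0f} hours off-line")
```

So even a "normal" 150 GB database means 50-75 hours of downtime for a full audit, and a database at the 530 GB limit would take well over a week.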
I recall a period, soon after Dave Cannon arrived, when IBM's engineering focus was on quality. They did a ton of great work fixing what ailed the product, and we could see the results afterwards. That work set the base for what we're seeing today (up to 5.5.x anyway). Couple that with huge improvements in hardware performance and our favorite product has grown up nicely. Our problems are so different from everybody else's: they're still so worried about getting the weekly full backup done that they can't think about anything else. That, or adding the next freaking band-aid to their already half-assed (and declining) solution.
-- Kind regards, Remco Post [email protected] +31 6 248 21 622
