We tried to go raw and I don't think we could. I can't remember why - I think it had to do with RH Linux restrictions.
The DB is mirrored across two disk subsystems/controllers. The primary copy is on internal 500GB SATA RAID-1 drives (DB, OS and logs only) and the mirrors are on an 8-drive 750GB SATA RAID-5 array sharing the 4TB backuppool LZ.

Here is the last expire:

04/26/2008 08:00:17 ANR0984I Process 92 for EXPIRE INVENTORY started in the BACKGROUND at 08:00:17 AM. (SESSION: 36385, PROCESS: 92)
04/28/2008 00:56:26 ANR0987I Process 92 for EXPIRE INVENTORY running in the BACKGROUND processed 5579839 items with a completion state of SUCCESS at 12:56:26 AM. (SESSION: 36385, PROCESS: 92)

Here is one of the longer runs:

03/22/2008 11:00:06 ANR2750I Starting scheduled command EXPIRE_INVENTORY (EXPIRE INVENTORY ). (SESSION: 60470)
03/24/2008 09:47:34 ANR0987I Process 485 for EXPIRE INVENTORY running in the BACKGROUND processed 1141005 items with a completion state of SUCCESS at 09:47:34 AM. (SESSION: 60470, PROCESS: 485)


"Mueller, Ken" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[email protected]>
04/28/2008 11:45 AM
Please respond to "ADSM: Dist Stor Manager" <[email protected]>

To: [email protected]
cc:
Subject: Re: [ADSM-L] DB Bufferpool sizing - continued

Zoltan-

We also run TSM on a dedicated Linux/Intel server, although on a smaller scale than you. The server has 2.5GB RAM and two 2GHz Xeons. Our DB is 23GB, of which 70% is in use. The buffer pool is set at 32MB (8192 pages) with selftune on. Expiration is run daily and typically completes in 4-7 minutes!

I'm curious why your experience is vastly different. Are your DB volumes part of a file system, or raw volumes? Ours are file system volumes (ext3). Given Linux's propensity for using unallocated RAM as file system cache, and the generous memory this server has, a significantly higher number of DB pages may be in memory than meets the eye. Using raw volumes bypasses this caching. Perhaps the Linux file system cache management is more efficient than the TSM buffer pool management - maybe the adage "less is more" applies here in terms of the TSM buffer pool?

How many objects get processed during your expiration runs? We're on the order of tens of thousands of objects.

-Ken

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Zoltan Forray/AC/VCU
Sent: Monday, April 28, 2008 10:41 AM
To: [email protected]
Subject: DB Bufferpool sizing - continued

Pertaining to the recent discussion of buffpool sizing (larger vs. smaller, CPU usage, etc.), I would like to get some opinions. I followed the discussion and am aware of the issue of going too large and killing any benefit through increased CPU usage and such. So, I have been experimenting.

My big Linux TSM server has 8GB RAM and is dedicated to TSM, so other products using/sharing the memory is not an issue. The TSM DB is 160GB assigned, with 77% used. The buffpoolsize (with selftune) was set to 768MB. I just bumped it to 1GB (still at the 1/8-of-real-memory guideline) to see if it would improve things, using EXPIRE INVENTORY as a benchmark and trying to ignore the "Cache Hit%" value (which I could never keep at or above 99%).

Before the change, I was seeing EXPIRE INVENTORY running 46-51 hours. This past weekend, it ran 40 hours. I realize that having only one test result after the change is hardly a good indicator of either positive or negative results.

So, my question is this: do I bump it to 1.25-1.5GB (still well below the "guidelines" of 1/8-1/2 of real memory) and see if that improves things further, or do I drop down to 512MB to see if that helps more than the memory increase did?
Is anyone else willing to share their DB sizes/buffpool/expire inventory durations? I'm open to all thoughts (besides the need for another TSM server... that is already in the works, and we have stopped adding new nodes to this server).
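
For anyone sanity-checking the sizing numbers in this thread, here is a rough Python sketch of the page/megabyte arithmetic. It assumes the 4KB page size implied by Ken's figures (a 32MB buffer pool = 8192 pages) and the 1/8-to-1/2-of-real-memory rule of thumb Zoltan cites; treat it as back-of-the-envelope math, not a tuning recommendation.

# Back-of-the-envelope BUFPOOLSIZE arithmetic, assuming 4KB pages
# (implied by 32MB == 8192 pages earlier in this thread).
PAGE_KB = 4

def mb_to_pages(mb):
    # Convert a buffer pool size in MB to the page count TSM reports.
    return mb * 1024 // PAGE_KB

def guideline_range_mb(ram_gb):
    # The 1/8-to-1/2-of-real-memory rule of thumb from the discussion.
    ram_mb = ram_gb * 1024
    return ram_mb // 8, ram_mb // 2

print(mb_to_pages(32))        # Ken's pool:   8192 pages
print(mb_to_pages(1024))      # Zoltan's 1GB: 262144 pages
print(guideline_range_mb(8))  # 8GB server -> (1024, 4096) MB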

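The expiration durations being compared can also be pulled straight out of activity-log excerpts like the ones quoted above. The sketch below is a hypothetical helper, not anything TSM ships: it only understands the MM/DD/YYYY timestamp format and the ANR0984I/ANR2750I/ANR0987I message layout shown in this thread, so adjust the patterns for other locales or server levels.

import re
from datetime import datetime

# Start/end of an EXPIRE INVENTORY run, matching the activity-log
# excerpts quoted in this thread (ANR0984I/ANR2750I start, ANR0987I end).
START_RE = re.compile(r'^(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2})\s+ANR(?:0984I|2750I)')
END_RE = re.compile(r'^(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2})\s+ANR0987I.*processed (\d+) items')
TS_FMT = '%m/%d/%Y %H:%M:%S'

def expire_stats(actlog_lines):
    # Return (hours elapsed, objects expired per hour) for one run.
    start = end = None
    items = 0
    for line in actlog_lines:
        m = START_RE.match(line)
        if m:
            start = datetime.strptime(m.group(1), TS_FMT)
        m = END_RE.match(line)
        if m:
            end = datetime.strptime(m.group(1), TS_FMT)
            items = int(m.group(2))
    if not (start and end):
        return None, None
    hours = (end - start).total_seconds() / 3600
    return hours, (items / hours if hours else 0.0)

hours, rate = expire_stats([
    "04/26/2008 08:00:17 ANR0984I Process 92 for EXPIRE INVENTORY started in the BACKGROUND at 08:00:17 AM.",
    "04/28/2008 00:56:26 ANR0987I Process 92 for EXPIRE INVENTORY running in the BACKGROUND processed 5579839 items with a completion state of SUCCESS at 12:56:26 AM.",
])
print(f"{hours:.1f} hours, about {rate:,.0f} objects/hour")  # ~40.9 hours, ~136,000/hour

Set against Ken's runs (tens of thousands of objects finishing in a few minutes), the 5.5 million objects expired here suggest that raw object counts alone account for much of the difference in wall-clock time.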