I've said this before, but I'll repeat it: based on painful experience, it makes a HUGE difference getting TSM to back up a Windows fileserver with 6TB and millions of files located on 2 monster volumes versus getting it to back up the same amount of data spread across 6 or 8 (merely) large volumes. Journal databases are a case in point. My suggestions are:

1. Limit volumes to no more than 600GB each (use DFS if necessary)
2. Set memoryefficientbackup yes
3. Turn on journaling (back up a volume at a time if needed to get the journals to initialize)
4. If there are a lot of .pst files stored, consider using subfile backup
5. Upgrade to x64 Windows when possible

Steve Schaub
Systems Engineer, WNI
BlueCross BlueShield of Tennessee
423-535-6574 (desk)
423-785-7347 (cell)
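A minimal sketch of what suggestions 2-4 above might look like in the client options, assuming the standard TSM Windows client option names (the cache path and size values here are illustrative assumptions, not recommendations):

```
* dsm.opt -- illustrative sketch only
MEMORYEFFICIENTBACKUP YES           * scan one directory at a time
SUBFILEBACKUP         YES           * adaptive subfile backup for big .pst files
SUBFILECACHEPATH      c:\tsmcache   * hypothetical cache location
SUBFILECACHESIZE      400           * hypothetical cache size in MB
```

Journaling itself is configured in the journal daemon's tsmjbbd.ini rather than dsm.opt, along these lines (drive letters are hypothetical):

```
[JournaledFileSystemSettings]
JournaledFileSystems=d: e:
```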
-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Bos, Karel
Sent: Tuesday, December 12, 2006 10:15 AM
To: [email protected]
Subject: Re: [ADSM-L] JBB Question(s)

Hi,

I partly agree with the OS statement. Partly, because I find it difficult to explain why an OS is able to contain one disk with 6+ million files, but the backup application isn't able to get the backup stable (journaling) or working at all (normal incremental) without using time-consuming options like memory-efficient backup.

Splitting a disk over multiple nodes means hard-coding the subdirectories under the root in the opt files. If a system administrator puts new data in a different folder, that data will be missed. The workaround is adding an extra node with an exclude.dir for all directories already being managed by the other nodes.

But what if the root is the container of all the data? Meaning, the profile disk of a Windows box with all profiles (6000+) in the root of the disk? Do I really want to be forced to configure multiple nodes + one extra to get the backup of this monster running? MemEff runs for over 36 hours, and the journal db grows past 2 GB within 24 hours.

Regards,

Karel

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Mark Stapleton
Sent: Tuesday, December 12, 2006 15:46
To: [email protected]
Subject: Re: JBB Question(s)

From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Otto Chvosta
>After that dbviewb reports that the journal state is 'not valid'. So we
>tried a further incremental backup (scheduled) to get a valid state of
>the journal database.
>This incremental was stopped with
>
>ANS1999E Incremental processing of '\\fileserver\q$' stopped.
>ANS1030E The operating system refused a TSM request for memory
>allocation.
>
>We tried it again and again ...
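The extra "safety net" node Karel describes could be sketched like this — one catch-all node whose opt file excludes everything the dedicated nodes already cover, so a newly created root folder still gets backed up by someone (node name and directory names here are hypothetical):

```
* dsm-catchall.opt -- safety-net node (illustrative names only)
NODENAME    FILESRV_CATCHALL
DOMAIN      d:
* exclude the directories already owned by the other nodes
EXCLUDE.DIR d:\data1
EXCLUDE.DIR d:\data2
EXCLUDE.DIR d:\data3
```

The maintenance burden is exactly what Karel objects to: every time a dedicated node takes over a new directory, this exclude list has to be updated by hand.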
>same result :-(((

From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Schaub, Steve
>Add this to the dsm.opt file and run the incremental again:
>
>*====================================================================*
>* Reduce memory usage by processing a directory at a time (slower)   *
>*====================================================================*
>memoryefficientbackup yes
>
>Large windows fileservers with deep directory structures often exhaust
>memory trying to traverse the entire filesystem during the initial scan.
>This option scans the filesystem in chunks.

To add a bit of detail: all modern Windows versions (except possibly Vista) have a hard-set limit on the total memory that can be dedicated to a single process thread. (I believe it's 192MB, but don't quote me on that.) It is a hard limit that cannot be gotten around.

Steve's workaround is one option. The other option is to use two nodenames for the same machine, with two option files/sets of TSM services/etc. One node backs up half the machine (by using include/exclude lines in the option files), and the other node backs up the other half.

The real fix? Use a real server OS.

--
Mark Stapleton ([EMAIL PROTECTED])
Senior TSM consultant

Please see the following link for the BlueCross BlueShield of Tennessee e-mail disclaimer: http://www.bcbst.com/email_disclaimer.shtm
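The two-nodename split Mark suggests might look like this in the option files — each node claims the whole drive but excludes the half the other node owns, so the union covers everything (node names and directory names are hypothetical; each opt file would be paired with its own scheduler service):

```
* dsm-half1.opt -- node 1 backs up everything except the second half
NODENAME    FILESRV_HALF1
DOMAIN      d:
EXCLUDE.DIR d:\profiles_n_z

* dsm-half2.opt -- node 2 backs up only the second half
NODENAME    FILESRV_HALF2
DOMAIN      d:
EXCLUDE.DIR d:\profiles_a_m
```

Because each node walks only its half of the tree, the per-process memory needed for the directory scan is roughly halved, which is the point of the exercise.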
