I have experimented with resourceutilization values above 10, but we do not use them in production. You are correct that IBM does not support them and that the documentation says the highest value is 10. As far as I can tell, they don't hard-code any limit. If the communication method is TCP, I have tested it at 60 and seen fifty-some threads created. Although, as you say, the greater number of threads just had to wait longer and didn't produce any faster aggregate speed.

If the communication method is shared memory, then the limit is 30 on Solaris. You can set resourceutilization as high as you want, but you won't start more than 30 threads. This is a consequence of the fact that dsmserv allocates a fixed memory buffer for client-server communication, which is only sufficient to sustain 30 client threads. Once clients are at or near the limit of 30, you will get a shared memory error if you try to start another client. The fixed buffer size used by dsmserv was coded many years ago, when prevailing server memory sizes were much smaller than today. I submitted an enhancement request a few weeks ago to increase this.
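For reference, these are the client options being discussed, as they would appear in a dsm.sys stanza. This is only an illustrative fragment; the server name is a made-up placeholder, and the values shown are examples, not recommendations:

```
* dsm.sys stanza (example only; server name TSMSRV1 is a placeholder)
* With COMMMethod SHAREDmem, the 30-thread ceiling described above applies.
SErvername  TSMSRV1
   COMMMethod          SHAREDmem
   RESOURCEUTILIZATION 10      * 10 is the documented maximum; higher is unsupported
```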
dsmc threads increase and decrease throughout the backup job. It starts one producer thread for each file system that it backs up, and each producer thread can start one or more consumer threads. The number of consumer threads seems to depend on how busy they are kept: as a producer thread finds more files that need to be backed up and all consumer threads are busy, dsmc will spin up more consumer threads. Conversely, once the demand for consumer threads subsides, it will eventually kill some of them.

Resourceutilization values above 10 offer a potential benefit by starting more producer threads, assuming that the backup time is bound primarily by the time to examine all the files. The same effect could also be achieved by starting separate dsmc backups for specified file system sets.

Roy J. Martin
mailto:[EMAIL PROTECTED]

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Roger Deschner
Sent: Tuesday, June 06, 2006 10:56 AM
To: [email protected]
Subject: Re: ? Should we set Resourceutilization > 10 if appropriate ?

At one point I was tempted to do this, with a client who backs up 1TB/night. But it would not help; in fact, things got slower as the numerous processes competed with each other and created worse disk I/O contention on BOTH the client and the server. Bandwidth utilization measured at the client NIC actually declined. You might have heard me whining about this problem on this list back in February of this year.

What was effective was attacking the backup performance problem at its source, by tuning the disk I/O subsystems on both the client and the server. At the server level, I had to work on both the disk storage pool that this client was backing up into, and the TSM database. There are no silver bullets here, but there might be bronze bullets: look at raising your TSM DB bufpoolsize, and also your OS's settings for disk buffers on the TSM disk storage pool volumes.
More effective than that, however, was simply buying more disk drives for both the database and the storage pool, and spreading the I/O load out farther.

The disk tuning worked. This client is now backing up in a reasonable time with RESOURCEUTILIZATION 10. But I watch it carefully: one of my key measures of TSM server performance is how long this huge client takes to back up.

Roger Deschner      University of Illinois at Chicago      [EMAIL PROTECTED]

On Mon, 5 Jun 2006, James R Owen wrote:

>Andy, et al.
>[Is anyone out there using RESOURCEUTILIZATION n w/ n > 10 ??]
>
>Andy Raibeck's presentation @ Oxford's 2001 TSM Symposium
>  http://tsm-symposium.oucs.ox.ac.uk/2001/papers/Raibeck.APeekUnderTheHood.PDF
>includes this table showing what will result from setting:
>
>RESOURCEUTILIZATION n   Max.Sess.   ProducerSess.   Threshold(Seconds)
>---------------------   ---------   -------------   ------------------
>     <default>              2            1                  30
>         1                  1            0                  45
>         2                  2            1                  45
>         3                  3            1                  45
>         4                  3            1                  30
>         5                  4            2                  30
>         6                  4            2                  20
>         7                  5            2                  20
>         8                  6            3                  20
>         9                  7            3                  20
>        10                  8            4                  10
> (undocumented:
>  11<=n<=100)               n          0.5n                 10
>
>and also includes these warnings:
>  Undocumented, internal values subject to change without notice.
>  RESOURCEUTILIZATION > 10 is unsupported.
>
>Management discourages use of undocumented/unsupported settings, but
>I'm arguing that we need to specify RESOURCEUTILIZATION 30 in order to
>effect efficient backups for our email servers:
>  4 IMAP servers, each w/ 4 CPUs, running linux client 5.2.3,
>  each backs up 15 FS sending ~200GB/night (compressed)
>  via 100Mb -> Gb ethernet
>  to our TSM 5.2.3 service's disk stgpool
>
>With RESOURCEUTILIZATION 10 specified, we never see more than 8
>simultaneous FS backups, and some of our 8 large IMAP filesystems are
>always the last backups to start, serially after other smaller FS
>backups complete! Testing w/ RESOURCEUTILIZATION 30 causes one client
>to start up 31 sessions, enabling all 15 FS backups to start essentially
>simultaneously.
>I expect the smaller FS backups will complete first, w/ the 8 larger
>IMAP FS backups completing later, but w/ all FS backups for each client
>finishing faster, because none will have to wait to start serially
>after other FS backups due to insufficient backup sessions being
>started.
>
>Asking only for your own advice, recognizing IBM probably does not
>allow you to recommend using unsupported/undocumented optional
>settings:
>
>Is my understanding of the unsupported/undocumented setting (w/ n > 10)
>correct? Are we risking some unanticipated problems trying to use
>RESOURCEUTILIZATION 30 to back up all four of these email servers
>simultaneously? [I believe we have sufficient network bandwidth, disk
>I/O capacity, and CPUs for the TSM clients and service.]
>
>Is there some important reason that IBM chose not to document and
>support n > 10 for RESOURCEUTILIZATION? [The higher settings would seem
>to be useful and appropriate for some high-bandwidth circumstances, or
>did I miss something?]
>
>Is there a simple way to specify the order in which FS are selected for
>backup when multi-threading is active?
>
>Thanks [hoping] for your advice!
>--
>[EMAIL PROTECTED] (203.432.6693)
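The producer/consumer scaling that Roy describes at the top of this thread can be illustrated with a toy sketch. To be clear, this is not dsmc's actual implementation; the function name, queue, and backlog threshold are all assumptions chosen only to mimic the observed behavior (one producer per file system, consumers added while the queue of pending files backs up, up to a cap analogous to Max.Sess.):

```python
import queue
import threading
import time

def backup_filesystems(filesystems, max_consumers=8, backlog_threshold=4):
    """Toy model of dsmc-style thread scaling: one producer thread per
    file system enumerates work; consumer threads are added while the
    queue of pending files backs up, capped at max_consumers."""
    work = queue.Queue()
    done = threading.Event()
    backed_up = []          # (filesystem, file) pairs "backed up"
    lock = threading.Lock()
    consumers = []

    def producer(fs, files):
        # Each producer scans one file system for candidate files.
        for f in files:
            work.put((fs, f))

    def consumer():
        # Consumers drain the queue until all producers are finished
        # and no work remains.
        while not (done.is_set() and work.empty()):
            try:
                item = work.get(timeout=0.05)
            except queue.Empty:
                continue
            with lock:
                backed_up.append(item)

    def start_consumer():
        t = threading.Thread(target=consumer)
        t.start()
        consumers.append(t)

    start_consumer()        # always at least one consumer
    producers = [threading.Thread(target=producer, args=(fs, files))
                 for fs, files in filesystems.items()]
    for p in producers:
        p.start()

    # Supervisor: while work is still arriving, grow the consumer pool
    # whenever the backlog exceeds the threshold (and shrink demand
    # naturally ends the extra threads once the queue drains).
    while any(p.is_alive() for p in producers) or not work.empty():
        if work.qsize() > backlog_threshold and len(consumers) < max_consumers:
            start_consumer()
        time.sleep(0.01)

    done.set()
    for t in consumers:
        t.join()
    return backed_up, len(consumers)
```

The point of the sketch is the scheduling shape, not the numbers: raising the consumer cap only helps while producers can keep the queue full, which matches Roger's observation that more threads beyond the I/O subsystem's capacity just wait longer.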
