Lindsay Morris wrote: >You know, that randomization thing only works if the client is using POLLING >mode, which most people don't like. >If you're using PROMPTED mode for most/all clients, RANDOMIZATION is >ignored. So, to keep from slamming your network, you have to make an 8PM >schedule with 20 nodes, and a 9PM schedule with 20 more, etc. > >Of course, nowadays many people have a dedicated backup network for their >larger clients (SAN or maybe just another 100Mbps switch), so slamming a >network that you own is probably OK. > > > >>-----Original Message----- >>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of >>Mark Stapleton >>Sent: Monday, September 30, 2002 2:48 PM >>To: [EMAIL PROTECTED] >>Subject: Re: Discussion: 1 server sched for all vs 1 for each node >> >> >> >> >>>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of >>>Markus Veit >>> >>> >>>>if you have several client nodes, would it be better to have one client >>>>schedule >>>>where all nodes are in, or would it be better to have one schedule >>>>per node spread over time. >>>> >>>> >>TSM uses a randomization factor in all schedules, so as to avoid multiple >>clients all hitting the server at the same time. You didn't >>specify which OS >>platform, so I'll point you at the Windows 5.1 Administrator >>Guide page 376 >>for details. >> >>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of >>Tomas Hrouda >> >> >>>in my opinion you need not make extra schedules for simple >>>clients, because >>>files are migrated from the diskpool "per node" at a time. If several >>>migrations occurred during backups, I think your diskpool is too small. >>> >>> >>This is incorrect. >> >>-- >>Mark Stapleton ([EMAIL PROTECTED]) >>Certified TSM consultant >>Certified AIX system engineer >>MCSE >> >> >> >>>Hope this helps. >>> >>>With best regards >>>Tom >>> >>> >>> >>>Any feedback would be appreciated.
>>> >>>Mit freundlichen Grüßen / Best Regards >>> >>>Markus Veit >>> >>> >>>= >>> >>> >>> Mark, Lindsay, and the rest,
Lindsay is right that RANDOMIZATION only applies to "schedmode polling"; however, there is no need to break up the schedule as Lindsay suggests. It doesn't matter if you have 200 clients: you can schedule them all to start at once (assuming "schedmode prompted"), and the controlling factors will be "maxsessions" and "maxschedsessions". Let's assume you have "maxsessions = 50" and "maxschedsessions = 80%" (remember, maxschedsessions is set as a percentage, but "q status" shows the calculated number); then no more than 40 scheduled sessions will start at the beginning of the schedule (backup window). When a session becomes available (a client schedule has completed), the server will start another client.

This is the preferred way to do your schedules, since once you determine what the server and network can withstand, you can set the "maxsessions" and "maxschedsessions" values accordingly. You will get all 200 of your clients backed up as rapidly as possible without having to figure out how to load balance. You will also find that your backup windows become much smaller than with "polling", since this approach brings the server and network up to peak performance and keeps them there for as long as possible.

Of course, there are some considerations. This assumes that all of your clients are sending data to a diskpool; if you are going straight to tape, then obviously the number of drives will dictate the number of simultaneous connections. Never set "maxschedsessions" to 100%, because you always want to leave some sessions available for admin sessions. The "maxschedsessions" value can be changed on the fly if needed, or by an admin schedule.

Also, the original post made reference to how the migration process might be affected by this. I think Tomas had the right explanation, but I will explain it in a little more detail.
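Before moving on to migration, here is a toy simulation of the session throttling described above (my own illustration, not actual TSM code; the function names and the per-client durations are invented for the example):

```python
import math
from collections import deque

def scheduled_session_cap(maxsessions: int, maxsched_pct: int) -> int:
    # MAXSCHEDSESSIONS is expressed as a percentage of MAXSESSIONS;
    # "q status" reports the calculated absolute number.
    return math.floor(maxsessions * maxsched_pct / 100)

def simulate_window(client_minutes, maxsessions=50, maxsched_pct=80):
    """Toy prompted-mode simulation: start as many clients as the cap
    allows at the window open, then start the next waiting client
    whenever a running session completes.
    Returns (cap, peak concurrency observed)."""
    cap = scheduled_session_cap(maxsessions, maxsched_pct)
    waiting = deque(client_minutes)   # per-client backup durations
    running = []                      # minutes left per active session
    peak = 0
    while waiting or running:
        while waiting and len(running) < cap:
            running.append(waiting.popleft())
        peak = max(peak, len(running))
        soonest = min(running)        # advance to the next completion
        running = [t - soonest for t in running if t > soonest]
    return cap, peak

cap, peak = simulate_window([30] * 200)   # 200 clients, one schedule
```

With maxsessions = 50 and maxschedsessions = 80%, the cap works out to 40, and concurrency never exceeds it no matter how many clients share the single schedule.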
When migration occurs, the ITSM server looks for the largest "eligible" file space backup (FSBU, which is all the files backed up/archived for that file space during a given session; by the way, I just made that acronym up, so don't look for it in the documentation) and migrates that entire FSBU, as well as all other eligible FSBUs for that node. It then checks to see if it has satisfied the "low migration" value; if not, it looks to migrate the next node based on the same criteria.

What makes an FSBU "eligible" is based on two storage pool options, MIGDelay and MIGContinue. MIGDelay specifies the number of days that the FSBU must reside in the storage pool before it is eligible for migration. MIGContinue says that if, after migrating all of the eligible FSBUs, you are still not below the "low migration" value, you continue to migrate, ignoring the MIGDelay value. If MIGContinue is set to yes and all eligible FSBUs have been migrated, the ITSM server effectively decrements the MIGDelay value by one and repeats the algorithm until the "low migration" value is satisfied.

So the short answer to the original question is: if all the schedules are moving data to a diskpool, I would go with one schedule and control it with "maxsessions" and "maxschedsessions". Even W2K-based servers can easily handle 50-75 sessions at a time, and a mid-range Unix server can handle 150-200 sessions at once. The bottleneck is often network throughput, but remember: just because you have 100 sessions, they are not all moving data all the time.

This is an interesting topic; I would like to hear what others are doing in this regard as well.

--
Regards,

Mark D. Rodriguez
President
MDR Consulting, Inc.
===============================================================================
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===============================================================================
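P.S. A rough sketch of the migration selection logic I described, under my own reading of it (illustrative only; real ITSM migration considers more than this, and the function and field names, as well as the sample sizes and ages, are invented):

```python
def migrate(fsbus, pool_mb, low_mig_mb, migdelay_days, migcontinue):
    """fsbus: list of dicts with 'node', 'size_mb', 'age_days'.
    Picks the node owning the largest eligible FSBU, migrates all of
    that node's eligible FSBUs, rechecks the low-migration threshold,
    and repeats; MIGCONTINUE=YES relaxes MIGDELAY a day at a time."""
    migrated = []
    delay = migdelay_days
    while pool_mb > low_mig_mb:
        eligible = [f for f in fsbus
                    if f not in migrated and f['age_days'] >= delay]
        if not eligible:
            if migcontinue and delay > 0:
                delay -= 1        # effectively decrement MIGDELAY
                continue
            break                 # nothing left to migrate
        biggest = max(eligible, key=lambda f: f['size_mb'])
        for f in (g for g in eligible if g['node'] == biggest['node']):
            migrated.append(f)    # move the whole node's eligible FSBUs
            pool_mb -= f['size_mb']
    return migrated, pool_mb

fsbus = [
    {'node': 'A', 'size_mb': 500, 'age_days': 3},
    {'node': 'A', 'size_mb': 100, 'age_days': 5},
    {'node': 'B', 'size_mb': 300, 'age_days': 4},
    {'node': 'C', 'size_mb': 200, 'age_days': 1},
]
moved, left_mb = migrate(fsbus, pool_mb=1200, low_mig_mb=400,
                         migdelay_days=2, migcontinue=True)
```

In this sample, node A owns the largest eligible FSBU, so both of A's eligible FSBUs move first, then B's; C's FSBU is too young under MIGDELAY=2 and stays put once the low-migration threshold is reached.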
