Thanks Alan, this looks good. It is very similar to the problem we have. For example, I run migration on our three disk pools to clear them down at 18:00, 21:00 and 22:00 respectively; they finish at 18:32, 21:30 and 22:30. Can I assume that any migration after 22:30 indicates spill processing is occurring?
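One way to automate that check is to scan the activity log for migration processes that start after the last scheduled window ends. The sketch below is a minimal illustration: the sample log lines, the message text, and the timestamp format are assumptions, not exact TSM output, so the parsing would need adjusting to whatever `dsmadmc q actlog` actually returns on your server.

```python
from datetime import datetime, time

# Hypothetical sample of "query actlog" output (date/time + message).
# Real dsmadmc output will differ; adjust the parsing to your format.
SAMPLE_ACTLOG = [
    "10/26/2001 18:05:12 ANR0984I Process 42 for MIGRATION started.",
    "10/26/2001 21:10:03 ANR0984I Process 43 for MIGRATION started.",
    "10/26/2001 23:15:44 ANR0984I Process 44 for MIGRATION started.",
]

# The last scheduled migration finishes at 22:30; anything starting
# after that is assumed to be threshold-driven "spill" migration.
CUTOFF = time(22, 30)

def spill_migrations(lines, cutoff=CUTOFF):
    """Return migration actlog lines timestamped after the cutoff."""
    hits = []
    for line in lines:
        stamp = " ".join(line.split()[:2])  # e.g. "10/26/2001 23:15:44"
        when = datetime.strptime(stamp, "%m/%d/%Y %H:%M:%S")
        if "MIGRATION" in line and when.time() > cutoff:
            hits.append(line)
    return hits

for hit in spill_migrations(SAMPLE_ACTLOG):
    print("possible spill:", hit)
```

Only the 23:15 entry is flagged here; the 18:05 and 21:10 starts fall inside the scheduled windows.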
Also, I may utilise the 1 GB file limit, but I assume that every time a file is greater than 1 GB, a tape mount would be requested?

> ----------
> From: Alan Davenport[SMTP:[EMAIL PROTECTED]]
> Reply To: ADSM: Dist Stor Manager
> Sent: 26 October 2001 13:23
> To: [EMAIL PROTECTED]
> Subject: Re: Spill processing please help
>
> Hello John. I'm in the same situation, except that getting extra 3390 DASD
> devices for my pool is like pulling hens' teeth! (:
> Our client backup window starts at 20:00, and in the morning I issue this
> command from an admin command line:
>
> Q AC BEGINT=20:00 BEGIND=-1 S=MIGRATION
>
> This will show me whether migration has kicked in during the backup window.
>
> Also, you can limit the size of the files that go to the disc pool. I have
> my limit set at 1 GB to prevent large files from filling the pool. The
> command to do this is:
>
> UPD STG poolname MAXSIZE=1G NEXTSTGPOOL=your_tape_pool_name
>
> With the MAXSIZE parm set this way, any file larger than 1 GB will go
> directly to tape, bypassing the disc pool.
>
> Hope this helps!
>
> Take care,
> Al
>
> Alan Davenport
> Senior Storage Administrator
> Selective Insurance
> [EMAIL PROTECTED]
> (973) 948-1306
>
> + -----Original Message-----
> + From: Doherty, John (ANFIS) [mailto:[EMAIL PROTECTED]]
> + Sent: Friday, October 26, 2001 7:56 AM
> + To: [EMAIL PROTECTED]
> + Subject: Spill processing please help
> +
> + Hi folks, I am back on the newsgroup because I desperately would like
> + some information. We currently run TSM 3.1 on an MVS 390 mainframe.
> + As you are aware, the definition of a disk storage pool contains an
> + option to migrate to the next storage pool (in our case cartridge) if
> + the disk pool exceeds a specified value. We have these migration
> + thresholds set at high 90%, low 70%: when the disk pool exceeds 90%,
> + migrate data until it reaches 70%, then stop.
> + I believe this is different from a forced migration, where you empty
> + the storage pool in preparation for the next backup process.
> +
> + The question I have been asked is: how do I know if we are spilling
> + into this cartridge pool, i.e. how often is the disk pool exceeding
> + 90% and forcing migration? If this is happening, we are incurring
> + extra processing and need to increase the size of the DASD pool.
> +
> + Can anyone tell me how to find out if this spill process is occurring?
> + I.e. does anyone have a script, batch job, etc. to determine when (if)
> + this is happening? Even knowing what I should be looking for would be
> + helpful.
> +
> + Await your replies.
> +
> + Thanks, John
> +
> + > John Doherty
> + > Technical Specialist
> + > Storage Management
> + > Email: [EMAIL PROTECTED]
> + > Tel: +44 (0) 141 275 7793
> + > Fax: +44 (0) 141 275 9199
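For the "script or batch job" John asks about, another angle is to sample pool utilisation periodically and compare it against the high migration threshold: any reading at or above the high threshold means threshold-driven (spill) migration was triggered. The sketch below assumes you capture `query stgpool`-style figures as (pool, Pct Util, High Mig %) tuples; the pool names and the sampling mechanism are illustrative, not a TSM API.

```python
# Minimal spill detector over sampled storage-pool readings.
# Each sample: (pool name, current Pct Util, High Mig % threshold).

def crossed_high_threshold(samples):
    """Return (pool, pct) for pools whose utilisation reached the high
    migration threshold, i.e. where spill migration would have started."""
    return [(pool, pct) for pool, pct, high in samples if pct >= high]

# Example readings, e.g. collected hourly by a batch job:
readings = [
    ("DISKPOOL1", 91.2, 90),   # exceeded the 90% high threshold: spill
    ("DISKPOOL2", 74.0, 90),   # fine
    ("DISKPOOL3", 90.0, 90),   # at the threshold, migration kicks in
]

for pool, pct in crossed_high_threshold(readings):
    print(f"{pool} hit {pct}% - spill migration triggered")
```

Logging these hits over a few weeks would show how often each pool spills, which is exactly the evidence needed to justify growing the DASD pool.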
