VTL over-subscription as I understand it.

Definition: when (tape volume size) x (number of defined volumes) > native capacity of the VTL, you have over-subscribed. If you try to fill up all your defined volumes to their defined capacity, you will fail because you will run out of space on the VTL.
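The definition above amounts to a simple arithmetic check. A minimal sketch (the volume counts and capacities below are made-up illustrative numbers, not figures from this thread):

```python
def oversubscribed(volume_size_gb: float, num_volumes: int,
                   native_capacity_gb: float) -> bool:
    """A VTL is over-subscribed when the total defined virtual tape
    capacity exceeds the VTL's native disk capacity."""
    return volume_size_gb * num_volumes > native_capacity_gb

# 500 virtual volumes of 100 GB defined on a 40 TB VTL:
# 50 TB of defined capacity > 40 TB native, so over-subscribed.
print(oversubscribed(100, 500, 40_000))   # True
print(oversubscribed(10, 500, 40_000))    # False
```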
Why would one want to over-subscribe? If you define a large tape volume size (e.g. 100 GB) but only want to write 10 GB to a tape, then yes, it would be neat if the VTL allocated only the space actually written to the virtual tape volume (i.e. 10 GB). When would this be beneficial in the TSM application?

1) TSM DB backups: why waste a 100 GB volume on one 20 GB backup?
2) Collocation: if you collocate a client with 50 GB of data, why waste a 100 GB volume on that client?

But the problem is, as you point out, that when a tape moves from pending to scratch, getting the VTL to reclaim the space previously allocated to the virtual tape volume involves:

A) Checking the scratch volume out of TSM (so it will not attempt to use it during the following steps)
B) Deleting the volume from the VTL (this returns the space to the VTL)
C) Redefining the tape volume to the VTL
D) Checking in and labeling the redefined volume in TSM

(I imagine that a VTL could replace steps B and C by truncating a volume, but you would still have to get TSM to rewrite the truncated label.)

This is not a procedure I wanted to manage, either manually or in an automated fashion, so I chose the following instead:

1) Define small virtual tape volumes (e.g. 10 GB)
2) Do not use collocation
3) Do not over-subscribe

I have found tape mount time to be insignificant, and the smaller virtual tape sizes make collocation unnecessary. This is just my way of managing the trade-offs.

Thanks,
H. Milton Johnson

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Neil Schofield
Sent: Monday, June 11, 2007 5:50 PM
To: [email protected]
Subject: Re: [ADSM-L] How to Incorporate a CDL into a TSM environment?

Hi there

We too are in the throes of a debate about virtual vs. physical tape libraries. On the VTL side, much is made of the ability to over-provision the disk capacity, e.g. a 100 GB virtual tape will only occupy as much space on disk as has been written to it.
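The trade-off behind the small-volume choice above can be put in rough numbers. A sketch (the 20 GB backup follows the DB backup example in the message; on a fully provisioned, non-over-subscribed VTL, the unwritten remainder of the last volume is effectively stranded):

```python
import math

def stranded_space_gb(data_gb: float, volume_size_gb: float) -> float:
    """Space left unusable on the last, partially filled volume when
    `data_gb` of data is written to fixed-size virtual volumes on a
    fully provisioned VTL."""
    volumes_used = math.ceil(data_gb / volume_size_gb)
    return volumes_used * volume_size_gb - data_gb

# A 20 GB TSM DB backup:
print(stranded_space_gb(20, 100))  # 80.0 GB stranded on one 100 GB volume
print(stranded_space_gb(20, 10))   # 0.0 GB stranded with 10 GB volumes
```

The cost of the small volumes is more tape mounts, which the message above reports as insignificant on a VTL.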
As a result, so the theory goes, we need only consider the occupancy when sizing the VTL. In a TSM environment, this seems to be wrong on a number of counts:

- We still need to take into account the overhead of reclaimable space on a virtual tape. This can be managed by varying the reclamation thresholds, but not eliminated.
- A pending-delete volume will still occupy underlying disk capacity equivalent to its full size.
- Since the conversion of a pending-delete volume to a scratch tape takes place purely in the TSM database, a virtual scratch tape will also occupy its full disk space on the VTL until it is re-used.

So am I correct in thinking that in the whole scratch, filling, full, reclaim, pending delete, scratch lifecycle of a storage pool volume, the only time we get the benefit of the over-provisioning is while the volume is filling?

In our current physical tape environment (with collocation at the node level), only about 20% of volumes are filling. Ignoring de-dupe for now, does it seem reasonable to base the sizing of a replacement on the total physical tape capacity of the existing library plus some estimate of expected growth?

Regards
Neil Schofield
Yorkshire Water Services Ltd.
