On Thu, 29 Aug 2013 12:53:56 -0500, Mike Schwab <mike.a.sch...@gmail.com> wrote:

>See below.
>
>On Thu, Aug 29, 2013 at 8:15 AM, Lizette Koehler
><stars...@mindspring.com> wrote:
>> We are beginning to investigate the possibility of having a DLm and Data
>> Domain tapeless solution in our shop.  We are just looking
>>
>> If anyone in a medium to large shop is using this, and you would like to
>> share your observations with me, that would be great.  We have about 1PB of
>> tape storage (mostly HSM ML2 data) between my two data centers.
>We have 4.7TB per 10,000-volume range, times 50 volume ranges - 235TB
>total, 61% active volumes.  Make sure you activate TTL (time to live)
>scratch tape management, which reclaims scratch volumes as free space.
>
>> And I would
>> like that data to be replicated to both devices.  I will also have long term
>> retention needs for some of my data.  I need to have my primary tape data
>> in the secondary data center for DR.  It would be nice to have my critical
>> development data sent to the primary site just in case the DR site is the
>> one that is down.  Mostly my Source Management files.
>>
>We duplicate all VTapes from a DLm 960 to DLm 4080s.  I don't think we
>have dedup installed.
>>
>> EMC is suggesting a DLm8000 family and Data Domain for dedup of the data
>> for the mainframe.
>>
>> Some questions I might be interested in:
>>
>> How is the performance when the data has to be rehydrated?
>>
>> Is there any significant impact on distances between DD + DLm for DR usage?
>> What size transmission pipe will make it happy?  Our Primary and secondary
>> sites are about 800 miles apart.
>>
>Our sites are about 200 miles apart.  Our fiber falls behind during
>our evening batch backup window, but catches back up by 7am.  It is
>async, so it is not waiting for responses.  We are upgrading the
>fiber; the last link won't be in for another 18 months.
>
>>

One of my clients uses DLm960s: two at the primary site and two at
the DR site, each with about 550TB (1100TB per site) and 4 VTEs
(virtual tape engines, the blades that contain the virtual tape
drives), doing full replication of all virtual tape.  That includes
test data, because going back years there were just too many
instances of missing tapes at DR and standards not always being
followed.  Prior to DLm, Oracle/STK VSM was used and we recovered
100% of the tape at DR, so we just followed the same philosophy.
80/20 rule anyway... maybe more like 90/10.  IOW, only 10% is
probably test data anyway, so better safe than sorry.  My client
doesn't have Data Domain.  IIRC there was analysis done prior to
implementation, and it was determined that Data Domain wasn't
beneficial enough in the environment to offset the cost.
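
On the pipe sizing question: a rough back-of-the-envelope calculation
helps before talking to the network folks.  Here is a little Python
sketch of the arithmetic (the daily write volume and catch-up window
are made-up numbers - plug in your own, and leave the dedup ratio at
1.0 if you skip Data Domain):

# Rough WAN sizing for async virtual tape replication.
# All inputs are hypothetical - substitute your own site's numbers.
daily_write_tb = 20.0   # TB of virtual tape written per day (assumed)
catch_up_hours = 10.0   # window in which replication must catch up
dedup_ratio = 1.0       # 1.0 = no dedup; Data Domain would raise this

bits_to_ship = daily_write_tb * 1e12 * 8 / dedup_ratio
required_gbps = bits_to_ship / (catch_up_hours * 3600) / 1e9
print(f"minimum pipe: {required_gbps:.2f} Gb/s "
      f"(~{required_gbps * 1.2:.2f} Gb/s with ~20% protocol headroom)")

With those sample numbers it works out to about 4.4 Gb/s, call it 5+
Gb/s with headroom.  At 800 miles latency matters too, but since the
replication is async (as Mike said) that mostly determines how far
behind you run, not your throughput.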


>> Are there any concerns or issues that might be good to know up front?  Any
>> lessons learned?
>>
>We did not do TTL.   We ended up with some file systems with lots of
>little volumes, which chewed up the scratch tapes, had a lot of free
>space, and were attracting all the writes.  Be sure to implement TTL
>to keep the file systems balanced.
>>

One size does not fit all... my client is not using TTL.  Newer versions
of the DLm code also do things to keep the usage more even.  There
are other options ("penaltyup" vs. "roundrobin") in the scratch tape
allocation that can be configured per VTE (virtual tape engine) as well.
I actually use a combination of both (half the VTEs at each setting).
I am still on an older version of DLm (VTE) code - 2.4, I think - and
you would start out on a newer / better version where the file systems
on the back end work differently.
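
To illustrate what those two settings are getting at, here is a toy
model in Python - just the general idea, not DLm's actual algorithm,
and "penalty-style" below is only my loose reading of "penaltyup":

# Toy model of two scratch-allocation strategies across back-end
# file systems.  Illustrative only - not DLm's actual code.
from itertools import cycle

vtape_gb = 50
free_gb = {"fs1": 400, "fs2": 90, "fs3": 250}  # hypothetical free space

rr = cycle(list(free_gb))  # "roundrobin": rotate regardless of space

def pick_roundrobin():
    return next(rr)

def pick_penalty_style(space):
    # Penalize the fuller file systems, i.e. favor whichever one has
    # the most free space left, so usage stays balanced.
    return max(space, key=space.get)

for _ in range(4):
    fs = pick_penalty_style(free_gb)
    free_gb[fs] -= vtape_gb  # reserve the full virtual tape size
    print("roundrobin ->", pick_roundrobin(), " penalty-style ->", fs)

Note how round-robin still hands mounts to fs2 even though it is
nearly full, while the penalty-style pick levels the free space down
evenly - which is roughly the balance I'm after by running half the
VTEs at each setting.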

Planning is important - the volser ranges of virtual tapes, for example.
You want to way over-allocate in terms of pure numbers and not let them
be a limiting factor later.  You can always add ranges, but then you
will have a new file system that starts off unused, or very lightly
used, compared to all your other file systems.  Also, you don't want a
"large" virtual tape size in a very small environment.  For example, if
the total backing file systems are only 500GB, you really don't want
your virtual tape size to be 50GB.  When DLm goes to mount a scratch,
it has to assume you will write the entire virtual tape size.  So
imagine a back end tape system with 500GB total and 300GB used - 200GB
free, plenty of free space, right?  Five concurrent scratch mounts
would be a problem, since 5 x 50GB totals 250GB.  Ask me how I know.
;-)  That's close to a true life example.  My client's DLm environment
houses 8 sysplex / monoplex tape environments, and initially we used
the same virtual tape size for all of them, based on the 40GB STK 9840
physical tape (the previous environment was a mixture of VSM and
9840B/C, with HSM on physical 9840).  Made sense to us... and of
course the EMC DLm implementation team didn't think of this or warn us.
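
To make that arithmetic concrete, here is the same sizing check in a
few lines of Python (numbers straight from the example above):

# Worked version of the sizing example: how many concurrent scratch
# mounts can the back end safely absorb?
fs_total_gb = 500    # total backing file system capacity
fs_used_gb = 300     # space already consumed
vtape_size_gb = 50   # configured virtual tape size

free_gb = fs_total_gb - fs_used_gb          # 200GB free
safe_mounts = free_gb // vtape_size_gb      # DLm assumes full volumes
print(f"{free_gb}GB free -> {safe_mounts} safe concurrent mounts")
# -> 4; the fifth mount (5 x 50GB = 250GB) is what gets you in trouble.

The rule of thumb: expected peak concurrent scratch mounts times the
virtual tape size has to fit comfortably inside the free space on the
backing file systems.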

Anyway, if you search the archives for DLm and Zelden you should find
some other posts of mine that may also be helpful (or not).

HTH,

Mark
--
Mark Zelden - Zelden Consulting Services - z/OS, OS/390 and MVS       
mailto:m...@mzelden.com                                        
Mark's MVS Utilities: http://www.mzelden.com/mvsutil.html 
Systems Programming expert at http://expertanswercenter.techtarget.com/
