Ron,

  Thank you very much. That's precisely the type of info I needed.

Thank You,
Dave O'Brien
NIH Contractor
________________________________________
From: Ron Hawkins [ronjhawk...@sbcglobal.net]
Sent: Friday, January 27, 2012 10:03 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: 3390s on SAN?

David,

Firstly, I work for HDS.

I'm assuming you mean that the VSP is the storage in the non-mainframe dept,
and the aging HDS DASD is a 9980V or earlier controller.

In order for the VSP to support CKD volumes you will need to install a FICON
feature. This is two boards for a total of 16 FICON ports. These will take
up two half slots in the controller, which means they can share the slot
with a fibre channel board. You cannot define CKD volumes in the VSP unless
there are FICON boards installed.

I'm not sure what your capacity demands are, but if you are at less than
100,000 4KiB IOPS you should be OK with these boards using just two ports
per board. The architecture change in the VSP means you need to be sure you
have enough Virtual Storage Directors (VSD) to handle the total IO load
though. VSD is a fancy name for Microprocessor Board - we don't put the MP
on the Front or Back end cards any more. If the dark side is worried about
sharing these you can assign CKD volumes and LUNs to different VSD so the
activity is physically separated. That's not a recommendation, it's just
something you can do to appease any concerns.
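
A quick Python sketch of the bandwidth side of that 100,000 x 4KiB figure (the
~800 MB/s per-link rate for 8Gb FICON is an assumption here; the four active
ports follow the two-ports-per-board suggestion above):

iops = 100_000
io_size_bytes = 4 * 1024          # 4 KiB per I/O
active_ports = 4                  # two ports on each of the two FICON boards
assumed_link_mb_s = 800           # assumed ~800 MB/s usable on an 8Gb FICON link

total_mb_s = iops * io_size_bytes / 1_000_000
per_port_mb_s = total_mb_s / active_ports
print(f"aggregate ~{total_mb_s:.0f} MB/s, ~{per_port_mb_s:.0f} MB/s per port "
      f"(~{per_port_mb_s / assumed_link_mb_s:.0%} of an assumed {assumed_link_mb_s} MB/s link)")

At that block size the links themselves have plenty of headroom - it's the
per-port and VSD processing limits you'd watch, which is why the VSD count
matters.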

CKD volumes are formatted into a whole parity group. A parity group can be
4xHDD (R5 3D+1P, R10 2D+2D), or 8xHDD (R5 7D+1P, R6 6D+2P). Then there is a
method called concatenation that allows you to widen the RAID 10 stripe to
8xHDD, and the RAID 5 stripe to 16xHDD or 32xHDD. And finally there is a new
facility called MF-HDP (Hitachi Dynamic Provisioning) that can "chunk" a volume
across up to 2KxHDD, where a chunk is 672 tracks. MF-HDP is supported on
RAID-5 and RAID-6 (recommended).
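
To put the 672-track chunk in more familiar terms, here is a quick Python
sketch using standard 3390 geometry (56,664 bytes per track, 15 tracks per
cylinder); the per-model chunk counts are just arithmetic from those numbers:

import math

track_bytes = 56_664              # standard 3390 track capacity
trk_per_cyl = 15
chunk_tracks = 672                # MF-HDP chunk size quoted above
chunk_mb = chunk_tracks * track_bytes / 1_000_000
for model, cyls in [("3390-3", 3_339), ("3390-9", 10_017)]:
    tracks = cyls * trk_per_cyl
    print(f"{model}: {tracks:,} tracks -> {math.ceil(tracks / chunk_tracks)} "
          f"chunks of ~{chunk_mb:.0f} MB")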

MF-HDP creates virtual volumes in an HDP Pool. The sum of the virtual volumes
can be greater than the actual parity group(s), but I'd dredge up all my old
Iceberg/RVA experience with Net Capacity Load before going that route. All
the volumes in the pool are 3390-A (any size you want), so Dynamic Volume
Expansion (DVE) is a simpler route than over-provisioning. It's a simple
thing to create a bucket load of standard-size volumes in the pool -
15 minutes in the Storage Navigator GUI to create a couple of thousand
volumes. (I love this in the lab.)
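
If you do over-provision, the figure to watch is essentially the old Net
Capacity Load idea: how much of the real pool the virtual volumes have
actually consumed. A toy Python illustration of that arithmetic (all of the
capacity numbers below are invented, purely for the example):

pool_physical_tb = 20.0           # usable capacity behind the HDP pool (invented figure)
virtual_vols_tb = 30.0            # sum of the 3390-A virtual volumes defined (invented figure)
written_tb = 14.5                 # chunks actually allocated so far (invented figure)

subscription = virtual_vols_tb / pool_physical_tb
pool_used = written_tb / pool_physical_tb
print(f"over-subscribed {subscription:.0%}, pool {pool_used:.0%} full")
if pool_used > 0.80:              # pick your own comfort threshold
    print("add parity groups to the pool before new allocations start to hurt")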

From the small amount of info you gave it sounds like an investment in some
FICON Boards, a few spindles and the MF-HDP setup and you could be up and
running on the VSP. If there's any concern about running open and Mainframe
in the same Global cache you can carve out a Cache Logical Partition (CLPR)
for the CKD Parity Groups and the dark side will be none the wiser.
Personally I don't see a problem with this, but the first four CLPR are free
and it gives some people nice warm and fuzzy feelings. Just make it equal to
the cache in your current HDS Controller and you'll be good.

With dedicated VSD, CLPR, Channels and RAID Group it's a bit like a condo
for MF and the dark side. You get your own apartment, kitchen, and bathroom,
but you have to share the corridors, lift and stairs :-)

And you may want to check your current disk activity rates before jumping
straight into 600GB drives. You don't want to reduce 64x73GB HDD to 8x600GB HDD
and find you're stalling because you need to handle 2000 read cache misses a
second. SSDs would make that a no-brainer, but they are expensive.
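
To put numbers on the spindle-count point, a quick Python sketch (the 160
random read IOPS per spindle is a rule-of-thumb assumption on my part, and the
2000 misses a second is the example figure above):

read_misses_per_sec = 2_000
per_spindle_budget = 160          # assumed sustainable random read IOPS for one 15K HDD
for label, spindles in [("64 x 73GB", 64), ("8 x 600GB", 8)]:
    per_drive = read_misses_per_sec / spindles
    print(f"{label}: {per_drive:.0f} misses/s per spindle "
          f"({per_drive / per_spindle_budget:.0%} of the assumed budget)")

The same back-end load that is comfortable spread over 64 small drives would
saturate 8 large ones - exactly the stall described above.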

Good Luck

Ron




> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of
> O'Brien, David W. (NIH/CIT) [C]
> Sent: Thursday, January 26, 2012 7:53 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: [IBM-MAIN] 3390s on SAN?
>
> There is an internal proposal to carve several TB of dasd from one of our
> non-mainframe depts. and use it to replace our aging HDS DASD.
>
> Question: How easy/difficult is this to accomplish?
> We re-configured an array from 3390 mod-3s to mod 27/50s but the entire
> array needed to be cleared of data. I'm assuming the same will be true in
> this case.
> I'm also assuming that the disks will need to be re-modeled (I am probably
> not using the correct terminology) to be mainframe compliant.
> Are my assumptions correct?
> Thank You,
> Dave O'Brien
> NIH Contractor
>
> ----------------------------------------------------------------------
> For IBM-MAIN subscribe / signoff / archive access instructions, send email
> to lists...@bama.ua.edu with the message: INFO IBM-MAIN

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
