IODF devices for FCP do not point to disks - they are simply a hole through
which you dump frames into the network and pull frames back out.  It's a lot
more like OSA devices than DASD: the OSA device is not the endpoint in the
IP network that you are talking to, and neither is the FCP device in the
IODF.

You add device 4000 on CHPID 40 and device 4100 on CHPID 41 to your IODF as
you have drawn. Each CHPID plugs into a different switch, and each of those
switches in turn plugs into the storage device - roughly like the IOCP
sketch below.
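
Just to make that concrete, here is a rough IOCP sketch of the layout you
drew. The PCHID values are made-up placeholders and shop-specific keywords
(PARTITION and so on) are left out, so treat it as illustration rather than
something to paste in:

   CHPID    PATH=(CSS(0),40),SHARED,TYPE=FCP,PCHID=1C0
   CHPID    PATH=(CSS(0),41),SHARED,TYPE=FCP,PCHID=1D0
   CNTLUNIT CUNUMBR=4000,PATH=((CSS(0),40)),UNIT=FCP
   IODEVICE ADDRESS=(4000,032),CUNUMBR=4000,UNIT=FCP
   CNTLUNIT CUNUMBR=4100,PATH=((CSS(0),41)),UNIT=FCP
   IODEVICE ADDRESS=(4100,032),CUNUMBR=4100,UNIT=FCP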

When you configure devices 4000 and 4100 online in Linux, no storage
appears. All that happens is that the two PCHIDs log in to the switch that
each is plugged into - the commands below show the usual way to do that.
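
On the Linux side that is just the normal CCW device bring-up from
s390-tools - a minimal sketch using one device from each CHPID (your distro
may prefer chzdev or a zfcp config file instead, same idea):

   cio_ignore -r 0.0.4000,0.0.4100   # make sure the devices are not on the ignore list
   chccwdev -e 0.0.4000              # set the FCP device on CHPID 40 online
   chccwdev -e 0.0.4100              # set the FCP device on CHPID 41 online
   lszfcp -H                         # list the FCP adapters the kernel now knows about
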
Once the devices have logged in to the switch, the SAN admin can create a
zone that permits the WWPN of the Z adapter to talk to the WWPN of the
storage box. A WWPN is kinda the SAN equivalent of an IP address - it
uniquely identifies an endpoint to the storage network. The FCP devices
on the Z have WWPNs, the storage device ports also have WWPNs, and when
you create a zone in the storage network containing one host WWPN and one
storage WWPN, the SAN will permit those two endpoints to talk to each other.
This is very different from FICON CUP network management, where you permit
port 23 to talk to port 4D - the FICON method is a switch-port to
switch-port access control method. It does not care what is at the other
end of the port, just that this port can talk to that port in the
switch.   A SAN WWPN zone says that "this endpoint WWPN" in the network can
talk to "that endpoint WWPN" in the network - no matter what physical
switch ports sit between them. You can pull the Z-side WWPNs to hand to
your SAN admin straight out of sysfs, as shown below.
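
A sketch of grabbing those host WWPNs once the FCP devices are online (the
host numbers will differ on your system):

   lszfcp -H                               # shows which SCSI host belongs to which FCP device
   cat /sys/class/fc_host/host*/port_name  # the WWPN of each online FCP channel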

Once the zone is in place and active, the Linux OS will see the storage
controller - and now in the storage controller you have to tell it about
the WWPNs of the Linux host and which virtual disks they are allowed to use.
This is called LUN masking or mapping, depending on whose storage you're
using.  Once you create a host definition on the storage controller that
lists the WWPNs of the Linux FCP devices, and map a LUN to that host
definition, Linux will see SCSI disk devices appear on the FCP channels.
The commands after this paragraph show how to check that from Linux.
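
A sketch of checking that from the Linux side. With NPIV and automatic LUN
scanning the disks just show up; on older setups without NPIV you attach
each LUN by hand through sysfs - the WWPN and LUN values in the last
command are made-up placeholders:

   lszfcp -D    # list the attached zfcp LUNs and the SCSI devices behind them
   lsscsi       # list the SCSI disks (/dev/sdX) the kernel now sees

   # only needed without NPIV / auto LUN scan - placeholder WWPN and LUN:
   echo 0x4010400000000000 > \
        /sys/bus/ccw/drivers/zfcp/0.0.4000/0x500507630300c562/unit_add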

You will see one /dev/sdXX device appear for each path to each LUN.  For
example, 1 LUN with 2 paths will show up as /dev/sda and /dev/sdb.   The Linux
OS can see both paths to the disk - and now you need to enable the Linux
multipath driver to manage those paths for you. It will create a new
/dev/mapper/<big_long_uuid> device for you to format or use for LVM or
whatever, and the driver will handle spreading I/O across the /dev/sda and
/dev/sdb paths for you, as well as path recovery if you take a switch down
for maintenance. A minimal multipath setup is sketched below.
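
A minimal sketch of turning multipathing on. Distro specifics differ - on
RHEL-family systems mpathconf writes the config for you, on SLES you just
enable the daemon - and the multipath.conf shown is only the bare defaults
stanza, not a tuned policy:

   mpathconf --enable --with_multipathd y   # RHEL family: create /etc/multipath.conf and start multipathd
   systemctl enable --now multipathd        # SLES and friends: enable the daemon directly

   # /etc/multipath.conf - bare minimum:
   defaults {
       user_friendly_names no    # keep the WWID-based /dev/mapper names
   }

   multipath -ll    # show each multipath map and the sd paths underneath it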

ref:
https://www.vm.ibm.com/education/lvc/LVC0924.pdf?cm_sp=dw-dwtv-_-linuxonz-_-PDF-for-3rdpartyhost-videos%20PDFs

On Mon, Feb 7, 2022 at 5:37 PM Davis, Larry (National VM Capability) <
[email protected]> wrote:

> Cross-posting to the Linux for s390 and z/VM list servers
>
>
> We are looking at using a SAN and connecting our z/VM system to it so that
> our Linux servers on z/VM can use it
>
> We have questions about connectivity in the IOCP to back-end LUNs in
> the SAN
>
> For FCP channels you are not allowed to do multipathing the way we normally
> do it on the mainframe, where a control unit has multiple paths to the
> same device.
>
> If this is correct, then two SAN switches that connect to the SAN disks
> can map to the same LUN in the back end
>
> Using this will allow 2 separate device addresses to be created that point
> to the same back-end LUN, correct?
>
>
> Then Linux multipathing will see the 2 separate device addresses as a single
> LUN and associate both addresses with the same SAN device
>
> CHPID 40 (CSS0) -----> Switch A ----->   ----------------------
>                                          |    SAN Devices     |
> CHPID 41 (CSS0) -----> Switch B ----->   ----------------------
>
> CHPID 40, CU 4000, devices 4000-401F
> CHPID 41, CU 4100, devices 4100-411F
>
> If the zone fabric relates all the outbound switch ports to the same
> xx00-xx1F devices in the back end, does this allow for Linux multipathing
> and eliminate the single-CHPID point of failure?
>
> Are there any documents that might clarify this for me?
>
>
> Larry Davis
> Senior z/VM Systems Architect,
> Leveraged Mainframe Team
> DXC Technology
>
> T +1.813.394.4240
> [email protected]
>


--
Jay Brenneman

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390
