> On Aug 17, 2015, at 4:57 PM, Alan Altmark <alan_altm...@us.ibm.com> wrote:
>
> On Monday, 08/17/2015 at 03:04 EDT, Grzegorz Powiedziuk
> <gpowiedz...@gmail.com> wrote:
>>> On Aug 17, 2015, at 11:10 AM, Alan Altmark <alan_altm...@us.ibm.com> wrote:
>>>
>>> The limit of 32 is on the number of active NPIV subchannels, not the
>>> number of active connections being used by a given NPIV subchannel.
>>
>> Active NPIV subchannels - what exactly does that mean? If an NPIV device
>> just sits there in z/VM as "FREE", not attached to anything, is it
>> considered to be an active NPIV subchannel? I guess it is, but I want to
>> make it clear. If yes, then I shouldn't be doing what I've asked about,
>> and I have my answer.
>
> An active NPIV subchannel is one that is attached to a guest and is being
> used, or one that is part of an EDEVICE.
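For concreteness, here is roughly how one might poke at this from a
privileged CP console (a sketch from memory, not verified; 1B00 and
LINUX01 are placeholder names):

    q fcp free                  FCP subchannels sitting unattached ("FREE")
    attach 1b00 to linux01      once attached and in use, 1B00 counts
                                toward the 32-active limit on its CHPID
    q edevice                   subchannels backing an EDEVICE also count
                                as active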
Sweet!

>> I thought that the EQID is there so that z/VM knows which FCP device it
>> should pick on the target system when a machine is being LGR-ed. I am
>> not sure I understand what you meant by overloading or not overloading.
>> I mean, without an EQID it wouldn't even LGR a virtual machine. Can you
>> elaborate on that?
>
> The EQID allows CP to select an available equivalent device from the
> target system, where "equivalent" is defined to mean "with the same EQID
> and device type".
>
> Let's assume that you have
> a) Ten (10) FCP paths on systems A and B, and each path has 100
> NPIV-enabled subchannels defined on it, for a total of 1000 NPIV-enabled
> subchannels.
> b) 100 guests, each using two NPIV-enabled FCP subchannels, for a total
> of 200 active subchannels.
> c) You are using seven of your ten FCP paths, since you are placing no
> more than 32 active subchannels on each path.
>
> With me so far?

Yes!

> Now, if you use EQID PURPLE on those 200 subchannels (on both A and B),
> you will discover that each path on B will fill to its capacity (100)
> before CP moves on to the next. That means that if you relocated all 100
> guests, CP would be using exactly two paths on the target system.
>
> This behavior is a by-product of the fact that devices with the same EQID
> are consumed in device address order. No load balancing is performed.

Oh, I see now. Oops.

> So to address this, make sure that the EQID assigned to the subchannels
> on an FCP path is unique to that path. This will keep all the PURPLE
> subchannels on a single CHPID, all the RED ones on another, and all the
> BLUE ones on a third. (You might consider using an EQID like FCPnnn,
> where nnn is a sequence number that you assign as you consume FCP paths.)

Got it! Thank you so much for spending time on the explanation. That
definitely helps. The bottom line is: I can "over-provision" (or, in a
sense, thin-provision) FCP subchannels as long as I stay below 32 active
subchannels per CHPID and assign EQIDs "vertically" only - a separate EQID
for each pair of corresponding FCP paths on the two systems (one CHPID on
each). A rough sketch of what that could look like is at the end of this
message.

Best regards
Gregory Powiedziuk

> Alan Altmark
>
> Senior Managing z/VM and Linux Consultant
> Lab Services System z Delivery Practice
> IBM Systems & Technology Group
> ibm.com/systems/services/labservices
> office: 607.429.3323
> mobile: 607.321.7556
> alan_altm...@us.ibm.com
> IBM Endicott

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
----------------------------------------------------------------------
For more information on Linux on System z, visit
http://wiki.linuxvm.org/
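A minimal SYSTEM CONFIG sketch of the "vertical" EQID scheme described
above (untested; the device ranges, CHPID numbers, and FCPnnn names are
invented for illustration - the same statements would appear on both
systems):

    /* One EQID per FCP path, identical on system A and system B        */
    RDEVICE 1B00-1B63 EQID FCP001 TYPE FCP   /* subchannels on CHPID 50 */
    RDEVICE 1C00-1C63 EQID FCP002 TYPE FCP   /* subchannels on CHPID 51 */
    RDEVICE 1D00-1D63 EQID FCP003 TYPE FCP   /* subchannels on CHPID 52 */

With per-path EQIDs, a guest relocating with a subchannel from FCP001 can
only land on the FCP001 path on the target system, so the relocated
subchannels spread across paths instead of filling up the first path that
happens to match.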