Are there any performance implications with doing it this way as opposed to 
HiperSocket directly to each guest?

Yes - I'll leave it to others to quantify it exactly, but it will use a
non-zero amount of 390 CPU to do the packet forwarding between interfaces.
Since there is no external connection to the hipersocket, you can't offload
the routing or switching to a non-390 CPU. (Another reason IBM should have
convinced Cisco/Nortel to produce a bus-attached router, similar to the
chassis router module they developed for the BladeCenter, to put in a Z.)

Another option is to use a dedicated 10G OSA for this traffic on both LPARs
and connect the two physical ports to an external dedicated switch. That has a
much smaller internal CPU overhead, but it's certainly not cheap. The CPU used
to drive the adapters and do the data moving is accounted against CP, not an
individual virtual machine, AFAICT.

Hipersockets (and any attached-device strategy) don't work well at massive
scale. You just can't install enough of them for a big farm, so you have to
start using virtual tricks.

You're probably on a z10, so here's another idea - try defining an L2 VSWITCH
using a hipersocket device - I faintly remember reading somewhere that
hipersockets got L2 capabilities at some point. I don't know if it'll work
(never tried it), but if it will, then use that instead of individual UCBs
attached to guests. Define 2-3 UCBs to the VSWITCH just in case (although if a
HS device fails, you're already in deep something), and use VLANs to separate
the traffic.
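
To make that concrete, here's roughly what I'd try. The device numbers, the
HSVSW1 switch name, the LINUX01 guest and the VLAN ids below are all made up
for illustration, and I've never checked whether DEFINE VSWITCH will actually
accept HiperSocket RDEVs, so verify the operands against the CP Planning and
Administration book before trying it.

In SYSTEM CONFIG (or dynamically with the CP command), a layer-2 switch
backed by the hipersocket devices, with spare RDEVs listed for failover and a
default VLAN id so the switch is VLAN-aware:

   DEFINE VSWITCH HSVSW1 RDEV E000 E003 E006 ETHERNET VLAN 1

In each guest's directory entry, a virtual NIC coupled to it instead of a
dedicated UCB:

   NICDEF 0600 TYPE QDIO LAN SYSTEM HSVSW1

Then authorize each guest onto its own VLAN to keep the traffic separated:

   SET VSWITCH HSVSW1 GRANT LINUX01 VLAN 100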

