I know you are a smart guy, Rob, but I beg to differ with you on this
point, at least where VSE is concerned. I have done this and could
reproduce the results at will (that is, if I still worked there).
Environment:
4-CPU z890
8 GB real memory
z/VM 5.4
7 production VSE guests
VSE 2.7
z/VSE 3.1

With the VSEs using only 1 CPU (non-dedicated), I carefully selected a
one-hour job mix. After giving the VSEs 4 CPUs (non-dedicated), the same
job mix ran close to 1 hour 20 minutes. This is wall clock time, which
in the final analysis is the only measure that counts. (Two small
illustrative sketches of the locking and master-only effects Rob
describes are appended below his quoted note.)




On Wed, Sep 29, 2010 at 11:16 AM, Rob van der Heij <
rvdh...@velocitysoftware.com> wrote:

> On Wed, Sep 29, 2010 at 7:30 PM, Frank M. Ramaekers
> <framaek...@ailife.com> wrote:
> >
> > This was stated on the z/VSE LISTSERV, can someone confirm (or deny) it?
>
> > Here is a quick tip. When running under VM with multiple VSEs it is
> > usually NOT a good idea to define multiple CPUs to VSE and expect the
> > Turbo Dispatcher to handle them. Why? Because z/VM will not dispatch a
> > VSE unless it has ALL requested CPUs available. Often VSE could be
> > running but is waiting for z/VM to find a second free CPU.
>
> As stated here, we can simply conclude and demonstrate that this claim
> is not true. The more interesting part is to understand which
> statement *is* true and how that led to this rumor ;-)
>
> In general, it's a bad idea to have more virtual CPUs than you can get
> from z/VM when you have workload to use them. The total number of
> logical CPUs in z/VM is an upper bound for what you can get, but when
> you run 100 Linux guests on 5 IFLs, it's unlikely that a guest will
> have all 5 of its virtual CPUs dispatched at the same time.
>
> One of the challenges with virtualized multiprocessor guests is
> locking. When the virtual CPU holding a lock is not dispatched, another
> virtual CPU ends up spinning, waiting for the lock holder to free the
> lock (which does not happen, because the spinner is burning a real CPU
> that the holder needs). To avoid that, the guest OS issues a "voluntary
> time slice end" (DIAG44) to give up running and let the other virtual
> CPU get time to free the lock. Linux even uses a later version of that
> to tell z/VM which virtual CPU should be put at the front of the queue
> (with more than 2 virtual CPUs, "the other one" is a bit vague). I
> don't know how much locking is done in VSE.
>
> Another aspect is SMP. Linux is "symmetrical" and does not care which
> virtual CPU runs what. Some operating systems deal with serialization
> through "master only" tasks. z/VM used to have a lot of that in the
> ancient past, and has gotten rid of almost all of it by now. When the
> guest OS needs some work to run on one particular CPU (the master) but
> dispatches work on both virtual CPUs, you can't pick which one z/VM
> dispatches first. The question would be whether VSE has much
> master-only work.
>
> Rob
> --
> Rob van der Heij
> Velocity Software
> http://www.velocitysoftware.com/
>
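
To illustrate the spin lock point Rob makes above, here is a minimal C
sketch of the "spin a while, then give the time slice back" idea. This
is my own illustration, not VSE or Linux internals: the diag44_yield()
function is just a stand-in for the DIAGNOSE X'44' (voluntary time
slice end) a guest kernel would issue under z/VM, modeled here with
sched_yield() so the example compiles and runs anywhere.

/* spin_yield.c -- build with: cc -pthread spin_yield.c -o spin_yield */
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

#define SPIN_LIMIT 1000            /* spins before giving up the slice */

static atomic_flag lock = ATOMIC_FLAG_INIT;
static long counter;

static void diag44_yield(void)
{
    /* Real guest: "voluntary time slice end" so CP can run the
     * virtual CPU that actually holds the lock. */
    sched_yield();
}

static void guest_spin_lock(void)
{
    for (;;) {
        for (int i = 0; i < SPIN_LIMIT; i++)
            if (!atomic_flag_test_and_set_explicit(&lock,
                                                   memory_order_acquire))
                return;            /* got the lock */
        /* The holder is probably not dispatched; spinning longer
         * only burns the real CPU it needs. */
        diag44_yield();
    }
}

static void guest_spin_unlock(void)
{
    atomic_flag_clear_explicit(&lock, memory_order_release);
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        guest_spin_lock();
        counter++;                 /* the protected work */
        guest_spin_unlock();
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld (expected 400000)\n", counter);
    return 0;
}

Without the yield, spinners on fewer real CPUs can keep the lock holder
off the CPU for a long time; with it, the spinner steps aside and the
lock gets released sooner. The directed-yield variant Rob mentions
refines this by naming which virtual CPU should run next.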

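On the "master only" question, the toy program below is my own analogy,
not VSE internals. It funnels every serialized request through one
designated "master" thread; whenever the master is not running (for a
guest under z/VM: its virtual CPU is not the one being dispatched),
every worker that needs master-only service just waits, no matter how
many other CPUs are free. That is the exposure Rob is asking about.

/* master_only.c -- build with: cc -pthread master_only.c -o master_only */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;
static int pending;                /* requests queued for the master */
static int completed;              /* requests the master finished   */
static int shutting_down;

/* The single "master": the only thread allowed to do this work. */
static void *master(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&q_lock);
    while (!shutting_down || pending > 0) {
        while (pending == 0 && !shutting_down)
            pthread_cond_wait(&q_cond, &q_lock);
        while (pending > 0) {
            pending--;
            completed++;           /* the serialized service itself */
        }
        pthread_cond_broadcast(&q_cond);
    }
    pthread_mutex_unlock(&q_lock);
    return NULL;
}

/* Workers hand work to the master and block until it is done. */
static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&q_lock);
        pending++;
        int want = completed + pending;
        pthread_cond_broadcast(&q_cond);
        while (completed < want)   /* stuck until the master runs */
            pthread_cond_wait(&q_cond, &q_lock);
        pthread_mutex_unlock(&q_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t m, w[3];
    pthread_create(&m, NULL, master, NULL);
    for (int i = 0; i < 3; i++)
        pthread_create(&w[i], NULL, worker, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(w[i], NULL);

    pthread_mutex_lock(&q_lock);
    shutting_down = 1;
    pthread_cond_broadcast(&q_cond);
    pthread_mutex_unlock(&q_lock);
    pthread_join(m, NULL);

    printf("master completed %d requests (expected 3000)\n", completed);
    return 0;
}

However much CPU the workers get, throughput is capped by how often the
master itself gets to run, which is why master-only work and extra
virtual CPUs tend to be a poor mix.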