<snip>
is there any logic behind why MVC uses the actual byte count and MVCSK uses the 
'number of bytes to the right'?
</snip>

As Wayne D pointed out, for MVC the user codes "n" and the instruction text 
holds "n-1". This is almost certainly for efficiency of encoding: an 8-bit 
length field covers a range of 1-256 bytes rather than 0-255 (a move of 0 
bytes being useless). You would have been very unhappy if a single MVC could 
move at most 255 bytes when copying a long string (particularly in the days 
before MVCL). FWIW, this is also why, if you EXecute an MVC, the value you put 
into the register is "n-1".
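
For example (a rough, untested sketch; the labels are made up), contrasting a 
coded length with one supplied at run time via EXecute:

         MVC   TARGET(256),SOURCE   coded n=256; length byte assembles as X'FF'
*
* With EXecute, the low-order byte of the register is OR'ed into the
* length byte of the executed copy of the MVC, so the register must
* hold n-1 as well.
         LH    1,LEN                R1 = n, the true byte count
         BCTR  1,0                  subtract 1: register now holds n-1
         EX    1,MOVEIT             execute the MVC with that length
* ...
MOVEIT   MVC   TARGET(0),SOURCE     length byte supplied by EX
*
LEN      DC    H'200'               some run-time length, 1-256
TARGET   DS    CL256
SOURCE   DS    CL256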

For MVCSK/MVCDK, the user does not code a length, and the length is not in the 
instruction text. The length is in a register, and the user puts the value 
there with a separate instruction.
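
If I recall the register assignments correctly (check the Principles of 
Operation to be sure), for MVCDK the length minus 1 goes in the low-order byte 
of general register 0, and the destination storage key goes in bits 56-59 of 
general register 1. A rough, untested sketch, with made-up labels:

         LA    0,200-1              GR0 bits 56-63: true length minus 1
         LA    1,X'80'              GR1 bits 56-59: destination key 8
         MVCDK TARGET,SOURCE        fetch with PSW key, store with GR1 key
*
TARGET   DS    CL200
SOURCE   DS    CL200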

So both actually use "number of bytes to the right" (or, as I think of it, 
"length minus one").

You could ask, "for MVCSK/MVCDK, since the length is in a register, why did 
you go with n-1 in the register instead of n?" I don't recall exactly, but it 
was likely for consistency with MVC (such as the EXecute case), for cost 
savings (perhaps being able to share part of the implementation), or both.

As to the initial question, as pointed out, it depends on the type of the SVC. 
The SVC owner knows what type it is (because they defined it) and can look in 
the right place for that type of SVC, just as they look in the right place for 
the caller's registers (for which the answer is different from that for the 
PSW/key, but similarly depends on the type of the SVC).

Peter Relson
z/OS Core Technology Design

