The difference between FLIH and SLIH is basically packaging. In general, a FLIH 
saves context before calling an SLIH. It may also obtain a local lock. AFAIK 
only the External and SVC interrupt handlers have the distinction.

32-bit mode is problematic because BXH and BXLE operate on signed operands. 
It would also break the OS convention of flagging the end of a variable-length 
parameter list by setting the high-order bit of the last address word.
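A quick sketch of the arithmetic (in Python, just to illustrate; the values are made up): once addresses can exceed 2 GiB, bit 0 of a 32-bit word is part of the address, but a signed compare such as BXH/BXLE performs sees that bit as a sign bit, and MVS's parameter-list convention uses the very same bit as the end-of-list flag.

```python
def to_signed32(x):
    """View a 32-bit value the way BXH/BXLE compare operands: as signed."""
    return x - 0x100000000 if x & 0x80000000 else x

# Hypothetical loop index/limit straddling the 2 GiB line.
index = 0x80001000          # "address" above 2 GiB
limit = 0x7FFFF000          # limit below 2 GiB

# Unsigned (address) view: index is above the limit.
assert index > limit

# Signed view, as BXH/BXLE actually compare: index looks negative,
# so the branch decision contradicts the unsigned (address) meaning.
assert to_signed32(index) < to_signed32(limit)

# The same bit 0 is the conventional end-of-parameter-list marker:
last_parm = 0x00012340 | 0x80000000   # high bit set in last address word
assert last_parm & 0x80000000         # flag, indistinguishable from an address bit
```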

I don't see how the decision to go with 31-bit versus 32-bit addressing has any 
relevance to the decision to go with 63-bit or 64-bit addressing.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


________________________________________
From: IBM Mainframe Assembler List <[email protected]> on behalf 
of Keven <[email protected]>
Sent: Thursday, September 3, 2020 5:19 PM
To: [email protected]
Subject: Deep cuts

        Couple of things...
1) In MVS et al., what characteristics differentiate a First-Level Interrupt 
Handler from a Second-Level Interrupt Handler?  Is the transition from FLIH to 
SLIH determined by say, a state change due to LPSW/E?  Maybe transitioning from 
first-level to second-level is indicative of the interrupted 
dispatchable-unit’s state data having been copied to its own control blocks 
(TCB/RB, SRB etc.) from the processor’s (PSA, LCCA/PCCA etc.)?
2) When IBM S/370 engineers decided that, okay maybe 16 MiB of addressable 
storage isn’t enough after all but there’s no way y’all will ever need 4 GiB 
(I mean, c’mon) so we’re making our new XA architecture with a (31-bit) 2-GiB 
address space, I wonder, had they instead made XA a 32-bit system would that 
mean that z/Architecture would have to be limited to 63 bits in order to 
provide compatibility between (what would have been) 32-bit 390/ESA and 63-bit 
z/Architecture in a manner analogous to how compatibility between 24-bit S/370 
and 31-bit 370/XA systems was implemented?  Would (24, 32, 63) have been a 
better bit-size expansion sequence than was (24, 31, 64)?

Keven
