I guess I answered my own question concerning the FLIH/SLIH transition. 
That is, it’s the point where status indicators for the interrupted work are 
copied from processor-specific control blocks to dispatchable-unit-specific 
ones, and control passes from one to the other (presumably via LPSW/E), so 
that interrupt processing continues under a dispatchable unit that allows 
further interrupts to be handled.  Although SVC and Program interrupts can’t 
be disabled, the FLIHs check themselves for recursive entry (but I think SVC 
13 is always honored).
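To make that concrete, here's a toy Python model of the hand-off just described. It's purely illustrative -- the names (Psa, Tcb, flih, slih) are invented stand-ins for the real control blocks and routines, not actual MVS code:

```python
# Conceptual model only -- Psa, Tcb, flih are invented stand-ins,
# not actual MVS control blocks or code.

from dataclasses import dataclass, field

@dataclass
class Psa:                      # per-processor save area (PSA-like)
    old_psw: int = 0
    saved_regs: list = field(default_factory=lambda: [0] * 16)

@dataclass
class Tcb:                      # dispatchable unit (TCB/RB-like)
    resume_psw: int = 0
    regs: list = field(default_factory=lambda: [0] * 16)

def flih(psa: Psa, current_task: Tcb, slih):
    """Runs disabled: moves status from processor-owned storage into the
    interrupted unit's own control block, then hands off to the SLIH,
    which can run with further interrupts allowed."""
    # 1. Copy the interrupted unit's status out of the per-processor area.
    current_task.resume_psw = psa.old_psw
    current_task.regs = psa.saved_regs[:]
    # 2. The PSA is now free to take the next interrupt; hand off to the SLIH.
    slih(current_task)

psa = Psa(old_psw=0x80001000, saved_regs=list(range(16)))
task = Tcb()
flih(psa, task, lambda t: None)
assert task.resume_psw == 0x80001000
```

The point the model captures is the one above: once the status has been copied out of the processor-specific area, that area can safely take another interrupt.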
As far as the 31/32-bit thing goes, my assumption was that bit 31 of what would 
have been the 24/32-bit PSW address would have been used to indicate AMODE, 
instead of bit 0 as in the case of the XA 24/31-bit PSW.  That being the case, 
bit 0 of the z/Architecture PSW address would have to be used to indicate AMODE 
instead of bit 63.
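For what it's worth, the bit juggling can be sketched in a few lines of Python. The decode_esa/decode_z names are mine; the first two follow the actual linkage conventions, while decode_hypothetical_32bit is the speculated 32-bit scheme above, not anything real:

```python
# Sketch of how an AMODE flag rides along in an address word.
# decode_esa and decode_z follow the actual BASSM/BSM conventions;
# decode_hypothetical_32bit is the speculated 32-bit scheme, not a
# real architecture.

def decode_esa(word):
    """ESA/390 style: bit 0 (high bit of the 32-bit word) = AMODE 31."""
    amode31 = bool(word & 0x80000000)
    addr = word & 0x7FFFFFFF
    return amode31, addr

def decode_z(dword):
    """z/Architecture style: bit 63 (low bit of the 64-bit address)
    = AMODE 64.  Works because instructions are halfword aligned, so the
    low bit of a real instruction address is always 0."""
    amode64 = bool(dword & 1)
    addr = dword & 0xFFFFFFFFFFFFFFFE
    return amode64, addr

def decode_hypothetical_32bit(word):
    """The speculated 32-bit XA: bit 31 (low bit) = AMODE flag, freeing
    bit 0 to be a genuine address bit."""
    amode32 = bool(word & 1)
    addr = word & 0xFFFFFFFE
    return amode32, addr

assert decode_esa(0x80001000) == (True, 0x00001000)
assert decode_z(0x0000000000001001) == (True, 0x0000000000001000)
```

Either way the trick is the same: park the mode flag in a bit that a halfword-aligned instruction address can never use.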
Keven
On Thu, Sep 3, 2020 at 11:15 PM -0500, "Seymour J Metz" <[email protected]> wrote:
The difference between FLIH and SLIH is basically packaging. In general, a FLIH 
saves context before calling an SLIH. It may also obtain a local lock. AFAIK 
only External and SVC interrupt handlers have the distinction.

32-bit mode is problematic because BXH and BXLE operate on signed operands. 
It would also break the OS convention for marking the end of a parameter list.
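A quick Python illustration of both points (the hex addresses are arbitrary examples, and as_signed32 is just a helper for reinterpreting the bits):

```python
import struct

def as_signed32(u):
    """Reinterpret a 32-bit unsigned value as the signed integer
    that BXH/BXLE would see."""
    return struct.unpack(">i", struct.pack(">I", u))[0]

low  = 0x7FFFF000   # just under 2 GiB
high = 0x80001000   # just over 2 GiB -- legal in a 32-bit address space

# Unsigned, high > low; signed (as BXH/BXLE compare), high goes negative,
# so a loop stepping an index register across 2 GiB misbehaves:
assert high > low
assert as_signed32(high) < as_signed32(low)

# The variable-length parameter list convention flags the last entry by
# setting bit 0 of its address word -- indistinguishable from an
# ordinary 32-bit address above 2 GiB:
last_param_flag = 0x80000000
assert high & last_param_flag   # a real address now looks like "end of list"
```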

I don't see how the decision to go with 31-bit versus 32-bit addressing has any 
relevance to the decision to go with 63- or 64-bit addressing.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


________________________________________
From: IBM Mainframe Assembler List  on behalf of Keven 
Sent: Thursday, September 3, 2020 5:19 PM
To: [email protected]
Subject: Deep cuts
Couple of things...
1) In MVS et al., what characteristics differentiate a First-Level Interrupt 
Handler from a Second-Level Interrupt Handler?  Is the transition from FLIH to 
SLIH determined by, say, a state change due to LPSW/E?  Maybe transitioning 
from first-level to second-level is indicative of the interrupted 
dispatchable unit’s state data having been copied to its own control blocks 
(TCB/RB, SRB, etc.) from the processor’s (PSA, LCCA/PCCA, etc.)?
2) When IBM S/370 engineers decided that, okay, maybe 16 MiB of addressable 
storage isn’t enough after all, but there’s no way y’all will ever need 4 GiB 
(I mean, c’mon), so we’re making our new XA architecture with a (31-bit) 2 GiB 
address space, I wonder: had they instead made XA a 32-bit system, would that 
mean that z/Architecture would have to be limited to 63 bits in order to 
provide compatibility between (what would have been) 32-bit 390/ESA and 63-bit 
z/Architecture, in a manner analogous to how compatibility between 24-bit S/370 
and 31-bit 370/XA systems was implemented?  Would (24, 32, 63) have been a 
better bit-size expansion sequence than (24, 31, 64)?

Keven
