And here's the 2nd part of the Jon/Chris magnum-opus...

Jon >> 6. You said "IBM's C++ running APF authorized under LE in a z/OS STC". My design very rarely runs authorized and could easily
Jon >> be written in any language. In your z/OS STC design, did you run authorized regularly? Init for SSI, PC routines, one CB in CSA,
Jon >> system exits and start tasks. PC routines do the AR mode code. No SRB. No updates to CSA. What are you running that runs
Jon >> authorized on a regular basis in your STC?
The new "product" is a data server that does z/OS job monitoring on behalf of an enterprise workload management product hosted on Linux/Windows. The Linux "Master" (Java) manages the schedule and communicates (via TCP/IP) with a bunch of (Java) agent/gateways that perform job management functions on target systems, including z/OS. The Master asks a Java agent to monitor certain jobs. The agent asks the data servers on each z/OS image (also via TCP/IP) to monitor those jobs. The data server listens on the IEFU84 SMF exit. When it sees an SMF type 30 record for an interesting job, it copies the data over to the data server, which sends a message (again via TCP/IP) to the Agent and thence to the Master.

I designed and wrote the whole data server on my own. We had a 2-man team and the other guy did all the Java Agent work for z/OS. Other than needing to support the same external interface as the previous product, I had complete design freedom.

The STC is an ordinary z/OS started task, but the job step program is a C++ main linked AC=1. The STC therefore runs APF authorized, but the C++ code runs entirely in key 8 and problem state under LE with POSIX=ON. The C++ code is heavily multi-threaded, with a real TCB per thread, but that's all hidden inside the pthreads library and the wrapper class I mentioned above (rough sketch of the idea at the end of this note). When the C++ code needs to interact with low-level z/OS functions, it calls ordinary HLASM functions (also under LE) which do whatever they need to do. I did not use any inline assembler in the C++ code.

This is really a piece of systems software. It has ECSA control blocks, stacking PCs, RESMGRs, Dynamic Exits, cross-memory AR-mode code, etc. Most of the HLASM code runs on my own home-grown supervisor-state runtime because LE can't handle that.

The entire server runs in 31-bit mode with standard linkage (yes, really) because I didn't need 64-bit and I didn't want to deal with the <ahem> "awkwardness" of 64-bit C++. It's built to be blazing fast, and it is. It's all reentrant and exploits knowledge of z/Architecture cache behavior, memory consistency rules, etc. to ensure data never ends up in the I-cache and to eliminate contention between worker threads. The worker threads never even touch the same cache line (second sketch at the end). It's safe to say this isn't a typical C++ application.

Jon >> 8. You said "the C/C++ compiler and libraries exploit the living shit out of the z architecture instruction set".
Jon >> It was wrong for me to say 5% because it's obviously too low. What unexpected instructions did you see?
Jon >> I suspect the new instructions you saw were added specifically for the C/C++ compiler (e.g. string instructions) which makes sense.

The business case for "new" instructions is (and probably always was) compiler exploitation. Even the 31-bit non-XPLINK generated code would look pretty alien to most HLASM programmers. You can see what it's doing, but between the obscurities of C++ classes and the instructions the compiler emits, I mostly just went "OK, I'm not man enough to get deep into those weeds". Fortunately, I never really had any problems other than gritting my teeth over some of their compiler/library implementation choices.

BTW> I used the z/XDC debugger with c/XDC for the C++ code. I'm still a Dave Cole fanboy. I couldn't have done it without XDC.

CC
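PS: For anyone wondering what "hidden inside the pthreads library and the wrapper class" means in practice, here is a rough sketch of the general shape. This is not the product's actual class; the names and details are made up for illustration. The point is simply that on z/OS each pthread is backed by a real TCB, and the rest of the C++ code only ever sees a small wrapper:

    #include <pthread.h>

    // Minimal worker-thread wrapper (illustrative only). On z/OS, each
    // pthread created here gets its own TCB under the covers, so "one
    // thread per worker" really does mean "one TCB per worker".
    class Thread {
    public:
        Thread() : started_(false) {}
        virtual ~Thread() { join(); }

        // Start the underlying pthread; false if pthread_create() fails.
        bool start() {
            if (started_) return false;
            started_ = (pthread_create(&tid_, 0, &Thread::entry, this) == 0);
            return started_;
        }

        // Wait for the thread (and its TCB) to terminate.
        void join() {
            if (started_) {
                pthread_join(tid_, 0);
                started_ = false;
            }
        }

    protected:
        virtual void run() = 0;   // subclasses supply the worker's logic

    private:
        static void* entry(void* self) {
            static_cast<Thread*>(self)->run();
            return 0;
        }

        pthread_t tid_;
        bool      started_;
    };

A worker is then just a subclass that overrides run(), and nothing above this layer ever touches pthread_t or anything else TCB-shaped.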
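PPS: The "worker threads never touch the same cache line" point is just careful data layout. Again, this is not the server's real layout (the field names are invented), but the idea looks something like this, given that a z/Architecture cache line is 256 bytes:

    #include <stdlib.h>

    // Illustrative per-worker data layout. Each worker's hot fields are
    // padded out to a full 256-byte cache line and the array is placed on
    // a 256-byte boundary by hand, so no two workers ever write the same
    // line and there is no false sharing between CPs.
    const unsigned int CACHE_LINE = 256;

    struct WorkerSlot {
        unsigned long jobsSeen;     // made-up per-worker counters
        unsigned long bytesSent;
        char pad[CACHE_LINE - 2 * sizeof(unsigned long)];
    };

    // Carve a line-aligned array of slots out of an over-sized heap block.
    // The caller keeps 'raw' so it can free() the block at shutdown.
    // (Casting a pointer to unsigned long is fine here because the server
    // is 31-bit; pointers fit comfortably.)
    WorkerSlot* makeSlots(unsigned int nWorkers, void** raw) {
        *raw = malloc(nWorkers * sizeof(WorkerSlot) + CACHE_LINE);
        if (*raw == 0) return 0;
        unsigned long p = (unsigned long)*raw;
        p = (p + CACHE_LINE - 1) & ~(unsigned long)(CACHE_LINE - 1);
        return (WorkerSlot*)p;
    }

Whether you do the alignment by hand like this or with a compiler extension doesn't matter much; the point is that each worker owns its cache lines outright, so the workers never ping-pong lines between processors.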
