On Wednesday, 03/28/2007 at 04:17 AST, David Boyes <[EMAIL PROTECTED]> wrote:
> I think you're conflating the CP interface and how the data passed
> through the interface is generated/consumed here. I'm concerned with
> the CP interface here, which is currently a tedious, complex,
> hard-to-maintain piece of code that requires an enormous amount of
> bucketing around in the CP source to understand and maintain. I'm
> arguing that it makes no sense to have multiple people invent that
> wheel separately; a clearly defined interface point that can be used
> by multiple tools w/o modification benefits everyone, and directly
> impacts the development cost of your tools and your environment in a
> positive manner -- crudely put, it makes it cheaper to support the
> really valuable bits: the management tools, which is where the real
> money is.

I should point out that the *RPI service, while defined by CP, is
really internal to the ACI. The internal calls CP makes to the ACI
don't really care what happens to the data; the ACI doesn't even have
to communicate with the *RPI system service. In fact, without changes
to the ACI, the *RPI service is moribund: nothing will ever reach it.
Further, CP will immediately sever an attempt to connect to *RPI.
Why? Because vanilla CP doesn't know how to talk to it; it just
provides a data pipe. In recognition of this fact, all of the CP calls
to the vanilla ACI will answer "DEFER".

What all this means is that the design of the ACI is wide open. If you
want to provide a set of ACI modules that work with your Linux guest,
that's great. The data flows across *RPI do not necessarily have to
match the ACI flows into and out of HCPRPIRA; it is the latter that
are architected, not the former. You can even flow ASCII. :-)

It is true that the Big Four ESMs all use a set of core services
surrounding *RPI. That system service exists because CP ties it
(outside of the ACI) to the ACI. The Big Four each have a set of
CP-resident services, adding local caches, private notifications
(imagine how SETRACF INACTIVE works), whatever their hearts desire.
These services provide value, particularly cache management. So the
server that connects to *RPI is just one half of the equation; the
CP-resident code is the gateway/translator between the proprietary
ESM and the rest of CP. RACF, for example, can go into failsoft mode,
where it will happily annoy the operator with "Allow this?" messages
to handle situations where the server is down.

To have a common CP-resident ESM handler is certainly possible, but
it would change the CP ACI architecture, moving the interface to
*RPI. That is, IMO, too restrictive and prevents useful optimizations.

> It strikes me that for IBM as the OS developer it's counterproductive
> to have everyone re-implement the same function in several
> incompatible ways.

If we were starting anew, I might agree, but we're coming up on 30
years and it is what it is. The money was spent. The milk was spilt.
And the water is under the bridge. The ante for a z/VM ESM includes a
CP piece and a server.

> That's like telling someone that they have to make their own garden
> hoses because the iron casting industry can't converge on a standard
> fitting. It's time to start making standard fittings for these CP
> functions, and this would be one place that has needed a standard for
> several decades now.

It has a standard, David. It always has. You just don't like the
standard.
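To make the ACCEPT/REJECT/DEFER contract concrete, here is a minimal
sketch in C. It is purely illustrative: the real ACI is CP assembler,
not C, and every name below except DEFER itself is invented for this
note. It models the shape of the contract, not CP's actual code.

    /* Illustrative model of the ACI contract described above.
     * All names are hypothetical. A vanilla system answers DEFER
     * to every question, so CP falls back on its own native
     * checks and no traffic ever heads toward *RPI. */
    #include <stdio.h>

    enum aci_answer { ACI_ACCEPT, ACI_REJECT, ACI_DEFER };

    struct aci_request {
        const char *userid;    /* who is asking            */
        const char *resource;  /* what they want to touch  */
        const char *event;     /* e.g. LOGON, LINK, TAG    */
    };

    /* Vanilla ACI: defer everything to CP's own authorization. */
    enum aci_answer aci_authorize(const struct aci_request *req)
    {
        (void)req;
        return ACI_DEFER;
    }

    /* An ESM-supplied module might instead consult a CP-resident
     * cache first and only ship misses to its server over *RPI. */
    static enum aci_answer cache_lookup(const struct aci_request *req)
    {
        (void)req;
        return ACI_DEFER;      /* stub: nothing cached */
    }

    static enum aci_answer ask_server_via_rpi(const struct aci_request *req)
    {
        (void)req;
        return ACI_ACCEPT;     /* stub: server said yes */
    }

    enum aci_answer esm_authorize(const struct aci_request *req)
    {
        enum aci_answer cached = cache_lookup(req);
        return (cached != ACI_DEFER) ? cached : ask_server_via_rpi(req);
    }

    int main(void)
    {
        struct aci_request r = { "BOYES", "VMLINUX 191", "LINK" };
        printf("vanilla: %d  esm: %d\n",
               aci_authorize(&r), esm_authorize(&r));
        return 0;
    }

The point of the sketch is where the decision point lives: inside CP,
in front of whatever (if anything) flows to a server behind it. That
back half is the module writer's business.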
> This is particularly visible as the number of people with CMS
> experience decreases -- the SES incantations necessary to get an ESM
> functioning in a maintainable manner are frankly beyond the VM
> newbie. Even with services hand-holding -- which costs money to
> administer and provide -- this is a knot that needs cutting.

You moved the cheese and are talking now about usability, not
architecture. It is *not*, in fact, beyond the newbie: we have newbies
who are implementing z/VM for the first time and successfully adding
ESMs of their choice on those systems. But I wholeheartedly agree that
the entire process could be made easier. I know that if I had it all
to do over again, I'd definitely do some of it differently.

> If it allows the existing CMS clients to connect to Linux in place of
> VM TCPIP, then it's what I want.

You can already do that for some of the clients/servers. The C-based
programs all call LE sockets, and LE calls the BPX1SOC CSL routine.
Feel free to write your own replacement that implements the transport
of your choice (see the first sketch below). As far as the Pascal
programs go, I don't think the VMCF interface will survive into the
22nd century. ;-)

> OK. Better yet, just replace the whole SFS lashup with Linux-based
> Lustre or GPFS file stores and use the CMS NFS client to work with
> that. It might be interesting to see if the NFS client code could be
> adapted to use IUCV rather than TCP...hmm.

Remember that TCP *is* IUCV with metadata. Add TYPE=2WAY to AF_IUCV in
Linux and you can implement your own VM TCP/IP stack/gateway (see the
second sketch below).

Swim *downstream*. It's easier. Upstream the grizzly bears await... :-)
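On the "write your own replacement" point, a shape sketch only, in C
for readability. It deliberately does not use the real BPX1SOC
parameter list (check the CMS callable services reference for that);
it just illustrates the layering described above -- application,
sockets API, pluggable transport -- with every name invented.

    /* Shape sketch, not real CMS/LE code. Swap the one transport
     * function and every sockets caller above it rides the new
     * transport unchanged. All names are hypothetical. */
    #include <stdio.h>

    static int transport_open(const char *peer)
    {
        /* A replacement could open an IUCV path to 'peer' here
         * instead of a TCP connection. Stubbed for illustration. */
        printf("open transport to %s\n", peer);
        return 0;
    }

    /* The unchanged face the C-based programs see. */
    int app_socket(const char *peer)
    {
        return transport_open(peer);
    }

    int main(void)
    {
        return app_socket("LNXGW") < 0;   /* hypothetical peer */
    }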
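And to make the "TCP is IUCV with metadata" point concrete, here is a
minimal AF_IUCV client sketch for the Linux side, assuming the af_iucv
support that recently went into the kernel. The AF_IUCV value of 32
(which may not be in your glibc headers yet) and the sockaddr layout
are copied from my reading of the kernel source; the peer guest user
ID and application name are made up. Check all of them on your system.

    /* Minimal AF_IUCV client sketch -- assumptions flagged inline. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>

    #ifndef AF_IUCV
    #define AF_IUCV 32                   /* not in older glibc headers */
    #endif

    struct sockaddr_iucv {               /* mirrors the kernel layout */
        sa_family_t    siucv_family;
        unsigned short siucv_port;       /* reserved, zero            */
        unsigned int   siucv_addr;       /* reserved, zero            */
        char           siucv_nodeid[8];  /* reserved, blanks          */
        char           siucv_user_id[8]; /* target guest, blank-padded */
        char           siucv_name[8];    /* app name, blank-padded    */
    };

    int main(void)
    {
        struct sockaddr_iucv peer;
        int s = socket(AF_IUCV, SOCK_STREAM, 0);
        if (s < 0) { perror("socket"); return 1; }

        memset(&peer, 0, sizeof(peer));
        peer.siucv_family = AF_IUCV;
        memcpy(peer.siucv_nodeid,  "        ", 8);
        memcpy(peer.siucv_user_id, "LNXGW   ", 8); /* hypothetical guest */
        memcpy(peer.siucv_name,    "NFSGATE ", 8); /* hypothetical name  */

        if (connect(s, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
            perror("connect");
            return 1;
        }
        write(s, "hello\n", 6);  /* beyond this point it's just a pipe */
        close(s);
        return 0;
    }

Once connected, both ends see a plain byte stream over the IUCV path.
That's the Linux half of a gateway; the CMS side would still need its
own plumbing to ride it.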
Alan Altmark
z/VM Development
IBM Endicott

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit http://www.marist.edu/htbin/wlvindex?LINUX-390