> But does it actually work with Linux? Both Linux systems are caching
> data off the disk. I've not heard that _Linux_ supports it.

Take another look at my note.

Linux is *not* aware of it. CSE does not solve the caching problem, nor
does it attempt to do so. That's why I mentioned GFS, which includes an
IP-based signaling protocol to hand R/W access back and forth between
different systems. Right now, GFS does its own thing, mostly with
SCSI-based disks, but some IP-based stuff -- if we could borrow the IP
signaling from GFS and let CSE do the actual work of moving the data,
that would be a very interesting implementation, and it would be device
neutral.

What CSE *does* do is let you build a solution in which a set of guests
using another clustering technique or file system can split the cluster
across possibly physically distinct hardware, ensuring that if a
physical node fails, the service can be brought up on another node in
the complex w/o having to recable things or do a lot of reconfiguration.
You autolog the userid on another machine in the complex and everything
comes up normally. You also gain syntax extensions to the normal VM
commands to execute queries and commands on specific nodes (e.g. AUTOLOG
FOO ON SNAVM4 with CSE active starts user FOO on node SNAVM4).
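For concreteness, here's a rough sketch of what that looks like from a
privileged userid (SNAVM4 is just the node name from my example above,
and the AT routing syntax is from memory, so check the CP reference
before relying on it):

   AUTOLOG FOO ON SNAVM4          start user FOO on node SNAVM4
   AT SNAVM4 CMD QUERY NAMES      run a query against a specific node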

That's why I said it's a *start* for building a HA solution. There's
more to be done, but CSE does a lot of fairly nasty 390-specific stuff
with disk sharing that Linux doesn't know how to do (or need to know how
to do, IMHO).

If necessary, Linux can interact with this using 'hcp', just as it does
with other CP system services.
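A quick sketch from the Linux side, using the hcp tool from the cpint
package (the AUTOLOG line assumes the CSE syntax above and a guest
userid with enough privilege to issue it):

   # hcp query userid              shows "userid AT nodeid"
   # hcp autolog foo on snavm4     start FOO on node SNAVM4 from Linux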

-- db
