When you use Image in the OpenDX UI, the server receives the X window ID of
the image window, which is created by the UI. It then either writes pixels
into it, if "software" rendering is used, or overlays it with (yet another)
X window and uses GLX to render into that if in "hardware" rendering mode.
All of this takes place in Display; the window ID is encoded in the "where"
parameter, normally hidden and automatically set by the UI. An X window
can also be created by SuperviseWindow, running in the server process,
which in OpenDX is referred to as the "exec". SuperviseWindow produces a
"where" parameter that can be sent to Display. It also can receive a
"where" parameter, in which case it creates a window that is a child of the
window indicated by the input "where" parameter. If Display does not
receive a "where" parameter (for example, in script mode when there is no
UI process), it creates its own window.
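To make the "where" plumbing concrete, here is a rough sketch in the DX scripting language of the SuperviseWindow-to-Display wiring described above. The exact parameter lists are from memory, so treat the names and signatures as approximate rather than authoritative:

```
// Hypothetical script-mode fragment: let the exec create and supervise
// its own X window, then pass its "where" output to Display so the
// rendered image lands in that window.
where, size, events = SuperviseWindow("my window", size=[640, 480]);
camera = AutoCamera(object);
image = Render(object, camera);
Display(image, camera, where);
```

If you left the "where" argument off Display entirely (as in plain script mode with no UI), Display would create its own window, as noted above.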
All of this points up one of the difficulties with reproducing OpenDX in
the native-Windows world. In X, one process can access windows owned by
another; in Windows, this isn't the case. This wrecks the OpenDX
client-server model. Instead, the exec has to be part of the process that
creates the window. In my playpen version of the native-Windows executive,
I run the exec as a set of threads. This allowed me to create a UI in a
wrapper process (like an ActiveX control) and then have the OpenDX threads
render to it. Unfortunately, that tweaked a *lot* of code, and is based
on a very old snapshot of the code, so I can't bill it as much more than an
experiment. I believe there is work under way to create a "real"
native-Windows exec. I've lent some assistance. I'm not sure what plans
there are for a native-Windows UI.
In another long-gone experiment, I dovetailed OpenDX and BuilderXcessory, a
fairly standard X GUI builder. It was pretty cool; you could create a UI
interactively, then put BX into "test" mode (or is it "play"?) to run the
UI, and if you set it up right, OpenDX would be active. There was a
special widget in there to manage OpenDX, and that gave you the ability to
pop up the OpenDX UI so you could use its VPE to create and modify the net.
You could place X widgets in BX, then use C code in callbacks to cause them
to talk to the OpenDX process. The trick was to separate the
net-building from the GUI-building (and running). That saves a tremendous
amount of work, since the VPE is very complex and tightly intertwined with
the run-time. Originally, the idea was that the net itself was meant to be
part of the interactive UI; just like you might change a slider to
reparameterize the visualization, you should be able to pop up the VPE and
rewire it. That's probably not really the common case; most people want to
build apps, but we didn't know that at the time.

I would love to see a UI based on a cross-platform UI builder like you
mention. I'd be happy to discuss it and help.
Greg
"John R. Cary"
<[EMAIL PROTECTED]> To:
[email protected]
Sent by: cc:
[EMAIL PROTECTED] Subject: [opendx-dev] OpenDX
architecture
son.ibm.com
01/19/2003 12:26 PM
Please respond to
opendx-dev
I would like to understand the OpenDX architecture
better. I understand that it is client-server. I
understand that the client generates the network
file that is then sent to the server over port 1900.
But what happens when the server is asked to render
something, e.g., through Image? Does it send the image
back through port 1900 to the client, and then the
client puts it up? Or does it put up an X window on
the client and render into that window? Or something
else?
Thanks....John Cary
--
John R. Cary
Professor, Dept. of Physics, University of Colorado, Boulder, CO 80309-0390
[EMAIL PROTECTED]
ph. (303) 492-1489 fax (303) 492-0642 cell (720) 839-5997