Hi, when I start OpenDX on a shared-memory machine (e.g. SGI Origin, IBM SP3, Linux-SMP) I see in the startup messages that multiple worker processes are started by default (typically two; this can be changed using the '-processors' option). I'd like to make use of those when running a given DX network. My question is: how do I do that?
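For reference, this is roughly how I'm invoking it (the file name is just a placeholder; '-script' is the mode I'm asking about below):

```shell
# Start OpenDX with two worker processes (the default on our machines)
# and run a saved network in script mode. "mynet.net" is a placeholder.
dx -script -processors 2 mynet.net
```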
I realize that many of the DX modules (which ones?) are parallelized already, and that I can use the Partition module to split a single dataset into chunks for parallel processing. I tried this with a simple example of calculating isosurfaces from a big dataset, and it worked fine when running it from within the VPE. However, when I then tried to run the saved *.net file directly from the command line in script mode, it failed with this error message:

    1: worker here [7364]
    0: worker here [7365]
    child process 1 (7364) exited, status = 0
    -1: cleaning up and exiting
    parent exiting

Is there something special I have to do when running a DX network in script mode on a parallel machine?

Apart from the intra-module parallelism, I'd like to know whether OpenDX is also able to run independent data paths in a given network in parallel. My example: I have written an import module to read a remote HDF5 file. In my visualization network I need to import data from three different HDF5 files, so I'd like to run my three import modules independently and in parallel in order to hide the latencies of the remote file access. I tried to use execution groups but didn't get very far with this. Specifically, I didn't know which hosts (other than 'localhost') I should assign to my execution groups. Are execution groups meant to exploit an SMP machine?

Thanks in advance for any hints and comments!

Ciao,
  Thomas.

--
=========================================================================
 Thomas Radke
 Max-Planck-Institute for Gravitational Physics, Albert-Einstein-Institute
 Am Muehlenberg 1, 14476 Golm, Germany
 fon +49 331 567-7329    fax +49 331 567-7298
 http://www.aei.mpg.de/~tradke
=========================================================================
