Greetings.

As a newbie to the mainframe environment (my background is mostly Linux), I have grown 
enthusiastic about this superior hardware that I previously knew very little about. Still, 
I have always found it a shame that number-crunching workloads are not a good match for 
the mainframe.

Grid computing is interesting as a way to combine the cheap computing power of Intel 
boxes, on the one hand, with the robustness of the mainframe, on the other, opening new 
avenues for integrating and using various resources according to their own strengths. 
If I got it right, though, applications need to be grid-aware to use it effectively, 
which makes it a no-no as a short-term solution.

And then I had this idea while reading about openMosix.
For those of you who haven't heard of it, the homepage is at 
http://openmosix.sourceforge.net/.
In a nutshell, openMosix is a single-system-image clustering system implemented as a 
Linux kernel extension plus a set of userland tools. You connect multiple IA-32 boxes 
running the patched kernel and get a cheap supercomputer that scales close to linearly. 
Users treat it like a single machine, since processes are migrated transparently to 
idle (or less loaded) nodes.
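To make that concrete, here is roughly what joining a node looks like. This is from my 
memory of the openMosix HOWTO, so treat the exact file format and flags as assumptions 
and check the docs: you list the cluster members in /etc/openmosix.map and tell the 
kernel to join.

    # /etc/openmosix.map -- node-number  IP-address  range-size
    1  192.168.10.1  1
    2  192.168.10.2  1
    3  192.168.10.3  1

    # join the cluster described by the map (run on each node)
    setpe -w -f /etc/openmosix.map

    # curses-based monitor showing the load on every node
    mosmon

After that, ordinary CPU-bound processes are candidates for migration with no changes 
to the applications themselves, which is exactly what makes this more attractive to me 
than the grid approach.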

So what if we could patch a zLinux image's kernel and make it a node in one of these 
clusters? If that were possible, we would have a clean way to offload CPU-intensive 
jobs from Linux on the mainframe to cheaper external engines.
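Purely hypothetically, assuming such a patched zLinux kernel existed and the usual 
openMosix userland still worked on it, the mainframe side might look no different from 
any other node (the job name below is obviously made up):

    # started on the zLinux image like any normal process...
    ./monte-carlo-run < input.dat > results.dat &

    # ...and, once it turns CPU-bound, the kernel can migrate it to a
    # less loaded Intel node; the openMosix-aware mps/mtop tools should
    # show which node each process ended up on
    mps

As I understand the home-node model, file I/O for a migrated process would still go 
back through the zLinux image, so only the CPU cycles would come from the external box.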

This would bring cheap horsepower to the mainframe, transparently, and would still allow 
for centralized management (filesystems could still reside on DASD). I can think of at 
least one disadvantage: if an external node breaks, any processes it is running at that 
moment will be lost, which wouldn't happen on a zLinux image, as far as I know.

Any mainframe and VM gurus care to comment? Is there any reason why this can't be 
done? Do we lose any other reliability features? Am I missing something that makes it 
totally impractical?

Thanks for your patience :)

-- jmc
