The catch with offloading cycles to a "cheaper" platform is that we
would really be offloading to a more expensive platform (Intel).  Not
that an Intel box isn't cheap to buy, but the economic reason for
server consolidation is to get away from these "cheap" boxes.

Until a few months ago, I was under the impression that putting
CPU-type loads on the mainframe wasn't economical compared to putting
the same loads on Intel or Sun platforms.

But then I started hearing about other sites, including one that had 7
Linux images running in LPAR mode on 9 processors.  Apparently it was
economically justifiable.  I still don't understand how, but it did
open my eyes to "run the numbers" instead of throwing the idea out
based on an outdated rule of thumb.

I'm sure there is some room for cheap Intel-based MIPS, but in today's
world, I would have to see it to believe it.  Next year is a different
story.

Tom Duerbusch
THD Consulting

>>> [EMAIL PROTECTED] 06/18 12:17 PM >>>
Greetings.

As a newbie to the mainframe environment (my background is mostly
Linux), I have grown enthusiastic about this superior hardware I knew
very little about. Nevertheless, I have always found it a shame that
number-crunching workloads are not a good match for the mainframe.

Grid computing is interesting as a way to combine the cheap computing
power of Intel boxes, on the one hand, with the robustness of the
mainframe, on the other, opening new avenues for integrating various
resources, each with its own strengths. If I got it right, though,
applications need to be grid-aware to use it effectively, which makes
it a no-no as a short-term solution.

And then I had this idea when I was reading about openMosix.
For those of you who haven't heard, check the homepage at
http://openmosix.sourceforge.net/.
In a nutshell, openMosix is a single-system-image clustering system
implemented as a Linux kernel extension plus a set of userland tools.
You connect multiple IA-32 boxes running the patched kernel and get a
cheap, near-linearly scalable supercomputer. Users treat it as a single
machine, since processes are migrated to idle(r) nodes transparently.
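To illustrate that transparency claim: under openMosix an ordinary,
unmodified CPU-bound process is a migration candidate without any
cluster-specific API calls. A plain program like this sketch (a naive
prime counter, purely illustrative and not from the openMosix docs)
would simply be moved to an idler node by the kernel:

```python
# A deliberately CPU-heavy, entirely ordinary program.  Under openMosix
# such an unmodified process can be migrated transparently -- note that
# nothing here refers to the cluster at all.

def count_primes(limit):
    """Count primes below `limit` by naive trial division (CPU-bound)."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    print(count_primes(10000))  # prints 1229
```

The point, in contrast to grid middleware, is that the placement
decision lives in the kernel, not in the application.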

So what if we could patch a zLinux image's kernel and make it one of
the nodes of such a cluster? If possible, we would have a way to
cleanly offload CPU-intensive jobs from Linux on the mainframe to
cheaper external engines.

This would get cheap horsepower to the mainframe, transparently, and
would still allow for centralized management (filesystems could still
reside on DASD). I can think of at least one disadvantage: if an
external node breaks, any processes it is running at that moment will
be lost, which wouldn't happen on a zLinux image, as far as I know.

Any mainframe and VM gurus care to comment? Is there any reason why
this can't be done? Do we lose any more reliability features? Am I
missing something that makes it totally impractical?

Thanks for your patience :)

-- jmc
