S/390: The Linux Dream Machine

By: Scott Courtney
Wednesday, February 23, 2000 09:19:48 AM EST
URL: http://www.linuxplanet.com/linuxplanet/reports/1532/1/
Linux Everywhere: More than a Slogan
Let's play a word association game, shall we? The first word is
"mainframe."
Many Linux enthusiasts were born and bred in an era of PCs that
are already fast and getting faster, cheap and getting cheaper. When you can
buy a two-way Alpha box for five thousand dollars or a high-end Intel for even
less, even administrators of large-scale servers are reluctant to spend seven
figures on Big Iron from Big Blue. Besides, there's that whole legacy thing
going on--you know, COBOL and RPG and punch cards. So when you saw that word
"mainframe," there's a good chance that some of the words that came
to mind were: expensive, cumbersome, proprietary, obsolete, DINOSAUR.
You would be wrong on the first four words, but right about
that last one: dinosaur. You were probably thinking of a lumbering,
walnut-brained lummox like Brontosaurus, though. Wrong dinosaur.
Instead, try Tyrannosaurus Rex. Mainframes today are like the dinosaurs
in Jurassic Park: they're back, and they're pissed!
Wait a minute--T. Rex is not quite right, either.
Let's say, a herd of forty thousand raptors. Later in this article, you will
understand why.
Taking A New Look at Big Iron
I first started working with IBM's mainframes in college in the
early 1980s. We had two systems then: MVS for running batch jobs from punch
cards and VM for running interactive programs on a terminal. I hated punch
cards (everybody did) but I was impressed by what you could do with MVS. The
Job Control Language (JCL), though extremely cryptic, was capable of magical
things if one only troubled to learn its secrets. I did so, and I think some of
the same things I like about Linux are things that I liked about MVS. I care
less about how user-friendly something is than about having tools that are
versatile and efficient. MVS is still with us today, although now it's called
OS/390 by IBM. Punch cards have been retired in favor of sequential files, but
the basics of JCL are still there. OS/390 is at the core of most of the world's
major corporations, and it has evolved more than I would ever have imagined.
I only got to play with VM, the "other" mainframe
system, my last two years of college. VM stands for "Virtual
Machine," as you might imagine. When I used it in college, I noticed that
each user had the illusion of his or her own machine. Your virtual machine had
the hardware of a mini-mainframe: card reader, card punch, console, printer, and
disk drives. It was cute, I thought, kind of a nifty little way to make the
mainframe approachable to new users.
It wasn't until a few years ago, working for a big company that
has both VM and OS/390 systems, that I began to understand that VM is much more
than a user-interface model. You see, when IBM says "Virtual Machine"
they really mean it--all the way down to the bare metal! It turns out that
IBM's System/390, or S/390, hardware (not to be confused with the OS/390
operating system) has specialized circuits (called PR/SM and SIE
in IBM parlance) in the CPU to allow it to virtualize itself. If you're an
assembly-language programmer in the Intel world, you may be familiar with the
"Virtual 86" mode that is present in the 386 and later processors.
V86 mode lets an Intel processor emulate the venerable 8086/88 chips that were
used in the days of DOS and Windows 2.x. The S/390 doesn't just emulate a
subset of itself, though, but rather the entire instruction set of the
latest CPU.
That word emulate really doesn't apply, either. The
S/390 actually virtualizes itself in hardware, meaning that applications run at
full native speed except for a very, very few privileged instructions that are
actually emulated by the operating system. Everything is virtualized, right
down to the hardware I/O addresses and memory map. It's like having NT's HAL
(Hardware Abstraction Layer) actually implemented in hardware. The VM operating
system takes full advantage of this by literally giving each user session its
own virtual machine, which has practically the same capabilities as the
hardware itself. So when a user logs in and thinks they have their own
computer, they practically do. The magic of virtualization is wrought by a
program called CP, or Control Program, which underlies the operating system to
manage the virtual hardware.
The virtual machines in VM are so good, in fact, that you can
literally boot and run VM as a "guest" operating system underneath
itself. Then you can boot another VM inside that one, and so on. They don't
even have to be the same version or even the same operating system. I know a
mainframe wizard who has booted a new (test) version of VM underneath a user
session on his production VM system, then loaded a complete OS/390 into
that virtual machine. When I asked him why he did it, he said,
"Because I can!" With VM, though, this isn't just a stunt, but
an everyday occurrence, used for testing new operating system versions and
patches without stopping production even for a minute.
New Life for Old Machines
It turns out that, with the "PR/SM" feature mentioned
previously, the S/390 hardware can actually divide itself into units called
Logical Partitions, or LPARs. Say you've got an eight-way S/390 machine. You
want to bring up a test environment for a new version of the operating system,
to be sure it's compatible before you commit to migrating thousands of applications. You can dedicate two of those eight CPUs, for example, to make a test
machine that runs completely separately from the production machine except for
sharing power connections. The processors don't even have to be allocated in
whole units; you can prioritize and share them quite flexibly, depending on the
hardware configuration. Say you have a hardware failure (rare, but they do
happen) in one CPU. The failed CPU will generally be switched out and the
others continue to run without interruption. Maybe you have a business that
needs some applications on VM and some on OS/390. You can run both of
these on the same physical hardware, at the same time. As a matter of fact, you
could either use LPARs or just run OS/390 as a "guest" under VM,
depending on your needs. This old "dinosaur" is looking pretty
adaptable these days.
At the beginning of this article, I listed the words
expensive, cumbersome, proprietary, and obsolete and said they
don't describe big iron any more. Let's take a look at these, in order.
Expensive is a relative term, depending on what you are
trying to do. I'm not sure where the starting price is for an S/390, but by the
time you get a fully-configured system with disks, backup media, VM software,
service and support, and a secured room to house it, you're definitely talking
seven figures. That's a lot more than a five-thousand-dollar PC server from
Compaq or Dell, but price isn't the same as cost. It is not at
all unusual for one mainframe to support five thousand interactive users, or
more. That's simultaneously-connected clients, not just login IDs. Take two
million dollars (enough to buy a beast of a machine, I'm told) and divide it by
five thousand users, and the cost of $400 per user doesn't look so bad any
more. Add to that the fact that there is a big difference in the quality
and reliability of the hardware itself. A good PC server may only have one
hardware failure every thousand days. But if you have 100 of them, you will
have a failure somewhere every ten days, on average. A mainframe builder can afford to
spend more money on MIL-SPEC temperature ratings, higher-quality circuit
boards, careful cooling design, better cable construction, and top-grade
connectors. You get what you pay for.
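For the skeptical, the arithmetic above is easy to check at any shell prompt. The two-million-dollar price tag and the thousand-day failure interval are the same rough figures used in the text, not quoted vendor numbers:

```shell
# Per-user cost: a $2,000,000 machine shared by 5,000 concurrent users
echo $(( 2000000 / 5000 ))    # dollars per user: 400

# Fleet failure rate: 100 servers, each averaging one failure per 1,000 days
echo $(( 1000 / 100 ))        # days between failures somewhere in the fleet: 10
```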
There are also some low-end options in the mainframe world
these days, including used equipment, smaller System/390 boxes, and even a
single-board 390 CPU that plugs into a PCI slot. There is also competition from
third-party hardware vendors such as Amdahl.
Cumbersome is probably a fair description of some of the
really old software in the mainframe world. There are still FORTRAN programs
from the 1970s running production systems today, and literally billions of
lines of COBOL code still crunching along. Yet there are new tools on the
mainframe coexisting right alongside the old ones. IBM supplies Java runtime
and development environments as standard with their mainframes now. A wide
variety of other languages and tools, including C and C++ compilers, are
available from IBM and from third-party software companies. COBOL isn't the
same old language from the punch-card days, either. Extensions have allowed it
to keep pace with modern object-oriented languages and graphical user
interfaces. COBOL may not be the most intuitive language for beginners, but
neither are C++ and awk from the Linux environment. Yet all of these are
popular because they do what they were designed to do, and they do it well. Why
change?
One of the reasons that those "cumbersome" old
programs are still around is that mainframe programmers--like UNIX
programmers--are unwilling to throw away something that works and works well.
Advocates of Windows often point to Linux and decry it as nothing but a clone
of thirty-year-old engineering from UNIX. Most Linux users would point out that
this is intentional and that UNIX has hung around for thirty years
because it works and because its design has adapted to changing needs.
We in the Linux community point to Windows as an operating system that is so
immature its developers have to rewrite all their code every couple of years.
The mainframe takes the same philosophy as Linux--if it ain't broke, don't
fix it. The two environments have in common the fact that they place function
and reliability well ahead of form and glitz.
For years, IBM was criticized (rightly, in my opinion) for
their mainframes being proprietary systems. If you wanted to transfer a
file between a mainframe and anything else, it took a costly adapter card and
special software in the other system because the mainframe couldn't support
TCP/IP. All that has changed, now. IBM has caught the open systems fever in
a big way, and their mainframe systems now play nicely with the other children.
Both VM and OS/390 operating systems now support TCP/IP over high-speed
Ethernet and Token Ring, and they have a raft of TCP/IP utilities (FTP, telnet,
and so on) built in. DB2, IBM's industrial-strength relational database, can be
accessed using ODBC protocols over TCP/IP. There are several web servers
available for OS/390 and VM as well, and in fact IBM is now fully supporting
use of Apache on their mainframes.
The long and short of it is that, used in large-scale
applications, mainframes are cost-effective, flexible, and more open than ever
before. As companies move from internal client/server applications serving
thousands of users to external web sites serving millions, the mainframe of
today may in fact be the best damned web server you ever saw. Obsolete? No
way--they're just hitting their stride.
But It's Still Not Linux!
If you've read all that came before, you have probably decided
by now that I am some kind of mainframe bigot, that I'm one of those people who
likes to sit in a glass room with a raised tile floor and sneer at the little
toy PCs. Actually, quite the opposite is true. I am a PC maven through and
through. I got started in computers by designing my own 8080A system and I will
never willingly give up the responsiveness, flexibility and control that I
have with a standalone CPU.
Besides, for all their awesome power and scalability, the truth
is that I don't like the command shells that are standard on either VM
or OS/390. VM has a thing called Conversational Monitor System, or CMS, and
OS/390 offers Time Sharing Option, or TSO. I've used both for years, and they
are immensely capable environments. I can't knock either of these for power or
quality, but their command syntax just doesn't feel as elegant to me as does
the UNIX/Linux shell. Call it personal preference, but I really love things
like command substitution and environment variables, and the fact that the
shell is itself a programming language so I can do for and
while loops interactively. CMS and TSO can run REXX programs or
scripts, but with bash or ksh you live inside a script
all the time. VM has a thing called "CMS Pipelines," and it's truly
awesome in what it can do. VM/Pipes is sort of like all of the standard UNIX
text commands (head, tail, cut, grep,
tr, and so on) rolled into one. But in my opinion it's not quite as
seamless as the simple use of the vertical bar in Linux to imply I/O
redirection. And I have never cared for the 3270 terminal's insistence on
processing data screen-by-screen rather than character-by-character. I know why
they do it--better system performance for data entry--but that doesn't mean
I have to like it.
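To make the contrast concrete, here is a small taste of those interactive features, typed straight at a bash prompt exactly as they would appear in a script (the strings and values are invented for illustration):

```shell
# Command substitution: embed one command's output inside another
today=$(date +%Y-%m-%d)
echo "Today is $today"

# A while loop, run interactively -- no separate scripting step needed
count=0
while [ "$count" -lt 3 ]; do
    count=$((count + 1))
done
echo "Looped $count times"

# The vertical bar chains small tools into an ad-hoc pipeline
printf 'alpha\nbeta\ngamma\n' | grep m | tr 'a-z' 'A-Z'    # prints GAMMA
```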
Driving bash feels like being in a snappy European
sports car, while CMS feels like being in a solid, luxurious Lincoln Town Car.
I just happen to like the sports-car feel better; as I said, it's just personal
preference. But ever since I began to hang around with my company's mainframe
gurus, and truly begin to understand just what a powerhouse that big iron
really is, I've secretly longed for the day when I could have the best of both.
I want nothing less than the sheer might of a mainframe combined with the
nimbleness of the Linux shell. I thought it was in my grasp when IBM released
the UNIX System Services feature on VM Open Edition, which includes a
POSIX-compliant environment that runs inside VM. It was pretty good, but it
didn't quite measure up to my expectations for a variety of reasons.
The Genie Appears
About a month ago, our resident mainframe wizard came to see me
and said he needed some help with a Linux problem. I should point out that this
didn't surprise me. Several mainframe mavens that I know are very interested in
Linux and Java and other new technologies. I've found the mainframe crowd to be
much more open to new ideas than a lot of my PC-oriented colleagues who think
the world ends at the edge of their LAN. So when Ralph asked me for Linux help,
I assumed he had installed it on a spare PC to play around.
Wrong. He had installed it on the company mainframe.
He was quite smug about it, too, but in a good way. I couldn't
believe that it was really Linux, so I asked him if it was emulating Linux APIs
or if it was actually a native port. And if it was native, I supposed it was a
very old kernel because there must have been a monumental effort to port it. He
handed me the README printout and I started reading. After a minute or two, I
just looked up at him, and I grinned from ear to ear.
This was no emulation, but the Real McCoy! He booted it while I
watched, and I was amazed to see all the usual kernel and module initialization
messages flash by on a 3270 "green screen" terminal. When the login
prompt appeared, we logged in and were running a full bash shell. I
immediately started poking around the filesystem, looking to see just how Linux
this "Linux" really was, still not quite believing what I was seeing.
It took only a few minutes to convince me that
this was no "lab queen" toy. The kernel level was 2.2.13--not
absolutely the latest, but near enough to be interesting. (I understand that
2.2.15 is out now.) All the standard filesystems were there and (after we
extracted a post-installation tarball) populated. The bash shell works
just as you would expect it to. Instead of a 3270 screen-at-a-time terminal
mode, you can telnet directly to Linux and enjoy the keystroke-level
responsiveness of any other Linux version.
We rebooted it a couple of times, tweaking
startup scripts and adding filesystems. Did I mention that this was all running
underneath the VM environment? Ralph told VM what storage devices were to be
visible to Linux and at what I/O addresses for their virtual controllers (yes,
Linux actually thinks it's driving the hardware!), how many CPUs this virtual
machine should have, and how much memory. So we had a two-way SMP box with 128
meg RAM and three or four disk "drives" with a couple of gig each--more than enough to play on.
We continued fooling with the Linux system over
our next few lunch hours (had to--this was not exactly a core project
sanctioned by the company's business plan). We downloaded and compiled source
tarballs from the Internet, using the standard ./configure,
make, make install sequence. I was amazed at how much Open
Source software just plain worked.
Then we moved on to the important stuff: X11. As
most experienced Linux users know, X11 is the network protocol that underlies
KDE and GNOME and the other Linux user interfaces, as well as CDE and others
from the UNIX world. Because X11 is a network-transparent protocol, your
display doesn't have to be on the same hardware where the application is
running. The terminal, or console, is actually considered a display
server in this context because it provides graphics services to an
application (the client) which needs to interact with a user. The
mainframe itself doesn't have any graphics hardware, so it can't be an X11
server. But I wanted to see if X11 clients could run on the mainframe but
direct their display to a network-connected terminal.
So I fired up Hummingbird eXceed on my laptop, which runs
Windows NT 4.0. I temporarily turned off its security settings, allowing access
from anywhere. Then on the mainframe side, I set the DISPLAY environment
variable to point to my PC's network address. Voila! I could run graphical
applications like xcalc on the VM Linux machine, but have them
interact with my screen, keyboard, and mouse on my PC. It was really amazing to
see xeyes running side-by-side with Microsoft Office, knowing that
every time the mouse moved it was Linux on an IBM mainframe calculating the
movement of the eyeballs!
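The whole experiment boils down to a couple of commands on the Linux guest. The address of the PC running the X server, 192.168.1.50 here, is of course made up:

```shell
# Point X clients at the display server running on the PC
export DISPLAY=192.168.1.50:0.0

# These now execute on the mainframe but draw on the PC's screen
xcalc &
xeyes &
```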
During all this time, we never once interrupted
production on the mainframe's primary systems. You see, all of this Linux
activity takes place inside a single virtual machine, one of the five thousand
or so that are running on this particular mainframe at any given time of the
day. Even as root in Linux, you are still just a single session on VM itself.
It was immensely liberating to be able to experiment freely with Linux, knowing
that no matter what we did we could always just login from the CMS environment
and rebuild the whole virtual Linux computer from our saved disk images.
Stop and think about this for a second. The VM
host operating system creates thousands of virtual machines, most of which run
the CMS operating environment for normal users. One virtual machine, however,
happens to boot Linux into its virtual hardware instead. That Linux system is
still fully multiuser and multitasking, though. So I could have dozens of
telnet sessions logged into a single VM Linux virtual system.
This gets better: nobody ever said you could
only run one VM Linux system at a time. In fact, you can run multiples
of Linux just as you run multiples of CMS. Just imagine one physical computer
with several thousand copies of Linux running on it simultaneously, and
each of these supporting multiple user connections. Fantasy? I have heard from
one system administrator, David Boyes at Dimension Enterprises, who decided to
push the envelope on this. His test system finally ran out of resources at
41,400 Linux images. That's not a typo--there were forty-one thousand
copies of Linux running on one logical partition of one mainframe, under VM.
This isn't a practical number for real work (yet) but it's still impressive as
a demonstration of just what VM can do. David joked about wanting to get some
standalone time on a bigger box--after all, he didn't have the whole machine
to himself for this little test! Remember those forty thousand raptors I
mentioned in the introduction?
Adam Thornton of Flathead Software fired up a 390 emulator
called "Hercules" (originally designed to emulate 370-series
mainframes on Intel hardware) underneath Linux underneath VM. Then he ran
another Linux boot underneath that. The hack value of this is just,
well, way cool.
Are you impressed yet? I am.
What Does It All Mean, Really?
Impressive as they are, these demonstrations are really nothing
more than lab curiosities unless they accomplish something useful in the real
world. Why would you want to run Linux on a mainframe, especially if you are a
manager or businessperson?
For starters, how about user productivity? Quite a few
companies have staff who use UNIX or Linux for CAD, databases, scientific
computing, and so on. Why train all these people to use another command shell
or menu system on the mainframe, if you can give them a UNIX-like environment
there for free? And since it's user-by-user selectable, you don't have to
force-march all the mainframe-trained staff to the Linux world, either.
What if your company bases most of its enterprise applications
on mainframes but you need some selectively-deployed Linux to meet specific
needs, such as a DNS server or firewall? Simply run it on Linux within a
logical partition or virtual machine. The VM environment is where this really
pays off, because there is literally zero cost for hardware.
In the world of Certain Other Operating Systems (which shall
remain nameless) it is fairly common to dedicate an entire PC server to a
single application or even to a single tier of a complex client/server
application. Part of this is blamed on overhead in the operating system, but
some of the problem is due to limitations of the PC architecture and its
limited I/O systems. No matter how much technology improves, I will always be
able to do more with 20 cubic meters of space than with 2. So a mainframe can
handle many times the concurrent user load of a PC server, and this is unlikely
to change. If you need Linux as a server, therefore, it makes sense to consider
mainframe hardware as a place to host it if you really need to push some
bits.
To me, though, the most interesting possibility for Linux under
VM is a combination of the second and third points above. In addition to
performance and load balancing, multitiered applications are often split across
several machines simply for functional partitioning. This makes it easier, for
example, to upgrade the SQL database back end without worrying about it
breaking the client interface. With Linux on an S/390 running VM, you could run
one virtual Linux as a front-end firewall, to protect against intruders. A
second virtual Linux would run Apache and would be the client interface.
Server-side components, such as Java Servlets, would receive XML-based content
from the back end and format it to the needs of each client. A third virtual
Linux could hold the JavaBeans or CORBA components that create the XML from
database-driven raw content, and which provide the business logic. Finally, the
native VM environment itself supplies a fine platform for a DB2 or Oracle
database to hold the data. The administration of all this would be nicely
partitioned among multiple virtual machines, but there would only be one
physical hardware environment to manage and maintain.
Running Linux under VM has some administrative advantages, too.
For instance, software distribution and backups are greatly simplified.
Resources, such as real memory and MIPS, can be portioned out on an extremely
dynamic basis, depending on the needs of each Linux guest at any given time.
Those LPARs that were mentioned previously are incredibly versatile, much more
so than separate machines would be unless you were willing to move hardware
around day after day.
Finally, here's a possible application from out on the fringes.
Suppose you are a Web-hosting provider and you want to give your clients as
much flexibility as you can without jeopardizing your own systems' security.
Instead of buying a huge farm of PCs, you buy one S/390 mainframe with lots of
RAM and the VM operating system. Now each client company gets their own virtual
Linux machine with full root privileges. They can start and stop their Web
servers, upgrade software, test new code, or whatever, without risk to your
infrastructure. In fact, since the Linux virtual disks are visible as raw data
(not as a filesystem) to the native VM environment, you can even recreate their
default root filesystems from a canned image in seconds, should a new Web
programmer really make a mess of things. A large-scale Web-hosting company
could easily cost justify the price of a mainframe in terms of administrative
costs, site upkeep (one mainframe is a good deal smaller than 1,000 PCs, even
if they are mounted in racks), and disaster recovery support. The ability to
add a new client in minutes, rather than hours or days, would be quite a
marketing advantage in today's fast-paced Web.
Remember, too, that the price of hardware is only one factor in
the total cost of operation of a system. Serious, large-scale applications will
have disaster recovery plans, onsite and/or offsite "hot" spares, and
top-notch (read: expensive) service contracts with the manufacturer. If your
company already owns a mainframe, you are already paying for these things. Why
add more PC hardware and then duplicate all this effort and expense for that,
too?
Of course, you don't have any of these options with NT or
Windows 2000, because they don't run on S/390 and probably never will. But
that's another story.
The Devil's in the Details
By now, I hope you're convinced that mainframes and Linux have
a great future together. Even if you don't have access to an IBM mainframe, you
have to admit this is one groovy hack! Now let's take a further look into how
it works and some of the things yet to be done.
The first thing you should know about the mainframe port of
Linux is that there are two of them. The original port was done by Linas
Vepstas, with help from others, and was called "Bigfoot." This port
has never really reached usable status because IBM has now announced their own
port, called Linux for S/390, which has basically stolen the show. The IBM port
came from IBM's labs in Germany, and from what I understand it started out as a
Skunk Works project there.
There's a bit of a controversy between the two camps, and some
folks are not too happy that IBM chose to work independently and in secret
rather than joining the existing porting effort. To IBM's credit, however, they
have open sourced their patches to the stock kernel and the gcc
except for the device driver for their "OSA" network adapter
hardware, whose internal circuitry is still considered a trade secret.
From a technical standpoint, the Bigfoot port is able to run on
older IBM hardware, whereas Linux for S/390 only runs on the newer generations.
Bigfoot is also a more "pure" open-source initiative, having been an
open-source project from its inception, whereas Linux for S/390 was developed
internally by IBM and then published (with source code) afterward. IBM seems to
have stated an intent that Linux for S/390 be an open-source project, but it is
my understanding that they need to execute some legal releases before it can
become part of the stock kernel. Since the IBM port is the one receiving most
or all of the active development work today, and since it is the only one I
have personally used, this article focuses primarily on that version. I respect
Linas Vepstas' work very much, though, and if anyone is interested in
volunteering to help continue that port I would certainly encourage you to do
so.
Setup is, to put it mildly, not exactly plug and play yet. Part
of the problem is that you need a mainframe geek to get the disks allocated and
the image booted, but you need a Linux geek to finish configuring it from
there. Unless you are lucky enough to have one person who knows both areas, a
team approach is probably best.
There are prebuilt binary images of a basic, bootable
filesystem and an initial RAMdisk on the Marist College web site (see the links
on the left side of this page for the URLs). The process of booting is
different depending on whether you are running under the VM operating system,
under a logical partition (LPAR) on a physical processor, or running on the
bare metal. The Marist College site has instructions for all three.
Once the basic system is running, there is a large tarball
containing the rest of the /usr directory tree and some other things
you'll need. After that, configuration is just about the same as configuring
from a bare-bones build of other distributions. The ways in which Linux for
S/390 is very standard were simply amazing to me, given how hugely different
this hardware environment is from Intel-compatible platforms. One of the
coolest features, to me at least, is that it runs natively with the ASCII
character set instead of the EBCDIC codes that are normally used on IBM
mainframes.
Installing most open-source software turns out to be relatively
easy. Just ./configure, make, and make install as
usual. There are a few programs that have trouble with the version of
gcc that is included. The assembler has trouble handling certain kinds
of link references and structures, requiring patches or turning off compiler
optimizations. Quite a lot of code, including Apache 1.3.9, seems to compile
and run just fine, however. Intel-based binary RPMs obviously won't work, nor
will commercial products (such as StarOffice and WordPerfect) that don't
include source code.
A good deal of work still remains. There are some issues about
how to map VM's virtual device addresses (VADDRs, as they are called) to Linux
major and minor node numbers. In VM, the I/O address assigned to a given
virtual device is configured in software, not in hardware. How do you make sure
that they are assigned to the same device special file each time the Linux
system is booted? Also, VM can have more of these VADDRs than Linux allows as
minor node numbers, so the mapping is not a transparent 1:1 correspondence. There was a very
interesting thread about this online, and it appears that most of Linux
can support more nodes but there was concern about breaking legacy code.
As for X11 applications, I understand that someone has GNOME up
and running on Linux for S/390. KDE probably won't be far behind. That should
give you some idea of the potential of this project.
Networking Outside--and Inside--the Box
Naturally, you'll want to have TCP/IP networking services
running right away. The standard boot images from Marist College support the
IBM "OSA" network adapter hardware, which is in common use, but for
now each Linux image running needs to have its own dedicated OSA port
(fortunately, this is a multiport board, so that's usually possible). Several
methods are being discussed as to how best to share the OSA adapter hardware
across multiple Linux images.
Connecting one Linux image to another within the system
itself is another matter entirely. Here IBM provides a low-level interface
called the Inter User Communications Vehicle (IUCV), which allows memory-mapped
messaging between virtual machines by transferring data from one address space
to another. The IUCV in Linux can be mapped to a set of PPP-like
(point-to-point) adapters called iucv0, iucv1, and so on.
These can be given IP addresses and then used like any other point-to-point
network connections. At the present time, the "other end" of this
link must be tied to a specified virtual machine, which implies that one would
probably dedicate a single virtual Linux machine with multiple IUCV links to
serve as a router for all the other virtual Linux machines, thus saving
physical network ports.
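On the designated "router" image, the setup might look roughly like the fragment below. Every address is invented, and the exact configuration syntax for the IUCV interfaces may differ by driver version, so treat this purely as a sketch of the topology:

```shell
# Point-to-point links to two other Linux guests (hypothetical addresses)
ifconfig iucv0 10.0.1.1 pointopoint 10.0.1.2 up   # link to Linux guest #2
ifconfig iucv1 10.0.2.1 pointopoint 10.0.2.2 up   # link to Linux guest #3
route add -host 10.0.1.2 dev iucv0
route add -host 10.0.2.2 dev iucv1

# Forward traffic between the IUCV links and the real OSA port
echo 1 > /proc/sys/net/ipv4/ip_forward
```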
Some have discussed the possibility of making a virtual network
adapter as a wrapper around IUCV, which would mean that all Linux instances on
a single physical mainframe would communicate over an internal "virtual
LAN" at gigabit speeds. Remember those multitiered client/server
applications, or the ISP application, that I mentioned earlier? The IUCV
virtual LAN would be an enormous benefit to such an installation.
By the way, there are some nice security implications to
running multiple Linux instances on a single VM host. Even being root on Linux
doesn't take you beyond being a normal user on the underlying host. So an
intruder who cracks one of the Linux virtual machines doesn't automatically get
access to others, or to the host operating system, unless the system
administrator was naïve enough to use the same password everywhere.
Systems on the Internet are often split across multiple machines to provide
this level of isolation, but Linux under VM on an S/390 can do it all in one
box.
Support and Contributing to Development
There is an extremely active Internet community surrounding the
Linux on S/390 ports (plural), and the Marist College e-mail list is a central
part of that community. Most of the development effort is focused on IBM's port
right now, but Linas Vepstas is a "regular" on the Marist College
list and a good deal of the discussion applies to either port. There are some
extremely smart people, and some damned fine hackers, on that e-mail
list. My degree is in Computer Engineering, and I thought I knew hardware
pretty well until I started chatting with these folks. There are some IBM
engineers from Germany and the United States as well as some third-party
mainframe administrators with long years of experience in the field. Yet I have
found these folks to be exceedingly patient with new people, myself included,
as we each try to understand whichever end of this two-worlds bridge we didn't
yet know.
Contributions to the development effort are most welcome, and
this project is still young enough to have lots of work remaining. There is an
active effort underway to get as many packages ported as possible, and to post
the working-and-tested binaries and source (patched if needed) on a web site. I
am unfortunately not mainframe-literate enough to make any real contribution to
the code itself, but writing this article to bring more awareness to the
project is my own way of trying to "give something back."
There is a fair-to-middling chance that IBM will eventually
make their Linux for S/390 port an officially supported product, but of course
I can't speak for IBM and I don't have any inside tips on that. Right now it's
strictly volunteer, but the volunteers are doing fabulously.
Conclusions
In case you hadn't noticed, I'm very excited about what has
been accomplished by the two porting teams and by the other contributors.
Getting Linux to run, and to run very well, on hardware this foreign is no mean
feat, and these folks deserve a virtual beer for one seriously cool hack. I
learn something new every day from these folks, and I am humbled to be in their
company online.
One of the lessons I've had to learn during my fifteen years in
computing is that sometimes people doing things the old way are doing so for
very good reasons. Linux is a fine mixture of the mature, elegant architecture
of UNIX and the latest ideas from the glitzy world of PCs. Mainframes and their
admins are not the fossils that some people think they are. They are adapting
at remarkable speed to the new open-standards computing model of the Internet,
and bringing to that world their time-tested approach to managing systems that
simply must not ever break down. The synergy between the Linux world and the
mainframe world promises to benefit both, if only we will each open our minds
to the other's point of view.
About the Author
Scott Courtney is a computer engineer at a large manufacturing
company and spends most of his time writing infrastructural middleware in Java
for intranet and security applications. He suffers in the Windows NT prison by
day, but at night is released to the open skies of Linux. He can be reached by
e-mail to [EMAIL PROTECTED].
Copyright © 2000 by Scott D. Courtney. All Rights
Reserved. For reprint permission, contact the author. Opinions expressed herein
are those of the author alone. They do not necessarily represent the views of
IBM Corporation, of any of the project teams mentioned herein, or of the
author's employer.
IBM, System/390, VM/ESA, and related trademarks are the
property of IBM Corporation. Other trademarks are the property of their
respective owners, and are hereby acknowledged as such.
Copyright © 1999 internet.com Corp.
All Rights Reserved.