As a sidelight to this topic, I remember an article I read in the late
'80s or early '90s where someone wrote some 'randomly poke storage'
programs and then started them running under different platforms.  As I
remember it, some mainframe environment (I forget which), Win NT
3.??, OS/2, Win 3.1, Apple, SCO (Sun?) Unix and some others were
included.  It's been too long for me to remember the numbers, but I do
remember being surprised that NT had fatal crashes around five orders of
magnitude less often than OS/2, Win 3.1 or Apple.  (OS/2, Win 3.1 and
Apple were about equal; I think OS/2 was a touch better.)

Having used NT Workstation since version 2, I experienced about the same
reliability as with the SCO Unix workstation of that same era.  Then,
when NT 4.0 put the Win 95 front end on NT, the SCO workstation became
far more reliable by comparison.

Does anyone know of anyone doing this sort of research now?  Is anyone
running this or other crash tests like it on Linux (on or off the MVS
environment)?

The code is simple to write: just generate two random numbers, treat one
as an address and one as data, and write.  The hard part is doing it on
a bunch of platforms, under a bunch of conditions, and collecting the
numbers.
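
For illustration, here is a minimal sketch of such a poker in C.  The
address-mixing, the logging, and the iteration counter are my own
assumptions, not from the article.  On any OS with protected memory the
stray write should normally just kill this process; the interesting
number is how often it takes the whole system down instead:

/* randpoke.c - write a random byte to a random address, forever.
   Hypothetical sketch; the original article's code is long gone. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    srand((unsigned)time(NULL));

    for (unsigned long i = 0; ; i++) {
        /* Mix two rand() calls into a pseudo-random address. */
        uintptr_t addr = ((uintptr_t)rand() << 16) ^ (uintptr_t)rand();
        unsigned char data = (unsigned char)rand();

        /* Log before poking, since the poke itself will usually fault. */
        fprintf(stderr, "poke %lu: %p <- 0x%02x\n", i, (void *)addr, data);
        fflush(stderr);

        *(volatile unsigned char *)addr = data;
    }
}

A real harness would restart the program each time it dies, record how
many pokes it survived, and note whether the damage stayed inside the
process or spilled into the rest of the system.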

-Dale


At 02:10 PM 2003-07-29, you wrote:
At one time I did a lot of work with Unix, and I never had any problems with
multiple processes corrupting the memory of other processes.  Have there
been some bugs introduced into Unix recently?  I have not been working with
Unix for a couple of years, unless you count z/OS USS.

On the other hand, I have done some work with Windows over the years, and I
would never try to put multiple applications on a Windows box.  It is hard
enough to keep one application running on one Windows box.

-----Original Message-----
From: Tom Duerbusch [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 29, 2003 10:17 AM
To: [EMAIL PROTECTED]
Subject: Re: Whither consolidation and what then?


My take on multiple images is twofold.


But first, the disclaimer:
This assumes you have sufficient resources (normally real memory) to do
this in the first place.

1.  I don't know this to be true with Linux, but the Unix types have
always been leery of having multiple applications running on the same
box.  First, they say that they can't guarantee performance; then they
start talking about one application corrupting the memory of another.
So, one application per box if you want reliability.  I haven't had the
experience of memory problems in Linux yet, so I still tend to believe
this.

2.  Once an application is running, and running well, it should
continue to run correctly until something external happens, like putting
on maintenance.  So why put on maintenance, other than security
patches?  A new application may need a different gcc library or such.
The original application, if not fully tested with the new changes, may
fail in production.

At least VM makes it a whole lot easier to define, maintain and control
multiple machines.

Tom Duerbusch
THD Consulting

>>> [EMAIL PROTECTED] 07/29/03 11:33AM >>>
Philosophical question?

The heart of the matter is why we need so many images in the first
place.  If I need a half dozen Linux images to service the Web, but
those Linux images can all be running under VM, what is different
between Linux and VM that lets VM handle the concurrent workload better
than Linux can?

It is a variation of the old argument as to which is better: VM and
several VSE guests, or one MVS instance.

Dale Strickler
Cole Software, LLC
Voice: 540-456-8896
Fax: 540-456-6658
Web: http://www.colesoft.com/
