> [snip]
>
>well, actually a few of us were talking about OS2 at the (aforementioned
>and now snipped) social event -- a consultant from Ernst and Young said
>that OS2 has a higher % of "mission critical" applications than any other
>OS (or something like that).
i've heard much the same from a couple of database-product vendors
recently.. their OS of choice is OS/2, and they don't really like to develop
for Windows because it's neither as fast nor as stable. one has just started
to migrate to NT, mostly due to market demand, but also because of code
support. whatever else you might say about Microsoft, they have an
excellent suite of programming and debugging tools.
>So, since Mike had explained how the Mac OS determines which app "goes now"
>and which "goes next" .... as well as the DOS model ... I thought a little
>explanation of OS2 was warranted.
darned if i know.. i'm mostly a Mac/unix geek, and have only learned to
speak Windows so i can talk with my friends in Seattle. ;-)
half the guys i knew in college were bought out of their previous jobs
about 3-4 years ago.. about the point where Microsoft realized that their
hold on the desktop market didn't give them enough leverage to replace
TCP/IP with their own, proprietary, networking software.
apparently, that was an interesting time for MS.. their projected
development path was pretty much dead in the water, they needed programmers
with experience in the existing standards, and their own staff didn't have
it. the lead-time necessary to train their own people to the existing
standards would be 'way too long for them to remain competitive in the
market, so they went on a massive hiring spree that left universities all
across the country looking for new sysads and put a serious crimp in my
recreational quaffing schedule.
since i've never had to deal with OS/2 myself, or speak intelligently with
anyone who did tinker with its guts, i can only make a few vague guesses
based on what i've seen IBM do in the past.
historically, IBM has treated desktop computers as tiny, very stupid,
mainframes. given the fact that they were the leading source of
mainframes when the PC revolution took place, that makes sense. their
first PC design, which was admittedly a rush-job, was basically their
smallest minicomputer with most of its insides hacked out and a bit of
custom hardware holding what was left together. the firmware protocols,
which synchronize the way various components talk to each other, were
largely the same.. the original PC had a low-level synch signal intended to
keep multiple processors in step with each other. completely useless on a
single-processor machine, but all the other protocols expected it, so they
had to keep it.
granted, that was about 20 years ago, and Big Blue's corporate mindset has
changed quite a bit since then.. and they still have a large staff of some
of the best computer scientists in the world. their big mainframes run
MVS/OS390 these days, and AIX -- their unix-like OS -- covers the smaller
iron. either way, mainframes are inherently multi-everything machines, so
i'd expect OS/2 to be a single-user relative of that tradition.
it probably uses time-sliced preemptive multitasking, because that's just
the best general solution to running multiple programs simultaneously.
(Apple's success with cooperative multitasking is really a fluke, based on
their history of nearly fascist control over development technologies.
they've kept the barn door closed and the horse inside. it's given them
the power to do things other platforms can't, even though it's cost them
points in the market.) OS/2 is also, most likely, a single-user OS, so it
probably doesn't worry about segregating areas of memory from each other,
beyond what's necessary to keep programs from stepping on each others'
toes.
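the cooperative model is easy to sketch. here's a toy round-robin scheduler
in Python (the task names and step counts are invented for illustration):
generators stand in for programs, and `yield` plays the role of the polite
"hand the CPU back now" call.

```python
# toy cooperative scheduler: each "program" is a generator that runs until
# it voluntarily yields control. names and step counts are made up.
log = []

def task(name, steps):
    for i in range(steps):
        log.append(f"{name} {i}")    # pretend to do a slice of work
        yield                        # politely hand the CPU back

def run_cooperative(tasks):
    ready = list(tasks)
    while ready:
        for t in list(ready):        # simple round-robin over live tasks
            try:
                next(t)              # run this task until its next yield
            except StopIteration:
                ready.remove(t)      # task finished; drop it from the queue

run_cooperative([task("editor", 2), task("printer", 3)])
print(log)
```

note that a task which never yields would starve everyone else forever..
which is exactly why cooperative multitasking only works when someone keeps
the barn door closed.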
i don't know if it's capable of running multiple instances of the same
program or not. single-user OSes usually don't bother with that, because
it can be confusing to the end-user.. launching a separate instance of the
word processor every time you open a document, for instance. OTOH, since
OS/2 needs to be able to run software designed for DOS/Windows, it probably
has some sort of wrapper program. it would be convenient to have multiple
instances of that, so i'm guessing there's a limited, low-level version of
multiprogramming available for those who know what they're doing.
and now, i will remove my blindfold and accept confirmation or refutation
from anyone who actually knows what they're talking about. ;-)
>And UNIX <g>
time-sliced multitasking, multiuser, multiprocessing, multiprogramming.
it's an OS designed for machines that sit in a closet somewhere and have
umpty-odd users logged in from remote terminals at any given moment. it
was, itself, a brutally stripped-down descendant of a large and almost
literally bulletproof OS called Multics, developed jointly by MIT, GE, and
Bell Labs (the military later became one of its biggest customers).
Multics was really astonishing in its goals, for its day.
multi-everything and most of it running in parallel.. very few assumptions
about hardware.. hot-swappable everything: drives, processors, whatever..
secure processing, you name it.
Multics never took off the way its designers hoped, but it did give one of
its developers, Ken Thompson, a basic idea to follow when he found an old
PDP-7 gathering dust and needed an OS to run a game on it. the game was
'Space Travel', which he'd written himself. the word 'unix' was,
"admittedly, a weak pun on Multics", and the rest is history. unix spread
like a brushfire because, like Multics, it made very few assumptions about
hardware, and its source code was written in C, which was also designed to
be machine-independent.
in some ways, the combination of C and unix acted like a virus.. a C
compiler can generate binaries (executable files) which will run on a
completely different machine than the one that ran the compiler itself. i
can compile a Win95 executable on my Mac, and vice-versa. it's called
cross-compiling. with a cross-compiling C compiler, you can
create a unix kernel for another machine, which can then run another C
compiler.. it's called 'bootstrapping' a unix system.
for the sake of argument, say i have a Mac with a C compiler, source code
for both unix and the C compiler itself, and an x86 box with a blank hard
drive. i can compile a very small x86-native unix kernel on my Mac, then
write it to a floppy. then i put that floppy in the blank box's slot and
use it to boot the machine, giving it a minimal unix OS. then i build an
x86-native C compiler on my Mac, and copy it over the same way. now i
have a minimal unix box with a compiler. i copy both sets of source code
to the unix box, and triple-compile a new C compiler.
the idea behind triple-compiling is to shake out any bugs which may have
been introduced in a less-than-perfect build by the Mac. even a mediocre
compiler can build a better version of itself if it's creating binaries
for its native environment. if i do that, the second version will
probably be correct, but the only way to be sure is to test it by having it
compile yet another one. v1 compiles v2, and v2 compiles v3, all from the
same source.
assuming everything worked according to the manual, version 2 and version 3
should be identical.. which means they're both correctly optimized for the
current system. then you can throw away v1 and v2 and use v3 as your new,
certified compiler.
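that final "v2 and v3 should be identical" test is literally a byte-for-byte
comparison of the two binaries. a minimal Python sketch of the check, with
stand-in byte strings where the real stage-2 and stage-3 compiler files
would be:

```python
import hashlib

# stand-ins for the stage-2 and stage-3 compiler binaries; in a real
# bootstrap these would be files on disk, read in binary mode
stage2 = b"\x7fELF pretend compiler binary"
stage3 = b"\x7fELF pretend compiler binary"

def digest(blob):
    # a cryptographic hash makes "are these identical?" cheap to check
    return hashlib.sha256(blob).hexdigest()

# the classic bootstrap sanity check: v2 and v3 came from the same source,
# so if the compiler is self-consistent they should match byte for byte
verdict = ("fixed point reached: certify v3"
           if digest(stage2) == digest(stage3)
           else "stage2 and stage3 differ: the bootstrap is broken")
print(verdict)
```

reaching that fixed point is what lets you throw away the cross-compiled
v1 with a clear conscience.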
using that, you go back to the unix source code and build a new kernel,
which will also be correct for the machine it runs on. you now have a
fully functional unix box with native-compiled kernel and C compiler. all
the utility programs which make up the unix environment are distributed as
source code, so it's only a matter of time until you've pulled a
fully-loaded unix system more or less out of a hat.
>BTW -- it's my understanding that NeXt borrowed from the UNIX architecture?
i think so, yes. as i understand the mythology, when Jobs started the
project to design the NeXT OS, he hired Avie Tevanian, who'd helped build
the Mach kernel at Carnegie Mellon, and the result was Mach with a BSD
unix layer on top. in effect the pitch was, "okay, unix is a great OS,
but it's been growing and changing ever since it was invented. if you
could start all over again, knowing what you know now, what would you do?"
and then they did it.
one of the innovations in the NeXT OS is another 'multi-' which i didn't
discuss last time, called 'multithreading'. it's extremely cool from a
geekish perspective, but takes a bit of background to understand.
fortunately, you got most of that background while i was talking about
multitasking.
as i mentioned earlier, there's a difference between a 'program' and a
'process'. the program is the set of instructions, and the process is a
program with all the additional system resources it needs to actually run.
it's more or less the same as the difference between a script and a movie.
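you can watch that distinction directly: hand the OS the same "script" twice
and it stages two separate "movies", each with its own process ID and
address space. a small Python sketch, using the interpreter itself as the
program:

```python
import subprocess
import sys

# the "program": one line of code that reports which process is running it
code = "import os; print(os.getpid())"

# ask the OS to turn that same program into two separate processes
pid_a = subprocess.run([sys.executable, "-c", code],
                       capture_output=True, text=True).stdout.strip()
pid_b = subprocess.run([sys.executable, "-c", code],
                       capture_output=True, text=True).stdout.strip()

# identical instructions, but each run got its own pid, memory, and file
# handles -- that bundle of resources is what makes each one a process
print("same program, two processes:", pid_a, pid_b)
```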
the operating system is basically a process that runs in the background,
and its primary responsibility is to keep track of all the resources that
turn all the other programs into processes. therefore, a process is the
atomic unit of code as far as the OS is concerned. it's similar to the
way the file is the OS's atomic unit of data storage.. there can be finer
levels of structure in both files and processes, but they're not the OS's
problem.
at least, they weren't until threads came along.
in broad terms, a thread is a piece of code that does a single, specific
thing. a program like a word-processor is complex, handling screen
display, data storage, printing, spell-checking, and what have you. each
specific operation, though, can be seen as a small sub-program which
provides one item in the overall program's feature set. none of those
sub-programs are big enough to survive as processes in their own right..
they need data which is maintained by the word-processor itself.. but it
would be nice to give them a limited degree of independence. that way, i
can spell-check one document in the background as i write another, and
print a third at the same time.
if i want to write a program that offers that kind of flexibility, i
basically have to write my own mini-OS and process scheduler to keep track
of the pieces. that's both tedious and prone to explosions, so it's
easier (in the geek sense of the word) to make 'handling multitasking' one
of the services the OS provides for a process.
the specific details are fairly awful, but the gist of it is that the OS
can look far enough inside a process to see if it has any threads that need
to run concurrently. if so, it switches those threads in and out of
context during the parent process's time slice. each thread has its own
mini-context, but also has access to the context which belongs to the
process as a whole. it's a nice half-step between the flexibility of
fully-independent subprocesses and fully-dependent subroutines.
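the spell-check-in-the-background idea maps almost word-for-word onto thread
code. a minimal Python sketch -- the dictionary and document here are
invented for the example -- where the worker thread shares the main thread's
memory instead of needing a whole process of its own:

```python
import threading

# an invented mini-dictionary and "document" for the example
DICTIONARY = {"the", "quick", "brown", "fox"}
document = ["the", "quikc", "brown", "fox"]
typos = []

def check_spelling(words, out):
    # this runs in a separate thread but shares the process's memory, so
    # it can read `document` and fill `typos` with no pipes or copying
    for w in words:
        if w not in DICTIONARY:
            out.append(w)

worker = threading.Thread(target=check_spelling, args=(document, typos))
worker.start()                    # the spell-check runs in the background...
print("main thread keeps accepting keystrokes")   # ...while we keep working
worker.join()                     # wait for the background pass to finish
print("typos found:", typos)
```

the thread gets its own mini-context (a stack and a program counter) but
borrows everything else from the parent process, which is the whole point.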
these days, all the standard operating systems are fully threaded, or at
least offer thread support as an option. the real question is how much of
the software for each OS uses threads, and i don't have the information to
make that kind of a comparison.
mike stone <[EMAIL PROTECTED]>