I do not currently have the time (or the inclination) to address in sufficient detail the various points raised in the exchange/commentary below, but I am adding some comments.

The UC Berkeley CSRG BSD releases largely evolved on DEC hardware platforms such as the PDP-11 (with segmented overlay memory) and the VAX 11/780 (the first "large deployment" production "non-mainframe" with demand-paged virtual memory), with -- by current standards -- very slow CPUs, limited RAM, low-bandwidth buses (such as UNIBUS), and "Ethernet" NICs such as the DEUNA. All of this hardware was far less capable than a current x86-64 home "PC", or even some ARM-based "tablets" or "smart phones". BSD did promulgate "sockets" and other innovations, but it was somewhat constrained by its early AT&T Unix heritage and by the requirement for an AT&T Unix source license. Had the legal games over "unix" not ensued, and had there been some entity similar to CSRG and the (largely "professional") community that supported and used BSD, some of what you wrote might not have happened. However, the policy was to build each installation of BSD from source, which was much more tedious than installing "executable" packages from commercial vendors -- the same methodology later adopted by the Linux community (communities?). Was the BSD C compiler system as good as the FSF GNU C compiler ultimately became? Again, a matter for discussion -- but ultimately, for a number of reasons including open source, the GNU compilers have become a widely used implementation. It also took too long for a BSD variant to be ported to the first widely deployed IA-32 machine with demand-paged virtual memory, the 80386 with 80387 FPU; if memory serves, this was after AT&T released a Unix for that platform.

We could discuss the file system debates at some length. Would either the BSD file systems or ext3, etc., scale to distributed WAN "file systems"? Would either be "reliable" at such scales? I daresay no.

As for ultimate performance (say in minimizing actual CPU clock cycles, memory accesses, etc., per "program execution"), a monolith typically will outperform a microkernel design, just as a traditional unstructured FORTRAN program (or in some cases, an assembly program) will outperform an OO-design C++ program with encapsulation, etc. Which program is longer-term maintainable? Which can be built by a large and dispersed team? These are issues of practical software engineering, and again, a subject of another discussion.

On 12/17/20 5:59 PM, Konstantin Olchanski wrote:
Rumination time, I jump in.

Why did monolithic-kernel Linux, based primarily upon Tanenbaum's Minix, a non-production OS used as an implemented example for teaching operating systems at the undergraduate level, achieve sector dominance over the micro-kernel BSD derivatives? ...

Linux killed everybody with superior performance: against every competitor, both
microbenchmarks and real-use performance of the Linux kernel were/are
measurably better.

[what happened to BSD & derivatives?]
... it boils down to a great deal of uncertainty around BSDI,
UCB's CSRG, Bill Jolitz, and 386BSD, all of which descended from
the Unix codebase ...

The USL-BSD lawsuit came just at the right moment to cut the BSD
movement down at the knees. By the time the dust settled and
BSDs were running on PC hardware, Linux was already established.
Correlate the timeline of the lawsuit with the Linux and BSD timelines:
https://en.wikipedia.org/wiki/UNIX_System_Laboratories,_Inc._v._Berkeley_Software_Design,_Inc.

Linux is greenfield

I tend to think that was the key. Linux always had the advantage over BSD
in three areas (if you studied, programmed and used both, you already know 
this):

- better TCP/IP stack in Linux
- better virtual memory system in Linux
- better filesystems in Linux

In all three, Linux had the "green field" advantage, plus the incentive
to beat competitors (at the time, BSD UNIX, SGI/IBM/DEC/SUN Unix derivatives).

In the TCP/IP stack, Linux people implemented zero-copy transfers and support
for hardware-acceleration pretty much right away.
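
(As a rough illustration of what "zero-copy" means here -- my own sketch, not
taken from the thread -- a Linux server can push a file to a socket with
sendfile(2), so the data moves from the page cache to the NIC without ever
being copied into a user-space buffer. The helper name below is made up for
the example, and error handling is trimmed.)

/* Minimal zero-copy file-to-socket transfer sketch using Linux sendfile(2). */
#include <sys/types.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Send an already-open file over an already-connected socket without
 * copying the data through user space. */
int send_file_zero_copy(int sock_fd, const char *path)
{
    int file_fd = open(path, O_RDONLY);
    if (file_fd < 0)
        return -1;

    struct stat st;
    if (fstat(file_fd, &st) < 0) {
        close(file_fd);
        return -1;
    }

    off_t offset = 0;
    while (offset < st.st_size) {
        /* The kernel moves pages from the page cache directly to the
         * socket; there is no read()/write() copy into a user buffer. */
        ssize_t sent = sendfile(sock_fd, file_fd, &offset,
                                st.st_size - offset);
        if (sent <= 0)
            break;
    }

    close(file_fd);
    return (offset == st.st_size) ? 0 : -1;
}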

In the VM system they figured out just the right balance between
application memory, kernel memory and filesystem caches, compared to BSD
"active/inactive" (and nothing else).

In filesystems, Linux was the first to solve the problem of "no corruption,
no need for fsck" after an unexpected system reboot (i.e. on crash or power
loss), with ext3/ext4. (OK, maybe SGI was there first for the rich people,
with XFS, but look what happened: XFS is now a mainstream Linux filesystem.)


Although, re-think your statement; Darwin with the macOS skin on it
has a great deal more marketshare than Linux.  In many ways the
BSD-system-layered-on-a-microkernelish core did win; just not the
hearts of developers.


I would say, MacOS "won" not because of, but despite, its BSD foundations.

If you look behind the curtain (heck, if you look *at* the curtain), you will
see a BSD-ish kernel firmly stuck in the 1990s. No semtimedop() syscall,
incomplete pthreads (no recursive locks), no /dev/shm (no command line tool
to see and control POSIX shared memory). The only visible kernel-level
innovations are the "never corrupts" filesystem (mostly due to "never crashes"
hardware, I suspect) and improved VM (encrypted *and* in-memory compressed,
impressive!).
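
(For context on the /dev/shm point -- a sketch of my own, not from the thread:
POSIX shared memory is created with shm_open() on both systems, but only Linux
exposes the objects as files under /dev/shm that you can list or remove from
the shell. The object name "/demo_region" is made up for the example.)

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/demo_region";
    const size_t size = 4096;

    /* Create (or open) the named POSIX shared-memory object. */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return 1;

    if (ftruncate(fd, size) < 0)
        return 1;

    /* Map it and write something another process could read. */
    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;
    strcpy(p, "hello from shared memory");

    munmap(p, size);
    close(fd);
    /* On Linux this removes /dev/shm/demo_region; macOS has no such file. */
    shm_unlink(name);
    return 0;
}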

Anyhow, today MacOS wins at ping-pong while the game is hockey; if Apple still
built hardware for serious computing, the MacOS BSD "win" would count for sure.

