On Monday, March 06, 2006 09:17:23 AM -0600 "Christopher D. Clausen" <[EMAIL PROTECTED]> wrote:

>> It's kind of hard to explain, but Windows people and Unix people think
>> differently about what they are serving out.  AFS grew up in a world
>> where Unix admins wanted to distribute executable "applications" as
>> well as data from a single name space tree.  In the enterprise, Unix
>> workstations actually run their applications off of the network.

> Warning: I am a Windows person.
>
> I'm not at an enterprise.  I'm at a University.

So am I. A University _is_ an enterprise, and a challenging one at that, because of the highly heterogeneous community. At Carnegie Mellon, we use an enterprise-grade distributed filesystem (AFS), and we use it in exactly the way Jeff described. Here's why:

Back in the 1980's (long before I got to CMU), a few people had a grand vision. They imagined a world in which every student and staff member had his or her own small computer. All of these machines would be similar hardware and run the same software, maintained and distributed by a central support group, which would also manage the machines so that individual users wouldn't have to know how. They'd have a friendly graphical interface, and the ability to run more than one application at the same time. All the machines would be tied together into a central system, so that you could sit down in front of any machine, log in, and have access to all of your data -- just like traditional timesharing systems, except that you'd have the processor all to yourself, instead of sharing it with lots of other people. In addition, there would be locations around campus with groups of workstations, which would provide easy access to computing facilities for those who didn't yet have computers of their own, or were not near their own machines (think students between classes).

Sound familiar? It should; these are basic characteristics common to nearly every enterprise (corporate, university, etc) computing environment today, to one degree or another. The difference is that in the 1980's, the basic components of such a system simply didn't exist. So those visionary folks (and, independently, similar groups at several other universities) pitched their idea, got support and funding from various sources, and set out to build a distributed computing environment.

In those days, disks were small and expensive and computers were relatively slow, so there was a limit to how much data could be stored on a single server. Supporting the data-storage needs of an entire university would require a system that supported multiple servers relatively easily. Also, affordable workstations had very little disk -- often not enough even for the complete operating system, let alone the complex software components and applications which would be part of the new system. So, it had to be possible to run applications and even most of the operating system from disks attached to a server somewhere else. To meet these needs, they built a distributed filesystem (Vice), which is a direct ancestor of OpenAFS.

The original plan called for central fileservers to store user data and master copies of software, and read-only replica servers in each cluster (lab), to provide operating system and applications software for machines in that cluster. You may think that 100Mbps networks are slow, but trust me, they're blindingly fast compared to 4Mbps token ring. By the time I got here in 1990 or so, 10Mbps coaxial ethernet was common, and the server-per-cluster idea had been basically abandoned in favor of multiple centrally-located replicas.
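For anyone unfamiliar with how that replica scheme looks in practice, here is a minimal sketch using the modern OpenAFS `vos` administration commands. The server names, partition, and volume name are hypothetical; the commands themselves (`vos create`, `vos addsite`, `vos release`) are the standard OpenAFS volume operations for creating a read/write master and pushing read-only replicas to other servers.

```shell
# Create the read/write master volume on a central fileserver
# (server, partition, and volume names below are illustrative).
vos create fs1.example.edu /vicepa sw.gcc

# Register read-only replica sites on other servers --
# in the original design, one per cluster; later, several
# centrally-located servers instead.
vos addsite fs2.example.edu /vicepa sw.gcc
vos addsite fs3.example.edu /vicepa sw.gcc

# Push the current contents of the read/write volume out
# to all registered read-only sites.
vos release sw.gcc
```

Clients then read from whichever read-only site is closest or least loaded, and updates only become visible when an administrator runs `vos release` -- which is exactly what made it safe to serve the OS and applications themselves this way.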


The point is this: in those days, disk was scarce enough that you _had_ to run not only applications but the OS itself from a central location. The folks working on the Andrew project weren't the only ones to reach this conclusion; several other institutions (MIT and the University of Michigan both come to mind) developed their own filesystems and used them in exactly the same way. Today, we still run software out of the filesystem, as do many other institutions, though most have given up their own filesystems in favor of AFS (just as we gave up on the Andrew windowing system in favor of X).


-- Jeffrey T. Hutzelman (N3NHS) <[EMAIL PROTECTED]>
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA

_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info