At 11:45 AM 3/4/2006, Christopher D. Clausen wrote:
Actually, that is pretty much all it is. It's a namespace for CIFS shares. The replicas work with the File Replication Service (FRS) and can operate even without a domain controller, although most sites will want the data hosted in Active Directory. The file replication can be more advanced than one-to-many replication. Active Directory has a concept of "sites," and this can be used to reduce data transmission time over WAN links between branch offices. Group Policy can be used to point different sets of machines at different servers by default, failing over to other sites when needed.

And for these reasons you can forget a common namespace, ACLs, groups, and such unless you have a single AD where all your shares come from AD domain member servers and you are a member of the domain. And since it is CIFS-based, there's no concept of a local cache to reduce network traffic. As well, the Dfs root node must be authenticated to, meaning clients at home (or remote on the Internet) must be members of the AD domain that the root node belongs to. And by the way, who in their right mind would share out CIFS over the Internet these days? A strict comparison of AFS to Dfs would look something like...

AFS:  Yes, Yes, Yes, Yes, Yes, Yes
Dfs:  No, No, Sometimes, Maybe, Unsure, Not documented, etc. etc...

About the only thing Dfs has going for it is the automatic replication facility, which most people in the AFS world find rather odd. You see, in the AFS world, most of the tree is in the form of read-only replicated sites. Only the user volumes and a few others are read-write. And when a read-write volume goes down, you really don't want to "fail over". Protection for read-write volumes holding critical data should come in the form of RAID, not fail-over volumes.

It's kind of hard to explain, but Windows people and Unix people think differently about what they are serving out. AFS grew up in a world where Unix admins wanted to distribute executable "applications" as well as data from a single namespace tree. In the enterprise, Unix workstations actually run their applications off of the network. Very few people using AFS install applications on their local workstations. This turns application distribution to thousands of workstations into a single vos release command. Try that with Windows! Windows application installation and distribution is a royal pain in the ass for admins and is why Windows has such a high cost of ownership for the enterprise. Ever try running applications off your Windows server Dfs tree? For that matter, how many Windows applications have you come across that are actually made to install on the network?

AFS has the concept of the "dotted" cell path, which gets you to the read-write tree of volumes. At our site we release an application and test it using the "dotted" path before any of our real workstations see it in the read-only tree. When it is tested and ready, we issue a vos release command, which updates all the read-only sites and makes the application available to all the workstations. Through blood, sweat, and tears, we've rigged our Windows XP environment to work much like our Unix environment, and it has saved us a hell of a lot of admin time.
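For anyone who hasn't seen it, the dotted-path workflow above can be sketched as a short command sequence. This is only an illustration: the cell name example.edu, the apps.newapp volume, and the paths are hypothetical, and the commands assume you hold AFS admin tokens in that cell.

```shell
# Copy the new release into the read-write view of the tree.
# /afs/.example.edu/... is the "dotted" (read-write) path;
# /afs/example.edu/... is the read-only path that workstations use.
cp -r newapp-1.2/. /afs/.example.edu/apps/newapp/

# Test through the dotted path before any client can see the change.
/afs/.example.edu/apps/newapp/bin/newapp --version

# When satisfied, push the read-write contents to every read-only replica site.
vos release apps.newapp

# Workstations reading /afs/example.edu/apps/newapp now see the new release.
```

One vos release, and every replica site in the cell is updated; until then, the read-only tree keeps serving the old, known-good release.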

<on soapbox>My conspiracy theory is that in the Microsoft Windows world of the 90's, there was virtually no incentive for Microsoft to "help" vendors, or to establish rigorous rules, methods, and policies for how applications could run off the network. Applications run faster off of the local disk, and Microsoft had no intention of helping vendors like Novell with applications that could run off their network. For sure, Microsoft didn't even want their Office suite of applications to "appear" to run slowly compared to the Unix world, where apps typically were executed off of NFS shares. It certainly was a big boon for hard drive makers, since users kept needing bigger and bigger drives to store their applications on. Remember how slow the network used to be?<off soapbox>

Windows admins, on the other hand, tend to put DATA, not applications, on the network. Most Dfs trees lead only to user volumes and some shared database files, or common Office shares. The auto-replication facilities of Dfs work fine in this circumstance.

We've set up a Dfs tree alongside our AFS tree, but haven't used it much at all. Our primary intention was to have something to fall back on if the OpenAFS Windows client turned out to be unsupported when Transarc was broken up. Companies like Network Appliance sell NAS gear that can be used in Dfs environments too. Now, though, the OpenAFS Windows client has great support through developers like Jeffrey Altman of Secure-Endpoints.

Frankly, Microsoft's support of Dfs so far has been abysmal. They really don't promote it at all. They see it as just something that some enterprises might be interested in doing, and only those enterprises that can afford enterprise-level support contracts get any real help. In effect, Microsoft is using the enterprise as a test bed for technologies like Dfs. Only when they actually get enough documentation and features into it will they start to promote it to the world. I mean, three years ago you were lucky to find anything beyond some obscure white papers on Dfs. Dfs does work, but in the real world, when most Windows IT groups are interested in setting up big trees of data, they end up turning to NAS or SAN instead.

Like you said, Dfs is primarily an all-Windows technology anyway. If you need anything heterogeneous, then you must turn to AFS.

Rodney

Rodney M. Dyer
Windows Systems Programmer
Mosaic Computing Group
William States Lee College of Engineering
University of North Carolina at Charlotte
Email: [EMAIL PROTECTED]
Web: http://www.coe.uncc.edu/~rmdyer
Phone: (704)687-3518
Help Desk Line: (704)687-3150
FAX: (704)687-2352
Office:  Cameron Applied Research Center, Room 232

_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info