On Wed, 21 Mar 2001, Paul Lussier wrote:
>> SMB has higher overhead and an incredibly crufty design. But in some cases,
>> SMB can actually be quite a bit faster than NFS. For example, with
>> oplocks, the client can cache file accesses locally, greatly reducing
>> network traffic.
>
> I don't know how much it reduces traffic, but I tend to doubt that it does
> so enough to matter on a large network.
It depends mostly on whether other clients are also accessing those files in
modes that preclude oplocks. If not, it can drastically improve performance.
Disable disk caching sometime and watch your system slow to a crawl. Same
thing with oplocks.
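FWIW, if you want to test that claim yourself, Samba lets you toggle oplocks per
share in smb.conf (share name and path here are just examples):

    [data]
        path = /export/data
        # Disable opportunistic locks for this share, forcing clients
        # to go to the server for every access:
        oplocks = no
        level2 oplocks = no

Flip them back on and compare; on a mostly-read-only share the difference can be
dramatic.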
>>> Additionally, SMB is non-routable, whereas NFS just rides on top of IP.
>>
>> This is incorrect. SMB, as implemented in Linux, uses NetBIOS-over-IP,
>> which is as routable as anything-else-over-IP. NetBEUI is not routable,
>> but until very recently, NetBEUI wasn't even possible on Linux.
>
> I don't believe this is accurate ...
From the very first paragraph of RFC-1002 (Protocol Standard for a NetBIOS
Service on a TCP/UDP Transport): "Both local network and internet operation
are supported."
> I wasn't saying that WINS invalidates SMB. I was saying that some things to
> do with Windows Networking can be routed, i.e. WINS, if placed on top of IP.
Um, when exactly would the NetBIOS Name Service for IP (WINS) not be placed
on top of IP? :-)
> Though if your name service is not working correctly, I contend that your
> access to remote systems and the file systems contained therein will be made
> incredibly difficult; the more so if you have users who don't know an IP
> address from a mailing address :)
This is very true. However, it is true for pretty much any network protocol
scheme you think up, including IP+NFS+DNS.
Now, name service is more-or-less essential to NetBIOS, so if your name
service is down, you can consider the network down. This does make SMB less
robust than NFS for this case. On the other hand, I'm not really convinced
your average Unix network is going to continue functioning if DNS (or NIS or
whatever you're using) goes out -- most things rely pretty heavily on name
service in practice.
>> SMB may be ugly, but on Linux, in the past, it has actually been better
>> supported in the kernel than NFS!
>
> Exactly when was that?
Ummmm... I can't remember exactly... and searching around on Google finds
the same damn LKML message in 42 million different mail archives... *sigh* It
was in the early 2.2.x days, IIRC. Early last year, maybe? I dunno. It
happened. But, as you say...
> And does it matter, here and now?
Point.
>> Keep in mind that NFS uses *blind host trust relationships*. A hashed
>> password might not be a very secure authentication method, but it's a damn
>> sight better than simply assuming everything from a given IP is
>> trusted! :-)
>
> However, I contend that if you're going to base your network security on the
> Windows encryption scheme, you've made life easy enough for an attacker that
> there's little difference between that level of security and blind
> host-based security.
Maybe so. But calling NFS "more secure" than SMB is never going to be right.
The fact that SMB authenticates per-host makes a difference, too -- with NFS,
you pretty much end up granting access to all user files to anyone with a
packet generator.
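To see what I mean, look at a typical /etc/exports entry -- the *only*
credential is the client's address (hostname here is made up):

    # Any process on a host that can claim this address gets access.
    # With AUTH_SYS, the client asserts whatever UID/GID it likes.
    /home    trusted.example.com(rw,root_squash)

root_squash helps a little with UID 0, but any non-root UID can simply be
asserted from the client side.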
>> I consider this a feature, not a bug. Unix file I/O is a stateful thing.
>> Trying to make it stateless is, IMNSHO, an error. :-)
>
> I disagree. At least with NFS, if a server or file system goes away,
> provided you have the environment set up with soft,intr mounts ...
Using soft NFS mounts pretty much defeats the strength of NFS's
statelessness. The whole idea was a server could go away for half a year,
then come back online, and the client would just pick up where it left off.
Unfortunately, in practice, this seems to cause everything *else* to get wedged
in the meantime. Soft mounts fix that by causing operations to time out and
return an I/O error to the user process. Personally, I think that makes
sense, but now you're doing what SMB does -- retrying in the short term,
failing the syscall if it doesn't come back quickly, and reconnecting things
when the server is back.
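For reference, a soft,intr mount in /etc/fstab looks something like this
(server name and paths are hypothetical):

    # soft = fail the syscall with an I/O error after "retrans" retries
    # intr = allow signals to interrupt a hung RPC
    fileserver:/export/home  /home  nfs  soft,intr,timeo=20,retrans=3  0  0

timeo is in tenths of a second, so this gives up after a few retries instead of
hanging forever -- which is exactly the SMB-style behavior I described above.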
> Try taking a file server out from under a Windows client and see how long
> it takes the user to realize they need to reboot!
Modern versions of Windows are actually pretty good about reconnecting to
broken network connections. Which is surprising, given how often Windows
crashes for anything else.
>>> 3. Browse lists suck.
>>
>> Don't do that, then!
>
> How exactly do you do that?
It depends on whether you are using DHCP or manually configured addresses.
For DHCP, just set the NetBIOS node type to 2 (AKA "P-node" or "Peer node") on
your DHCP server. For example, with ISC's dhcpd, add this line:
option netbios-node-type 2;
to your </etc/dhcpd.conf> file. For manually configured nodes, you have to
twiddle options, which are different for every OS. For example, with Samba,
remove the "bcast" item from the "name resolve order" list in </etc/smb.conf>.
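For example, something like this in the [global] section of smb.conf (adjust
the exact list to taste):

    # Resolve names via lmhosts, WINS, and DNS -- no broadcast.
    name resolve order = lmhosts wins host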
--
Ben Scott <[EMAIL PROTECTED]>
Net Technologies, Inc. <http://www.ntisys.com>
Voice: (800)905-3049 x18 Fax: (978)499-7839