In a message dated: Thu, 20 Dec 2001 12:30:00 PST
"Taylor, ForrestX" said:
>Oh, I thought that you were using the auto.net script to automount using
>machine names. What exactly are you doing, and which OS are you using?
No, we're not doing that.
We're using Linux; depending upon the server, it's either 2.2.20,
2.4.14, or 2.4.16. But right now that's sort of irrelevant, since
all I'm looking for is documentation which explains how the symlink
feature works and how to disable it. Where is the 'nosymlink' option
mentioned? I've looked in the man pages for mount, nfs, autofs,
and automount, but haven't seen any mention of 'nosymlink' anywhere.
Is this an undocumented feature?
Anyway, what we're doing is this. We have an nfs server which
exports a bunch of file systems to the world. All desktop clients
mount the file systems via automount. So, though on the server we
have file system names like /home1, /home2, /home3, etc., the desktops
see only /homes, and everyone's home directory "automagically" appears
mounted on /homes/<username>.
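(For context, the maps look roughly like the sketch below -- the map
file name and export paths here are illustrative placeholders, not our
actual configuration:)

```
# /etc/auto.master (illustrative)
/homes  /etc/auto.homes

# /etc/auto.homes -- wildcard map; server name and export path
# are placeholders, not our real layout
*  zaphod:/home1/&
```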
We also run autofs on the NFS server itself so that when people log
into the system directly, they have the same view of things as they
do on their desktop systems. They usually log into this system
directly to do things like large software compilations which would
otherwise saturate the network if done via NFS (or so they say:)
When on the NFS server though, autofs creates a symlink from, say,
/homes/pll to /home1/pll, so if I do a df I see:
$ pwd
/homes/pll
$ df .
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/sda1 26209780 25313816 629688 98% /home1
vs. what I see on my desktop:
$ pwd
/homes/pll
$ df .
Filesystem 1k-blocks Used Available Use% Mounted on
zaphod:/home1/pll 26209780 25319552 623952 98% /homes/pll
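(The server-side behavior is just an ordinary symlink; here's a toy
reproduction under /tmp with made-up paths -- autofs does this
automatically on the real server -- showing df resolving through the
link the same way:)

```shell
# Toy reproduction of the autofs symlink optimization.
# All paths here are made up for illustration.
mkdir -p /tmp/symdemo/home1/pll /tmp/symdemo/homes
ln -sfn /tmp/symdemo/home1/pll /tmp/symdemo/homes/pll

# The "mount" is really a symlink into the local file system:
readlink /tmp/symdemo/homes/pll

# df on the symlinked path reports the underlying local file
# system, not an NFS mount:
df /tmp/symdemo/homes/pll
```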
What I want to do is get the NFS server (zaphod in this case) to
appear exactly as if I'm on my desktop. IOW, I want the NFS server
to truly NFS mount its own file systems.
I know this probably sounds crazy, but there's good reason for doing
this. We're trying to figure out how to handle a clustered NFS
server environment. Currently if one logs into an NFS server which
is clustered *and* is running autofs, they end up in the directory
structure of the actual file system. If the cluster needs to fail
over for whatever reason, you either need to implement the cluster
with a 'forced unmount' capability, or you need to have the file
systems actually be NFS mounted somewhere other than the real,
local mount point.
The 'forced unmount' option is ugly, since it will violently kill all
processes which are preventing that file system from being unmounted,
which could result in the user in question losing data.
Unfortunately with the symlink optimization turned on, this is what
we have to resort to. Or, we could manually NFS mount all local
file systems to a second mount point via the fstab file.
This second option is ugly because it is error-prone, it forces all
file systems to always be mounted twice, and it requires manual
upkeep in 3 locations (2 /etc/fstabs + the automount maps).
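(For illustration, that second option would mean fstab entries along
these lines on the server itself -- the second mount point name is
hypothetical:)

```
# hypothetical /etc/fstab lines on zaphod itself; /nfs/home1 is a
# made-up second mount point, not our real configuration
/dev/sda1      /home1      ext2  defaults  0 0
zaphod:/home1  /nfs/home1  nfs   defaults  0 0
```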
If we could turn off the symlink optimization and have the NFS server
NFS mount its own file systems on demand to another location, we could
get the NFS server to fail over without the user ever noticing. This
is exactly what happens today when a user NFS mounts a file system
onto their desktop system from a clustered NFS server.
Does that help clarify what we're trying to do?
Thanks,
--
Seeya,
Paul
----
God Bless America!
...we don't need to be perfect to be the best around,
and we never stop trying to be better.
Tom Clancy, The Bear and The Dragon