On Tue, 24 Jun 2008, Anton Altaparmakov wrote:

> Ok.  I have for testing purposes done the changes you suggested and am using a
> file map successfully so we probably will not want/need to use a program map
> given the overhead it would impose.  Much easier to regenerate the file map
> for each user when they log in.  We now have:
> 
> /etc/auto.master:
> /.servers     file:/etc/auto.user
> 
> /etc/auto.user:
> * -fstype=autofs file:/var/run/pwfautomount/auto.&
> 
> And /var/run/pwfautomount/auto.<username> is generated when <username> user
> logs in and for my user aia21 contains exactly the same content as before:

This looks like a very sane way to handle your situation.  
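For the archives, the login-time hook boils down to writing one map file per user. Everything below is illustrative only -- the function name, the demo directory, and the placeholder NFS map entry are my inventions, not your real NCP entries:

```shell
#!/bin/sh
# Sketch: regenerate a per-user autofs map at login time.
make_user_map() {
    user="$1"
    mapdir="$2"            # real use: /var/run/pwfautomount
    mkdir -p "$mapdir"
    # Write the same entries the old static map had for this user;
    # the NFS entry below is only a placeholder.
    printf 'home -fstype=nfs server.example.com:/export/home/%s\n' \
        "$user" > "$mapdir/auto.$user"
}

# Demo against a scratch directory.
make_user_map aia21 /tmp/pwfautomount-demo
cat "/tmp/pwfautomount-demo/auto.aia21"
```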

> # Tidy up the user's (auto)mounts.
> (
>        umount /home/$user
>        umount /authcon/$user
>        /sbin/killproc -USR1 /usr/sbin/automount
> ) >/dev/null 2>&1
> 
> And yes, this does expire everybody's non-busy mounts but that is not actually
> a bad thing.  They will get automounted on next use so there is no problem.
> And it has the nice side effect that it will cause broken mounts to expire
> sooner.  (We have a recurring problem where people leaving themselves logged
> in for ages end up with broken connections and NCPfs does not support
> reconnects so this might actually help us.)

So it's a similar deal to CIFS: you need, but don't have, the user's 
plaintext password when the automounter expires the mount and then tries 
to reconnect on the next reference (in the same session).  Bummer.  
At least with CIFS there's the possibility of getting a Kerberos ticket 
valid on Windows at login time, and making it available via rpc.gssd or 
something like that.  I want to experiment with that but haven't 
been able to make time for it yet -- some of our users would really get 
good use out of such a feature.
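The direction I want to try, roughly (an untested sketch, not a working 
recipe -- it assumes the cifs.upcall/request-key plumbing is configured, 
and the server path is made up):

```shell
# Untested sketch: mount CIFS with the Kerberos ticket obtained at login.
# sec=krb5 makes the kernel do SPNEGO/Kerberos via the cifs upcall,
# instead of needing a plaintext password at mount time.
mount -t cifs "//server.example.com/homes/$USER" "/home/$USER" \
    -o sec=krb5,uid=$(id -u "$USER")
```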

It's much better if the filesystem is not mounted when the server or client 
crashes (or some idiot unplugs it).  Automounter expiration is good, when 
feasible for the filesystem type.  It also releases resources promptly, 
without extra sysop scripting that could break.

> I assume someone has attached to such a hung machine with gdb and gotten the
> two (or more) stack traces involved in the dead lock to find where in the code
> this happens?  

More like 130 thread backtraces to report, in my case :-( Yes, I have this 
all automated.  You can take a look at some of the tracebacks in the mailing 
list archive (oink).  The rate of hanging seems to be proportional to the 
square of the rate of mount-expire cycles, caused by a race condition in 
taking a mutex, so it was noticed first on our webservers, which serve 
automounted UserDirs.  Ian Kent and I are making good progress stomping 
this bug (he fixes, I break it again :-).
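For anyone who wants to automate the same thing, the core is just a 
batch-mode gdb attach (a sketch only -- the pid lookup and output path are 
whatever suits your site, not our exact script; it needs ptrace permission 
on the daemon):

```shell
#!/bin/sh
# Sketch: dump a backtrace of every thread in each automount process.
for pid in $(pidof automount); do
    gdb -batch -p "$pid" -ex 'thread apply all bt' \
        > "/tmp/automount-$pid.trace" 2>&1
done
```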

James F. Carter          Voice 310 825 2897    FAX 310 206 6673
UCLA-Mathnet;  6115 MSA; 405 Hilgard Ave.; Los Angeles, CA, USA 90095-1555
Email: [EMAIL PROTECTED]  http://www.math.ucla.edu/~jimc (q.v. for PGP key)

_______________________________________________
autofs mailing list
[email protected]
http://linux.kernel.org/mailman/listinfo/autofs
