Re: [autofs] Autofs dump map option
Hi Ian,

Thanks for the quick help - it works, indeed. Funny thing:

- The original patch I submitted some time ago, which implemented the dump map option for the first time, did not dump indirect maps either (without your patch).
- Your official patch implementing the dump map option *did* list the LDAP indirect maps even without this fix; it just did not print the indirect NIS maps. Why did it work for LDAP maps?

Anyway - it works fine now.

Thanks,
Ondrej

I think this patch should help.

autofs-5.0.6 - fix dumpmaps not reading maps

From: Ian Kent <ra...@themaw.net>

The lookup modules won't read any indirect map entries (other than
those in a file map) unless the browse option is set. In order to list
the entries when the dumpmap option is given the browse option needs
to be set.
---
 CHANGELOG    |    1 +
 lib/master.c |    9 +
 2 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/CHANGELOG b/CHANGELOG
index 884a9ae..946a196 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -3,6 +3,7 @@
 - fix ipv6 name for lookup fix.
 - improve mount location error reporting.
 - fix paged query more results check.
+- fix dumpmaps not reading maps.
 
 28/06/2011 autofs-5.0.6
 ---
diff --git a/lib/master.c b/lib/master.c
index 153a38b..6c89e1d 100644
--- a/lib/master.c
+++ b/lib/master.c
@@ -1283,6 +1283,15 @@ int master_show_mounts(struct master *master)
 		printf("\nMount point: %s\n", ap->path);
 		printf("\nsource(s):\n");
 
+		/*
+		 * Ensure we actually read indirect map entries so we can
+		 * list them. The map reads won't read any indirect map
+		 * entries (other than those in a file map) unless the
+		 * browse option is set.
+		 */
+		if (ap->type == LKP_INDIRECT)
+			ap->flags |= MOUNT_FLAG_GHOST;
+
 		/* Read the map content into the cache */
 		if (lookup_nss_read_map(ap, NULL, now))
 			lookup_prune_cache(ap, now);

The information contained in this e-mail and in any attachments is confidential and is designated solely for the attention of the intended recipient(s).
If you are not an intended recipient, you must not use, disclose, copy, distribute or retain this e-mail or any part thereof. If you have received this e-mail in error, please notify the sender by return e-mail and delete all copies of this e-mail from your computer system(s). Please direct any additional queries to: communicati...@s3group.com. Thank You. Silicon and Software Systems Limited (S3 Group). Registered in Ireland no. 378073. Registered Office: South County Business Park, Leopardstown, Dublin 18

___
autofs mailing list
autofs@linux.kernel.org
http://linux.kernel.org/mailman/listinfo/autofs
[autofs] Autofs dump map option
Hi Ian,

Thanks for looking after bugs #538408 and #704416 for me, but I have one slight problem with it: the automounter won't show any keys in my indirect NIS maps unless I specify BROWSE_MODE=yes. Maybe it is expected behaviour, so I was thinking - can we always force browse mode when dumping maps? Sorry for spotting this too late.

Thanks,
Ondrej
Re: [autofs] autofs misbehaves when DNS RRs returns more ldap servers
On 06.01.2011 08:09, Ian Kent wrote:
> LDAP_URI="ldap://server1 ldap://server2"
> You are supposed to be able to do this.

Ok, I have found the problem. The construction above does work, indeed. The problem is that get_dc_list() is called directly in the while loop in find_server(), where its output is not parsed (normally the LDAP_URI config parameter is parsed fine). I think that to fix it we would need to:

1. call get_dc_list() before the main while loop;
2. fix get_dc_list() so that, rather than strcatting the LDAP URIs into a single string, it returns the plain list, so we do not have to parse it again. This way it can be processed directly in the main while loop.

But I do not know how it would behave if we had something like LDAP_URI="ldap:///something ldap:///something_else". Maybe two nested loops would be better - anyway, I am sure you know where I am pointing now :-)

Ondrej
Re: [autofs] NFSv4 to be a default on RHEL-6
On 08.12.2010 13:36, Ian Kent wrote:
> The default should be determined by mount.nfs(8) since that's what autofs uses to perform mounts.

I see, but it works only if the NFSv4 root export is the same as /. It does not work otherwise. Example: server 'dorado' exports the directory /exports, which is also fsid=0 for NFSv4. There is an (also shared) subdirectory 'ext1' in it. When I do:

cd /net/dorado/exports/ext1

...the export is mounted using NFSv3. Theoretically, if I did:

cd /net/dorado/ext1

...I should get the same thing mounted via NFSv4, right? Unfortunately it does not work. But it should, because:

mount dorado:/ext1 /mnt

works (giving an NFSv4 mount).

Ondrej
[autofs] dump map option support
Hi Ian, Any news about our new dump map option? I just found out that F-13 still does not have it :-( I am also wondering if you have any plans about supporting sssd (just a quick question). Many thanks, Ondrej
Re: [autofs] Automounter to dump maps
Hi Ian,

I have some free time now, so I was thinking about trying to write something on my own here. Have you already started, or can I just pick up the latest source RPM for RHEL?

Thanks,
Ondrej

P.S. Just a matter of interest - why are you using a doubly linked list for the master map? Isn't a dynamic array better? Love dynamic arrays :-)

Ian Kent wrote:

Ondrej Valousek wrote:

> No, that's the right thing to do. I'm still thinking about how to do it, while working on other things. It's a bit hard actually.

I think I understand - the automounter is demand driven, right? So that means that only those indirect maps being used are looked up and loaded. Direct maps are an exception to this - these are always loaded. Anyway - hope you'll eventually find some elegant solution - I appreciate this.

It's a bit more than that, it's the code structure as well. Something that Jeff has complained about before. Anyway, hopefully I'll come up with something.

Ian
Re: [autofs] Automounter to dump maps
> No, that's the right thing to do. I'm still thinking about how to do it, while working on other things. It's a bit hard actually.

I think I understand - the automounter is demand driven, right? So that means that only those indirect maps being used are looked up and loaded. Direct maps are an exception to this - these are always loaded. Anyway - hope you'll eventually find some elegant solution - I appreciate this.

Ondrej
[autofs] Autofs Negative cache
Hi Ian,

Can we expect autofs in RHEL-5 to have a working negative cache someday? Seems like it still does not work in the latest update (5.4):

Sep 14 09:14:57 dorado_v1 automount[16281]: handle_packet_missing_indirect: token 132, name test, request pid 16183
Sep 14 09:14:57 dorado_v1 automount[16281]: attempting to mount entry /proj/test
Sep 14 09:14:57 dorado_v1 automount[16281]: file map not found
Sep 14 09:14:57 dorado_v1 automount[16281]: lookup_mount: lookup(yp): looking up test
Sep 14 09:14:57 dorado_v1 automount[16281]: ioctl_send_fail: token = 132
Sep 14 09:14:57 dorado_v1 automount[16281]: failed to mount /proj/test
Sep 14 09:14:58 dorado_v1 automount[16281]: handle_packet: type = 3
Sep 14 09:14:58 dorado_v1 automount[16281]: handle_packet_missing_indirect: token 133, name test, request pid 16183
Sep 14 09:14:58 dorado_v1 automount[16281]: attempting to mount entry /proj/test
Sep 14 09:14:58 dorado_v1 automount[16281]: file map not found
Sep 14 09:14:58 dorado_v1 automount[16281]: lookup_mount: lookup(yp): looking up test
Sep 14 09:14:58 dorado_v1 automount[16281]: ioctl_send_fail: token = 133
Sep 14 09:14:58 dorado_v1 automount[16281]: handle_packet: type = 3
Sep 14 09:14:58 dorado_v1 automount[16281]: handle_packet_missing_indirect: token 134, name test, request pid 16183
Sep 14 09:14:58 dorado_v1 automount[16281]: failed to mount /proj/test
Sep 14 09:14:58 dorado_v1 automount[16281]: attempting to mount entry /proj/test
Sep 14 09:14:58 dorado_v1 automount[16281]: file map not found
Sep 14 09:14:58 dorado_v1 automount[16281]: lookup_mount: lookup(yp): looking up test
Sep 14 09:14:58 dorado_v1 automount[16281]: ioctl_send_fail: token = 134
Sep 14 09:14:58 dorado_v1 automount[16281]: failed to mount /proj/test
Sep 14 09:14:58 dorado_v1 automount[16281]: handle_packet: type = 3
Sep 14 09:14:58 dorado_v1 automount[16281]: handle_packet_missing_indirect: token 135, name test, request pid 16183

Thanks,
Ondrej
Re: [autofs] Autofs Negative cache
> You'll need to be a bit more specific about what is wrong.

Ok, my fault - I thought autofs would say something like "negative cache hit" if the key is found in the negative cache. After more detailed log inspection I could see that the actual mount requests are suppressed by the negative caching. Sorry for the noise.

Ondrej
Re: [autofs] Autofs 5.0.1-0rc2.102 failing to query LDAP (Windows 2008 AD)
Hi Jack,

> I've been trying to avoid GSSAPI, because I believe it requires the machine to be a fully paid-up member of the AD. In my environment that's very tricky to impossible[1].

Ok, you might also want to try simple authentication or even anonymous access to AD - that should work, too (and would also be easier to deploy in your diskless environment) - I just did not cover it in my blog as it is insecure. The only thing I know is that authentication using SASL/DIGEST-MD5 does not work, because of the bug I mentioned.

> 1. Some of the longer lines in the quoted files appear truncated. They cut-n-paste fine though.
> 2. I've found that removing /var/cache/samba/winbind* seems to work for cache clearing.
> 3. You probably mean getent passwd (instead of password), and for some reason in my case it still doesn't return the AD users (though wbinfo -u does). The users can still authenticate though.

Thanks for the hints - I have updated the blog (I know it truncates long lines; unfortunately there is nothing I can do about it).

Cheers,
Ondrej
Re: [autofs] Autofs 5.0.1-0rc2.102 failing to query LDAP (Windows 2008 AD)
There is no problem with autofs - the real problem is that Windows does not follow the RFCs in subsequent authentication (which autofs is using). I have reported the problem to Microsoft and they agreed (an internal bug report was generated). The workaround is to use GSSAPI authentication instead - more at ondarnfs.blogspot.com

Ondrej

Jack Challen wrote:

Hello,

My problem appears to be very similar to: http://www.opensubscriber.com/message/autofs@linux.kernel.org/11281928.html

I'm trying to make autofs get its information from LDAP (stored on a Windows 2008 AD). I believe autofs is failing to authenticate properly. It appears that the sasl_log_func function is doing the authentication steps in the wrong order (based on reading of the log files). (FWIW, I've made this work storing info in OpenLDAP, and doing anonymous binds, but I plan to use AD's LDAP functionality.)

Here's what works (in that it gets some information):

ldapsearch -h addns -Y DIGEST-MD5 -U ldap.query -w secret -b cn=auto.master,dc=cm,dc=domain,dc=com

When I configure /etc/autofs_ldap_auth.conf to contain the following:

<autofs_ldap_sasl_conf authtype="DIGEST-MD5" authrequired="yes" user="ldap.query" secret="Secret" usetls="no" tlsrequired="no" />

I get the following logs:

Sep 2 17:42:10 rhelbase automount[14835]: autofs stopped
Sep 2 17:42:10 rhelbase automount[14866]: Starting automounter version 5.0.1-0.rc2.102, master map ldap://addns/
Sep 2 17:42:10 rhelbase automount[14866]: using kernel protocol version 5.00
Sep 2 17:42:10 rhelbase automount[14866]: lookup_nss_read_master: reading master ldap //addns/
Sep 2 17:42:10 rhelbase automount[14866]: parse_server_string: lookup(ldap): Attempting to parse LDAP information from string "ldap://addns/".
Sep 2 17:42:10 rhelbase automount[14866]: parse_server_string: lookup(ldap): mapname
Sep 2 17:42:10 rhelbase automount[14866]: parse_ldap_config: lookup(ldap): ldap authentication configured with the following options:
Sep 2 17:42:10 rhelbase automount[14866]: parse_ldap_config: lookup(ldap): use_tls: 1, tls_required: 0, auth_required: 2, sasl_mech: DIGEST-MD5
Sep 2 17:42:10 rhelbase automount[14866]: parse_ldap_config: lookup(ldap): user: ldap.query, secret: specified, client principal: (null) credential cache: (null)
Sep 2 17:42:10 rhelbase automount[14866]: sasl_bind_mech: Attempting sasl bind with mechanism DIGEST-MD5
Sep 2 17:42:10 rhelbase automount[14866]: sasl_log_func: DIGEST-MD5 client step 2
Sep 2 17:42:10 rhelbase automount[14866]: getuser_func: called with context (nil), id 16386.
Sep 2 17:42:10 rhelbase automount[14866]: getuser_func: called with context (nil), id 16385.
Sep 2 17:42:10 rhelbase automount[14866]: getpass_func: context (nil), id 16388
Sep 2 17:42:10 rhelbase automount[14866]: sasl_log_func: DIGEST-MD5 client step 3
Sep 2 17:42:10 rhelbase automount[14866]: sasl_bind_mech: sasl bind with mechanism DIGEST-MD5 succeeded
Sep 2 17:42:10 rhelbase automount[14866]: do_bind: lookup(ldap): auth_required: 2, sasl_mech DIGEST-MD5
Sep 2 17:42:10 rhelbase automount[14866]: sasl_bind_mech: Attempting sasl bind with mechanism DIGEST-MD5
Sep 2 17:42:10 rhelbase automount[14866]: sasl_log_func: DIGEST-MD5 client step 1
Sep 2 17:42:10 rhelbase automount[14866]: getuser_func: called with context (nil), id 16386.
Sep 2 17:42:10 rhelbase automount[14866]: getuser_func: called with context (nil), id 16385.

The bit that makes me wonder is that the DIGEST-MD5 client steps go in the order 2,3,2,1. It also says the bind succeeded at one point, but appears to carry on.

If I use a deliberately wrong user, I get this:

Sep 2 17:41:10 rhelbase automount[14771]: autofs stopped
Sep 2 17:41:10 rhelbase automount[14803]: Starting automounter version 5.0.1-0.rc2.102, master map ldap://addns/
Sep 2 17:41:10 rhelbase automount[14803]: using kernel protocol version 5.00
Sep 2 17:41:10 rhelbase automount[14803]: lookup_nss_read_master: reading master ldap //addns/
Sep 2 17:41:10 rhelbase automount[14803]: parse_server_string: lookup(ldap): Attempting to parse LDAP information from string "ldap://addns/".
Sep 2 17:41:10 rhelbase automount[14803]: parse_server_string: lookup(ldap): mapname
Sep 2 17:41:10 rhelbase automount[14803]: parse_ldap_config: lookup(ldap): ldap authentication configured with the following options:
Sep 2 17:41:10 rhelbase automount[14803]: parse_ldap_config: lookup(ldap): use_tls: 1, tls_required: 0, auth_required: 2, sasl_mech: DIGEST-MD5
Sep 2 17:41:10 rhelbase automount[14803]: parse_ldap_config: lookup(ldap): user: 1ldap.query, secret: specified, client principal: (null) credential cache: (null)
Sep 2 17:41:10 rhelbase automount[14803]: sasl_bind_mech: Attempting sasl bind with mechanism DIGEST-MD5
Sep 2 17:41:10 rhelbase automount[14803]: sasl_log_func: DIGEST-MD5 client step 6
Sep 2 17:41:10 rhelbase automount[14803]: getuser_func: called
[autofs] Spam: Autofs Active Directory integration
http://ondarnfs.blogspot.com/

Hopefully I have written it in a way that is understandable to everyone :-)
Re: [autofs] ldap and reloading
Talking about LDAP map reloading - maybe it is worth mentioning that if you configure the automounter using the recently added DNS SRV support, the automounter reloads itself as per the zone TTL in DNS (and that includes the master map, too) - right, Ian?

Ondrej

Ian Kent wrote:

Stef Bon wrote:

Ian Kent wrote:

Stef Bon wrote:

Hello, when using static file maps, or even executable maps, you have to reload the daemon when a map changes to make those changes effective. How does this work when all the data is in LDAP? Does the automounter still create a sort of snapshot of the map in memory, so that a reload is necessary to reread the data provided by LDAP? LDAP is well known for not needing this. For example with Postfix: it can use static maps, and when something changes, it has to be restarted. But when the lookup data (for local users, for example) is in LDAP, this is not necessary.

That's not quite right. If you use the browse option then the entire map must be read in at start. If not, then autofs remembers entries that it has seen and attempts to check their currency at lookup. Each lookup should check if the entry is still up to date and attempt to work out if the map has changed (although it's not quite as simple as that). If we think the map has changed, a re-load should be triggered internally. Following the (or any) re-load there is a cleanup, which is probably why it looks like map changes aren't seen. Any changes in multi-mount entries cannot be seen until after they have expired away, because of the need to maintain the context of the entry over the duration of the mount. Direct maps don't quite do this properly, partly because of the way they work and partly because of an issue I haven't addressed yet. Clearly, with program maps, we need to rely on the re-load to a large extent, but a best effort is made to work out if the entry is stale; however, we just don't have anything really to use to establish this, so a re-load is needed to clean them up.

Ok Ian, maybe you're right. You probably are; I cannot discuss this with you, you're the expert. You say that it may look like map changes aren't seen. How, as a developer of a construction making use of autofs, can I see/check that it's reloaded? Are there any indications, or triggers, which always lead to this internal reloading? BTW, I also use reloading when a map/mountpoint is added or removed, so not only when a map changes.

OK, but I think we're not talking about the same thing. I always think about map entries when this comes up, but I suspect you are thinking of master map entries, which are different. There are no lookups for the master map and so there is no opportunity to check if the map has changed. To see changes to the master map a re-load is needed.

I know multi-mount entries are difficult, and are handled as one. But how about LDAP? I assume you're talking about how maps are handled here in general, but is this different/the same with LDAP?

As far as the master map goes there isn't any real difference which source is used. As far as map entries go, LDAP is probably the worst since it has no standard, consistent information about when it was last updated, regardless of whether those entries are the master map or map entries themselves.

Ian
[autofs] autofs maps stored in AD
Hi list (Ian, all),

I am using the automounter with the maps stored in Active Directory (no joy with SASL/DIGEST-MD5, filed a bug with M$, but GSSAPI seems to work OK). Everything works like a charm using the RFC2307 schema (already present in Win2008). The only problem/question I have is regarding the possibility of using DNS SRV records to locate the Domain Controller / LDAP server. Currently, I have to hard-code the name of the DC in /etc/openldap/ldap.conf (or /etc/sysconfig/autofs), and it would be just nice if autofs could use the SRV records the same way the nss_ldap library does when no LDAP server is specified.

Many thanks,
Ondrej
Re: [autofs] krb5 required when building with sasl support
SASL could (theoretically) be built without GSSAPI support, and in that case it would not require Kerberos. But the facts are:

- SASL support in autofs evidently cannot be compiled without GSSAPI;
- these days SASL is often treated as a synonym for SASL/GSSAPI, so it probably does not make any sense to exclude GSSAPI once we include SASL.

Ondrej

Matthias Koenig wrote:

Guillaume Rousse <guillaume.rou...@inria.fr> writes:

More seriously, I think the proper solution would rather be fixing underlinking in SASL (http://wiki.mandriva.com/en/Underlinking).

Hmm, are you really sure that libsasl depends on Kerberos symbols? In that case you would be right. I assumed that they are independent from each other, and after a quick grep I couldn't find any krb5_* symbols in libsasl. At least the lookup_ldap.h header from autofs explicitly includes the krb5.h header.

Matthias
Re: [autofs] krb5 required when building with sasl support
+1 - I have spotted this, too.
O.

Matthias Koenig wrote:

Hi, it seems that it is necessary to link also against krb5 when building the LDAP lookup module with SASL support. This is currently missing in configure.in; it only links with -lsasl. The following patch should fix this.

Regards,
Matthias

Index: autofs-5.0.4/configure.in
===
--- autofs-5.0.4.orig/configure.in	2008-11-04 02:36:48.0 +0100
+++ autofs-5.0.4/configure.in	2009-02-04 17:30:01.0 +0100
@@ -256,8 +256,13 @@ AC_ARG_WITH(sasl,
 if test -z $HAVE_SASL -o $HAVE_SASL != 0 -a $HAVE_LIBXML == 1
 then
 	HAVE_SASL=0
-	AC_CHECK_LIB(sasl2, sasl_client_start, HAVE_SASL=1 LIBSASL="$LIBSASL -lsasl2", , -lsasl2 $LIBS)
+	HAVE_KRB5=0
+	AC_CHECK_LIB(sasl2, sasl_client_start, HAVE_SASL=1,, -lsasl2 $LIBS)
+	AC_CHECK_LIB(krb5, krb5_mk_req_extended, HAVE_KRB5=1,, $LIBS)
 	if test $HAVE_SASL == 1; then
+		test $HAVE_KRB5 != 1 \
+			&& AC_MSG_FAILURE([You need krb5 libs to build with SASL support])
+		LIBSASL="$LIBSASL -lsasl2 -lkrb5"
 		AC_DEFINE(WITH_SASL,1, [Define if using SASL authentication with the LDAP module])
 	fi
Re: [autofs] auto.master in ldap + simple bind
There is something rotten in lookup_ldap.c but I cannot put my finger on it. Things go bad in the lookup_init() function:

 5  4.389459 192.168.60.171 -> 192.168.60.172 LDAP bindRequest(1) "<ROOT>" sasl
 6  4.390383 192.168.60.172 -> 192.168.60.171 LDAP bindResponse(1) saslBindInProgress
 7  4.390396 192.168.60.171 -> 192.168.60.172 TCP 39957 > ldap [ACK] Seq=27 Ack=218 Win=6912 Len=0 TSV=17330479 TSER=592592279
 8  4.390846 192.168.60.171 -> 192.168.60.172 LDAP bindRequest(2) "<ROOT>" sasl
 9  4.392733 192.168.60.172 -> 192.168.60.171 LDAP bindResponse(2) success
10  4.393095 192.168.60.171 -> 192.168.60.172 LDAP bindRequest(3) "<ROOT>" sasl
11  4.394062 192.168.60.172 -> 192.168.60.171 LDAP bindResponse(3) invalidCredentials (00090313: LdapErr: DSID-0C0904D1, comment: AcceptSecurityContext error, data 0, v1771)
12  4.394188 192.168.60.171 -> 192.168.60.172 LDAP unbindRequest(4)

Packets 8 and 9: we connect to the server to verify the authentication mechanism, but then we should drop the connection - line 1286, the call to ldap_unbind_connection(). But this never happens according to the tcpdump. Instead, another bind follows and fails. The questions now are:

1. Why is there no unbindRequest packet? In general, I see 3 bind requests but only one unbindRequest.
2. Why does the second bindRequest fail when the first one succeeds?

I do not want to be too picky, but Windows Server 2008 is the first server OS from MS to support the RFC2307 LDAP schema, so I believe we should be able to connect to it. I have opened case #1887566 with Red Hat regarding this.

Ondrej
Re: [autofs] auto.master in ldap + simple bind
Ian,

To recap: Win2k8 comes with RFC2307 compliance, so I wanted to try to connect autofs (all maps) to it. I did not want to play with GSSAPI - it is too complicated. Nor did I want a simple anonymous bind - too insecure. So, seeing that Win2k8 supports SASL/DIGEST-MD5 (verified with ldapsearch that it works) and that autofs5 supports it too, I wanted to use it. Unfortunately it is broken on the autofs side (see my previous post).

Ondrej

> Have you tried GSSAPI, doesn't Windows require Kerberos auth by default? Are you sure that the Windows server is allowing simple binds (that was what you wanted right)?
> Ian
Re: [autofs] auto.master in ldap + simple bind
> What is the actual SASL user dn? Does your ldapsearch work without the -b option?

With SASL we do not talk about a user DN; we talk about an authentication ID for the SASL bind instead. This is an example of an ldapsearch that works for me against Win2k8:

ldapsearch -H ldap://192.168.60.172 -Y DIGEST-MD5 -U ldapproxy -w 1234proxy$ -b cn=praguetest,cn=prague,dc=ad,dc=s3group,dc=cz objectClass=* cn objectClass nisMapName nisMapEntry
Re: [autofs] auto.master in ldap + simple bind
I do not know what you are after. The -b option is of no significance for the authentication process. Anyway - it works without it, too (just tried).

Ondrej

> I know, but what happens to the authentication attempt if you do not specify the -b option?
Re: [autofs] auto.master in ldap + simple bind
> Show us the logs.

Hi Ian,

I did some digging around and found this:

1. autofs 5 as shipped with RHEL 5.2 does not seem to support simple bind (i.e. something like ldapsearch -x ...) to an LDAP server that does not allow anonymous access - like Active Directory (note for the record: autofs 4 only supports anonymous LDAP servers).
2. The only other thing autofs 5 can do is various SASL authentication schemes (GSSAPI, PLAIN, ...).
3. Active Directory can do SASL, and the common mechanisms that both can do are GSSAPI and DIGEST-MD5.
4. I tried with DIGEST-MD5:

[r...@dorado_v1 etc]# cat /etc/sysconfig/autofs
LDAP_URI="ldap://WIN-UG29HR9IEGY"
SEARCH_BASE="cn=praguetest,cn=prague,dc=ad,dc=s3group,dc=cz"

[r...@dorado_v1 etc]# cat /etc/autofs_ldap_auth.conf
<autofs_ldap_sasl_conf usetls="no" tlsrequired="no" authrequired="yes" authtype="DIGEST-MD5" user="ldapproxy" secret="1234proxy$" />

Verified its functionality with ldapsearch:

[r...@dorado_v1 etc]# ldapsearch -H ldap://WIN-UG29HR9IEGY -Y DIGEST-MD5 -U ldapproxy -w 1234proxy$ -b cn=praguetest,cn=prague,dc=ad,dc=s3group,dc=cz objectClass=nisMap
SASL/DIGEST-MD5 authentication started
SASL username: ldapproxy
SASL SSF: 128
SASL installing layers
# extended LDIF
#
# LDAPv3
# base cn=praguetest,cn=prague,dc=ad,dc=s3group,dc=cz with scope subtree
# filter: objectClass=nisMap
# requesting: ALL
#

# auto.master, praguetest, prague, ad.s3group.cz
dn: CN=auto.master,CN=praguetest,CN=prague,DC=ad,DC=s3group,DC=cz
objectClass: top
objectClass: nisMap
cn: auto.master
distinguishedName: CN=auto.master,CN=praguetest,CN=prague,DC=ad,DC=s3group,DC=cz
instanceType: 4
whenCreated: 20090116124656.0Z
whenChanged: 20090116124656.0Z
uSNCreated: 20610
uSNChanged: 20610
showInAdvancedViewOnly: TRUE
name: auto.master
objectGUID:: 2T1wg8oG70G3VpHKlieoWQ==
objectCategory: CN=NisMap,CN=Schema,CN=Configuration,DC=ad,DC=s3group,DC=cz
dSCorePropagationData: 1601010100.0Z
nisMapName: auto.master

So this should work with the automounter, OK? But it does not:

Jan 19 11:55:41 dorado_v1 automount[22886]: Starting automounter version 5.0.1-0.rc2.88.el5_2.1, master map auto.master
Jan 19 11:55:41 dorado_v1 automount[22886]: using kernel protocol version 5.00
Jan 19 11:55:41 dorado_v1 automount[22886]: lookup_nss_read_master: reading master files auto.master
Jan 19 11:55:41 dorado_v1 automount[22886]: parse_init: parse(sun): init gathered global options: (null)
Jan 19 11:55:41 dorado_v1 automount[22886]: lookup_read_master: lookup(file): read entry /misc
Jan 19 11:55:41 dorado_v1 automount[22886]: lookup_read_master: lookup(file): read entry /net
Jan 19 11:55:41 dorado_v1 automount[22886]: lookup_read_master: lookup(file): read entry +auto.master
Jan 19 11:55:41 dorado_v1 automount[22886]: lookup_nss_read_master: reading master files auto.master
Jan 19 11:55:41 dorado_v1 automount[22886]: parse_init: parse(sun): init gathered global options: (null)
Jan 19 11:55:41 dorado_v1 automount[22886]: lookup_nss_read_master: reading master ldap auto.master
Jan 19 11:55:41 dorado_v1 automount[22886]: parse_server_string: lookup(ldap): Attempting to parse LDAP information from string auto.master.
Jan 19 11:55:41 dorado_v1 automount[22886]: parse_server_string: lookup(ldap): mapname auto.master
Jan 19 11:55:41 dorado_v1 automount[22886]: parse_ldap_config: lookup(ldap): ldap authentication configured with the following options:
Jan 19 11:55:41 dorado_v1 automount[22886]: parse_ldap_config: lookup(ldap): use_tls: 0, tls_required: 0, auth_required: 2, sasl_mech: DIGEST-MD5
Jan 19 11:55:41 dorado_v1 automount[22886]: parse_ldap_config: lookup(ldap): user: ldapproxy, secret: specified, client principal: (null) credential cache: (null)
Jan 19 11:55:41 dorado_v1 automount[22886]: find_server: trying server ldap://WIN-UG29HR9IEGY
Jan 19 11:55:41 dorado_v1 automount[22886]: sasl_bind_mech: Attempting sasl bind with mechanism DIGEST-MD5
Jan 19 11:55:41 dorado_v1 automount[22886]: sasl_log_func: DIGEST-MD5 client step 2
Jan 19 11:55:41 dorado_v1 automount[22886]: getuser_func: called with context (nil), id 16386.
Jan 19 11:55:41 dorado_v1 automount[22886]: getuser_func: called with context (nil), id 16385.
Jan 19 11:55:41 dorado_v1 automount[22886]: getpass_func: context (nil), id 16388
Jan 19 11:55:41 dorado_v1 automount[22886]: sasl_log_func: DIGEST-MD5 client step 3
Jan 19 11:55:41 dorado_v1 automount[22886]: sasl_bind_mech: sasl bind with mechanism DIGEST-MD5 succeeded
Jan 19 11:55:41 dorado_v1 automount[22886]: do_bind: lookup(ldap): auth_required: 2, sasl_mech DIGEST-MD5
Jan 19 11:55:41 dorado_v1 automount[22886]: sasl_bind_mech: Attempting sasl bind with mechanism DIGEST-MD5
Jan 19 11:55:41 dorado_v1 automount[22886]: sasl_log_func: DIGEST-MD5 client step 1
Jan 19 11:55:41 dorado_v1 automount[22886]: getuser_func: called with context (nil), id 16386.
Jan 19 11:55:41 dorado_v1 automount[22886]: getuser_func: called with context (nil), id 16385.
Jan 19 11:55:41
[autofs] auto.master in ldap + simple bind
Hi all,

I am trying to configure autofs (RHEL 5.2) to gather all maps from Active Directory using a simple bind with a proxy user. I have already managed to configure the PADL nss_ldap module to do so using this:

host 192.168.60.172
base dc=ad,dc=s3group,dc=cz
binddn cn=ldapproxy,cn=Users,dc=ad,dc=s3group,dc=cz
bindpw password

Now I am wondering how to do the same with the automounter. Does anyone know? I see lots of options for configuring TLS or SASL, but I just need a simple bind.

Many thanks,
Ondrej
___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs
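For anyone finding this in the archives: autofs 5 reads its LDAP authentication settings from /etc/autofs_ldap_auth.conf. Whether a plain simple bind is supported depends on the autofs release; later autofs 5 releases document an authrequired="simple" setting in autofs_ldap_auth.conf(5), but the 5.0.1-based package on RHEL 5.2 may predate it. The fragment below is only a sketch of the kind of configuration involved, reusing the DN and secret from the nss_ldap example above; verify the attribute names against the man page on your system.

```xml
<?xml version="1.0" ?>
<!-- /etc/autofs_ldap_auth.conf - illustrative sketch only.
     authrequired="simple" exists in later autofs 5 releases; older
     ones may only offer yes/no (SASL). Check autofs_ldap_auth.conf(5). -->
<autofs_ldap_sasl_conf
        usetls="no"
        tlsrequired="no"
        authrequired="simple"
        user="cn=ldapproxy,cn=Users,dc=ad,dc=s3group,dc=cz"
        secret="password"
/>
```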
Re: [autofs] Help on nfs4 interoperability between Solaris 10 and Linux
BTW: I would not recommend using NFSv4 with RHEL4 on a production system. It is unstable and I have easily managed to crash the system while using it. Go for RHEL5. Ondrej Ian Kent wrote: On Wed, 2008-12-17 at 10:42 -0500, Lohin, Daniel wrote: We are using Red Hat 4.5 with autofs 4.1.3. We are in a mixed Solaris/Linux environment. We have an automountMapName that needs to support both NFS 3 and NFS4. To complicate things, the solution must work on both Solaris and Linux. Here is what I have: Have you tried looking at a debug log of what's happening? See http://people.redhat.com/jmoyer for information about setting debug logging. AUTO_MASTER: dn: Automountkey=/-,automountMapName=auto_master,dc=foo,dc=bar automountInformation: auto_direct automountKey: /- objectClass: top objectClass: automount dn: automountkey=/.hidden,automountMapName=auto_master,dc=foo,dc=bar automountInformation: auto_hidden automountKey: /.hidden objectClass: top objectClass: automount Auto_hidden: dn: automountkey=hiddenNfs4,automountMapName=auto_hidden,dc=foo,dc=bar automountInformation: -fstype=nfs4 server:/ automountKey: hiddenNfs4 objectClass: top objectClass: automount dn: automountkey=*,automountMapName=auto_hidden,dc=foo,dc=bar automountInformation: server2,server3,server4,server5:/vol/ automountKey: hiddenMain objectClass: top objectClass: automount From above in * of auto_hidden this must be nfs3 as that is all that is supported by the servers in that automount. In the hiddenNfs4 automountkey this must be nfs4 as it has to cross a firewall. The * is working perfectly. The problem is the hiddenNfs4 automount map. 
I can get it to work with Solaris with the following:

dn: automountkey=hiddenNfs4,automountMapName=auto_hidden,dc=foo,dc=bar
automountInformation: -vers=4 server:/
automountKey: hiddenNfs4
objectClass: top
objectClass: automount

Linux will work with this:

dn: automountkey=hiddenNfs4,automountMapName=auto_hidden,dc=foo,dc=bar
automountInformation: -fstype=nfs4 server:/
automountKey: hiddenNfs4
objectClass: top
objectClass: automount

Solaris will work with this, but it fails for Linux:

dn: automountkey=hiddenNfs4,automountMapName=auto_hidden,dc=foo,dc=bar
automountInformation: -fstype=nfs4,-vers=4 server:/
automountKey: hiddenNfs4
objectClass: top
objectClass: automount

Solaris will also work with this:

dn: automountkey=hiddenNfs4,automountMapName=auto_hidden,dc=foo,dc=bar
automountInformation: server:/
automountKey: hiddenNfs4
objectClass: top
objectClass: automount

Solaris looks like it tries NFSv4 and then, if that fails, continues to try 3, 2, etc. What I need is either an automount map entry that works with both, or a way to have Linux mirror Solaris by trying NFSv4 first without requiring any options.

I don't know what's going on from this information but, depending on mount(8), one or more of these should work. Look at the debug log to find out what is failing. Linux mount(8) defaults to v3 ... so you can't make Linux work like Solaris in this case.

Ian
___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs
Re: [autofs] automounter segfaults when using negative cache?
I thought all the negative caching is done in user space, but apparently the kernel has some influence here. Any explanation?

The negative caching is done in userspace. I'm not sure I understand your question. Cheers, Jeff

Hi Jeff,

Ok, I was thinking this way: I experience this only on a U7 system running a U4 kernel - on the same U7 system running the U7 kernel I cannot replicate it. If the negative caching is done in user space and there is a bug there which causes autofs to segfault, how is it possible that it won't segfault on the U7 kernel? I do not know. I am still trying to find out if there is any connection to the problem with the automounter hanging that I reported earlier:

- on a U7 system running the U7 kernel, autofs does not crash, but hangs (after a few days).
- on a U7 system running the U4 kernel, the autofs child processes crash regularly, but otherwise it works fine (I mean at least the parent autofs process continues running...).

You know what I am trying to say?

Thanks,
Ondrej
___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs
[autofs] Low priority question: negative cache in autofs5
Probably for Ian: I am just looking at the autofs5 (from RHEL5) source. I have an indirect nis map:

auto.master:
/appli auto.appli
/proj auto.proj

Now, auto.proj contains a wildcard entry (* nfsserver:/share/) but auto.appli does not.

1. Every time I try to enter (shortly after each other) /proj/nonexistent_directory, I get this:

Dec 5 12:10:45 ara automount[3101]: rmdir_path: lstat of /proj/nonexistent_directory failed

... why didn't the negative cache save us the expensive call to lstat?

2. Every time I try to enter (shortly after each other) /appli/nonexistent_directory, I get this:

Dec 5 12:11:41 ara automount[3101]: lookup_mount: lookup(yp): key nonexistent not found in map

... again, it looks like the nis daemon was consulted where the negative cache should have stepped in. Maybe I do not understand the concept of negative caching well.

Many thanks,
Ondrej
___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs
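For readers unfamiliar with the concept being discussed, here is a toy sketch (in Python, not autofs's actual C implementation) of what negative caching of failed map lookups means: a key that failed to resolve is remembered for a while, so repeated lookups fail immediately instead of consulting the map source (NIS/LDAP) again. The class and function names here are invented for illustration.

```python
import time

class NegativeCache:
    """Toy sketch of automounter-style negative caching (not autofs code).

    Failed map lookups are remembered for `ttl` seconds so repeated
    attempts on a nonexistent key fail fast without hitting the map
    source again.
    """

    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._failed = {}  # key -> expiry timestamp

    def is_negative(self, key, now=None):
        now = time.time() if now is None else now
        expiry = self._failed.get(key)
        if expiry is None:
            return False
        if now >= expiry:          # entry expired; forget it
            del self._failed[key]
            return False
        return True

    def add(self, key, now=None):
        now = time.time() if now is None else now
        self._failed[key] = now + self.ttl


def lookup(cache, real_map, key, now=None):
    """Return the map entry, or None, consulting the negative cache first."""
    if cache.is_negative(key, now):
        return None                # fast fail, map source not consulted
    entry = real_map.get(key)
    if entry is None:
        cache.add(key, now)        # remember the failure
    return entry
```

The bug discussed in this thread is exactly the case where the second branch keeps being taken: the failure is never remembered, so every access to a nonexistent key goes back to the map source.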
Re: [autofs] Low priority question: negative cache in autofs5
Hi Ian,

Thanks for the quick reply. And what is the answer to the first question please (regarding the lstat and the wildcard entry)? :-) It is not the same case, I guess.

Many thanks,
Ondrej

Ian Kent wrote:
On Fri, 2008-12-05 at 12:30 +0100, Ondrej Valousek wrote:
> Probably for Ian: I am just looking at the autofs5 (from RHEL5) source. I have an indirect nis map:
> auto.master:
> /appli auto.appli
> /proj auto.proj
> Now, auto.proj contains a wildcard entry (* nfsserver:/share/) but auto.appli does not.
> 1. Every time I try to enter (shortly after each other) /proj/nonexistent_directory, I get this:
> Dec 5 12:10:45 ara automount[3101]: rmdir_path: lstat of /proj/nonexistent_directory failed
> ... why didn't the negative cache save us the expensive call to lstat?
> 2. Every time I try to enter (shortly after each other) /appli/nonexistent_directory, I get this:
> Dec 5 12:11:41 ara automount[3101]: lookup_mount: lookup(yp): key nonexistent not found in map
> ... again, it looks like the nis daemon was consulted where the negative cache should have stepped in.

Known bug with v5 handling of negative caching of non-existent keys. I'm working on it but it is proving a little bit tricky because of possible multiple map sources listed in nsswitch.conf (at least that's where I'm currently at).

> Maybe I do not understand the concept of negative caching well.

No .. you got it right .. I missed it.

Another thing: there are quite a few changes going into RHEL-5.3 autofs which will make their way into RHEL-4.8 in due course. Hopefully you will not get caught by them, but it is worth logging support calls for issues in case you are seeing a known issue.

Ian
___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs
Re: [autofs] Low priority question: negative cache in autofs5
The negative cache, working properly, will return a fail immediately, so you shouldn't see this. But then it shouldn't happen for the first case either. There are quite a few changes scheduled for the next update and I haven't seen this for a while now, so it's a bit hard to get excited about it.

Ok, I understand - in summary:
#1 problem should [hopefully] go away with 5.3
#2 problem is a known bug being worked on

Thanks! O.
___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs
[autofs] automounter segfaults when using negative cache?
Interesting thing: I have 2 identical login01 and login02 machines (Rhel4, full updates). login01 is running old kernel (2.6.9-42.ELsmp), login02 is running the latest one (2.6.9-78.0.8.ELsmp). Both have autofs debug enabled and are running the latest autofs (autofs-4.1.3-234). Now: 1) login01: [EMAIL PROTECTED] ondrejv]# cat /var/log/debug.log* | grep Dec 4 09:15:09 Dec 4 09:15:09 login01 automount[3939]: handle_packet: type = 0 Dec 4 09:15:09 login01 automount[3939]: handle_packet_missing: token 3533, name .raw_data Dec 4 09:15:09 login01 automount[3939]: attempting to mount entry /proj/.raw_data Dec 4 09:15:09 login01 automount[1649]: lookup(yp): looking up .raw_data Dec 4 09:15:09 login01 automount[3939]: mt-key set to .raw_data Dec 4 09:15:09 login01 kernel: automount[1649]: segfault at rip 002a95916be0 rsp 007fbfffd5c8 error 4 Dec 4 09:15:09 login01 automount[3939]: handle_child: got pid 1649, sig 1 (11), stat 0 Dec 4 09:15:09 login01 automount[3939]: sig_child: found pending iop pid 1649: signalled 1 (sig 11), exit status 0 Dec 4 09:15:09 login01 automount[3939]: send_fail: token=3533 Dec 4 09:15:09 login01 automount[3939]: handle_packet: type = 0 Dec 4 09:15:09 login01 automount[3939]: handle_packet_missing: token 3534, name .raw_data Dec 4 09:15:09 login01 automount[3939]: attempting to mount entry /proj/.raw_data Dec 4 09:15:09 login01 automount[1650]: lookup(yp): looking up .raw_data Dec 4 09:15:09 login01 automount[3939]: mt-key set to .raw_data Dec 4 09:15:09 login01 kernel: automount[1650]: segfault at rip 002a95916be0 rsp 007fbfffd5c8 error 4 Note that .raw_data map does not exist in the yp map so it should go into the negative cache - which happens correctly on login02: ov 28 23:27:43 login01 automount[4201]: attempting to mount entry /proj/.raw_data Nov 28 23:27:43 login01 automount[13394]: lookup(yp): looking up .raw_data Nov 28 23:27:43 login01 automount[4201]: mt-key set to .raw_data Nov 28 23:27:43 login01 automount[13394]: lookup(yp): key 
.raw_data not found in map. Nov 28 23:27:43 login01 automount[13394]: failed to mount /proj/.raw_data Nov 28 23:27:43 login01 automount[13394]: umount_multi: path=/proj/.raw_data incl=1 Nov 28 23:27:43 login01 automount[4201]: handle_child: got pid 13394, sig 0 (0), stat 3 Nov 28 23:27:43 login01 automount[4201]: sig_child: found pending iop pid 13394: signalled 0 (sig 0), exit status 3 Nov 28 23:27:43 login01 automount[4201]: update_negative_cache: key: .raw_data Nov 28 23:27:43 login01 automount[4201]: Adding negative cache entry for key .raw_data Nov 28 23:27:43 login01 automount[4201]: Key .raw_data added to negative cache Nov 28 23:27:43 login01 automount[4201]: send_fail: token=15625 I thought all the negative caching is done in user space, but apparently kernel has some influence here Any explanation? Thanks, Ondrej ___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs
Re: [autofs] Automounter hangs...
Seems that the expire is completing before the parent signals are restored. But I thought a signal that is sent while it is blocked (SIGCHLD in this case) is delivered once the signal is unblocked so this is a bit of a puzzle. And which game plays the process 18848 here - this is the first one to hang (looks like) Nov 20 15:02:39 login02 automount[18848]: lookup(yp): looking up .directory Nov 20 15:02:39 login02 automount[18848]: failed to mount /proj/.directory Nov 20 15:02:39 login02 automount[18848]: umount_multi: path=/proj/.directory incl=1 Nov 20 15:02:39 login02 automount[4125]: handle_child: got pid 18848, sig 0 (0), stat 1 Nov 20 15:02:39 login02 automount[4125]: sig_child: found pending iop pid 18848: signalled 0 (sig 0), exit status 1 Nov 21 15:07:55 login02 automount[18848]: lookup(yp): looking up .raw_data Nov 21 15:07:55 login02 automount[18848]: failed to mount /proj/.raw_data Nov 21 15:07:55 login02 automount[18848]: umount_multi: path=/proj/.raw_data incl=1 Nov 21 15:07:55 login02 automount[4125]: handle_child: got pid 18848, sig 0 (0), stat 1 Nov 21 15:07:55 login02 automount[4125]: sig_child: found pending iop pid 18848: signalled 0 (sig 0), exit status 1 Ondrej Hi All, I hoped this went away forever, but I was wrong (unfortunately). 
Here we go again: RHEL-4, full updates, autofs 4, automounter hangs: ps -ef | grep auto: root 3805 1 0 Nov21 ?00:00:00 /usr/sbin/automount --timeout=3600 --debug --use-old-ldap-lookup /softappli yp auto.softappli -rw root 3880 1 0 Nov21 ?00:00:00 /usr/sbin/automount --timeout=3600 --debug --use-old-ldap-lookup /cadappl yp auto.cadappl -rw root 3947 1 0 Nov21 ?00:00:00 /usr/sbin/automount --timeout=3600 --debug --use-old-ldap-lookup /appli yp auto.appli -rw root 4032 1 0 Nov21 ?00:00:00 /usr/sbin/automount --timeout=3600 --debug --use-old-ldap-lookup /proj yp auto.proj -rw root 4118 1 0 Nov21 ?00:00:00 /usr/sbin/automount --timeout=3600 --debug --use-old-ldap-lookup /home yp auto.home -rw root 18848 4032 0 Nov27 ?00:00:00 /usr/sbin/automount --timeout=3600 --debug --use-old-ldap-lookup /proj yp auto.proj -rw root 18851 4032 0 Nov27 ?00:00:00 [automount] defunct root 28454 21820 0 15:25 pts/134 00:00:00 grep auto Debug logs: Nov 27 13:07:28 login02 automount[4032]: sig 14 switching from 1 to 2 Nov 27 13:07:28 login02 automount[4032]: get_pkt: state 1, next 2 Nov 27 13:07:28 login02 automount[4032]: st_expire(): state = 1 Nov 27 13:07:28 login02 automount[4032]: expire_proc: exp_proc=18848 Nov 27 13:07:28 login02 automount[4032]: handle_packet: type = 2 Nov 27 13:07:28 login02 automount[4032]: handle_packet_expire_multi: token 7150, name towerip Nov 27 13:07:28 login02 automount[18849]: expiring path /proj/towerip Nov 27 13:07:28 login02 automount[18849]: umount_multi: path=/proj/towerip incl=1 Nov 27 13:07:28 login02 automount[18849]: umount_multi: unmounting dir=/proj/towerip Nov 27 13:07:28 login02 automount[18849]: expired /proj/towerip Nov 27 13:07:28 login02 automount[4032]: handle_child: got pid 18849, sig 0 (0), stat 0 Nov 27 13:07:28 login02 automount[4032]: sig_child: found pending iop pid 18849: signalled 0 (sig 0), exit status 0 Nov 27 13:07:28 login02 automount[4032]: send_ready: token=7150 Nov 27 13:07:28 login02 automount[4032]: handle_packet: type = 2 
Nov 27 13:07:28 login02 automount[4032]: handle_packet_expire_multi: token 7151, name pdld4 Nov 27 13:07:28 login02 automount[18851]: expiring path /proj/pdld4 Nov 27 13:07:28 login02 automount[18851]: umount_multi: path=/proj/pdld4 incl=1 Nov 27 13:07:28 login02 automount[18851]: umount_multi: unmounting dir=/proj/pdld4 Nov 27 13:07:28 login02 automount[18851]: expired /proj/pdld4 Nov 27 13:07:28 login02 automount[4032]: handle_packet: type = 0 Nov 27 13:07:28 login02 automount[4032]: handle_packet_missing: token 7152, name towerip The automounter daemon handling the /proj map stalled. Please help. Thanks, Ondrej Ondrej Valousek wrote: Hi Jeff, Yes I am trying to reproduce this with the debug enabled - it will take some time. Please stay tuned. Ondrej It rings a bell, but I can't put my finger on it. Can you reproduce this? If so, could you send along a debug log? Instructions for collecting debug information can be found at: http://people.redhat.com/~jmoyer/ Cheers, Jeff ___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs The information contained in this e-mail and in any attachments is confidential and is designated solely for the attention of the intended recipient(s). If you are not an intended recipient, you must not use, disclose, copy, distribute or retain this e-mail or any part thereof. If you have received
Re: [autofs] Automounter hangs...
To summarize:

process 4032 - D (disk sleep)
process 18848 - S (sleeping, but does not react to kill)
process 18851 - Z (zombie)

O.

Ondrej Valousek wrote: [...]
___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs
Re: [autofs] Automounter hangs...
Hi All, I hoped this went away forever, but I was wrong (unfortunately). Here we go again: RHEL-4, full updates, autofs 4, automounter hangs: ps -ef | grep auto: root 3805 1 0 Nov21 ?00:00:00 /usr/sbin/automount --timeout=3600 --debug --use-old-ldap-lookup /softappli yp auto.softappli -rw root 3880 1 0 Nov21 ?00:00:00 /usr/sbin/automount --timeout=3600 --debug --use-old-ldap-lookup /cadappl yp auto.cadappl -rw root 3947 1 0 Nov21 ?00:00:00 /usr/sbin/automount --timeout=3600 --debug --use-old-ldap-lookup /appli yp auto.appli -rw root 4032 1 0 Nov21 ?00:00:00 /usr/sbin/automount --timeout=3600 --debug --use-old-ldap-lookup /proj yp auto.proj -rw root 4118 1 0 Nov21 ?00:00:00 /usr/sbin/automount --timeout=3600 --debug --use-old-ldap-lookup /home yp auto.home -rw root 18848 4032 0 Nov27 ?00:00:00 /usr/sbin/automount --timeout=3600 --debug --use-old-ldap-lookup /proj yp auto.proj -rw root 18851 4032 0 Nov27 ?00:00:00 [automount] defunct root 28454 21820 0 15:25 pts/134 00:00:00 grep auto Debug logs: Nov 27 13:07:28 login02 automount[4032]: sig 14 switching from 1 to 2 Nov 27 13:07:28 login02 automount[4032]: get_pkt: state 1, next 2 Nov 27 13:07:28 login02 automount[4032]: st_expire(): state = 1 Nov 27 13:07:28 login02 automount[4032]: expire_proc: exp_proc=18848 Nov 27 13:07:28 login02 automount[4032]: handle_packet: type = 2 Nov 27 13:07:28 login02 automount[4032]: handle_packet_expire_multi: token 7150, name towerip Nov 27 13:07:28 login02 automount[18849]: expiring path /proj/towerip Nov 27 13:07:28 login02 automount[18849]: umount_multi: path=/proj/towerip incl=1 Nov 27 13:07:28 login02 automount[18849]: umount_multi: unmounting dir=/proj/towerip Nov 27 13:07:28 login02 automount[18849]: expired /proj/towerip Nov 27 13:07:28 login02 automount[4032]: handle_child: got pid 18849, sig 0 (0), stat 0 Nov 27 13:07:28 login02 automount[4032]: sig_child: found pending iop pid 18849: signalled 0 (sig 0), exit status 0 Nov 27 13:07:28 login02 automount[4032]: send_ready: 
token=7150 Nov 27 13:07:28 login02 automount[4032]: handle_packet: type = 2 Nov 27 13:07:28 login02 automount[4032]: handle_packet_expire_multi: token 7151, name pdld4 Nov 27 13:07:28 login02 automount[18851]: expiring path /proj/pdld4 Nov 27 13:07:28 login02 automount[18851]: umount_multi: path=/proj/pdld4 incl=1 Nov 27 13:07:28 login02 automount[18851]: umount_multi: unmounting dir=/proj/pdld4 Nov 27 13:07:28 login02 automount[18851]: expired /proj/pdld4 Nov 27 13:07:28 login02 automount[4032]: handle_packet: type = 0 Nov 27 13:07:28 login02 automount[4032]: handle_packet_missing: token 7152, name towerip The automounter daemon handling the /proj map stalled. Please help. Thanks, Ondrej Ondrej Valousek wrote: Hi Jeff, Yes I am trying to reproduce this with the debug enabled - it will take some time. Please stay tuned. Ondrej It rings a bell, but I can't put my finger on it. Can you reproduce this? If so, could you send along a debug log? Instructions for collecting debug information can be found at: http://people.redhat.com/~jmoyer/ Cheers, Jeff ___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs
Re: [autofs] Ubuntu NFS automount problem
Ian,

So what is the conclusion in terms of NFSv4 compatibility? Will the -hosts map work in autofs5?

Thanks, Ondrej

Ian Kent wrote: On Mon, 2008-11-24 at 12:31 -0800, Bill Shannon wrote: Peter Staubach wrote: Bill, why aren't you using the -hosts map?

That's a good question... I don't remember how I got to this point, but ... I just tried using -hosts, but it doesn't seem to be working:

vostro# cat /etc/auto.master
# $Id: auto.master,v 1.2 1997/10/06 21:52:03 hpa Exp $
# Sample auto.master file
# Format of this file:
# mountpoint map options
# For details of the format look at autofs(8).
#/misc /etc/auto.misc --timeout=60
/home /etc/auto.home -nosuid
#/net /etc/auto.net -nosuid
/net -hosts -nosuid
#/- /etc/auto.direct -nosuid

vostro# /etc/init.d/autofs reload
Reloading automounter: checking for changes ...
Reloading automounter map for: /home
Stopping automounter for: /smb
vostro# ls /net/nissan
ls: cannot access /net/nissan: No such file or directory
vostro# /etc/init.d/autofs restart
Stopping automounter: Couldn't stop automount for /home done.
Starting automounter: failed to start automount point /home done.
vostro# /etc/init.d/autofs reload
Reloading automounter: checking for changes ...
Reloading automounter map for: /home
vostro# ls /net/nissan
ls: cannot access /net/nissan: No such file or directory
vostro# grep nissan /etc/hosts
192.168.1.2 nissan

I think that's the reason I ended up with an auto.net map, although it now seems likely that it never worked as intended. I started chasing this problem because my auto.home map wasn't working, which I now understand is the expected behavior based on the way I'm now exporting my /export/home filesystems. My mistake. I just fixed my auto.home map using multimounts. Still, any hints as to why the -hosts pseudo-map isn't working would be appreciated.

Are you sure it's *supposed* to work? It's not documented in the man page.
I think this is the version of autofs I have:

vostro# dpkg-query -l 'autofs*'
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Installed/Config-f/Unpacked/Failed-cfg/Half-inst/t-aWait/T-pend
|/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad)
||/ Name VersionDescription
+++-==-==-
ii autofs 4.1.4+debian-2 kernel-based automounter for Linux

Yep, that's version 4. Is there an autofs5 package I should install instead?

I'm not sure. There is an autofs5 in the package pool on the mirrors but I can't tell which release it belongs to. Perhaps an apt-cache search will give the answer.

Ian
___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs
Re: [autofs] Ubuntu NFS automount problem
I don't think so, not for the -hosts map, since the export path doesn't match the mount location path. There are other difficulties as well, such as having to modify paths in the export list as we go. However, exports with the nohide option should be automounted by the kernel client and not confuse autofs, so that part is ok with 5.0.4, or with 5.0.3 if the patch which deals with this has been applied. This is something I need to work on.

So since the nohide option is a must for a Linux-based NFSv4 server, it will work (for most people)...

Thanks,
Ondrej
___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs
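For context, the kind of server-side export layout being discussed looks roughly like the exports(5) fragment below. The paths and the client network are invented for illustration: nohide lets an NFSv2/v3 client see a nested export without a separate client-side mount, while fsid=0 and crossmnt play the analogous role for the NFSv4 pseudo-filesystem root.

```text
# /etc/exports - illustrative sketch only (paths and client range invented)
/export        192.168.1.0/24(rw,sync,fsid=0,crossmnt)   # NFSv4 pseudo-root
/export/share  192.168.1.0/24(rw,sync,nohide)            # nested export, visible without its own mount
```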
Re: [autofs] Caching negative lookups
I do not understand it - I have heard that using indirect maps can cause unwanted NFS chatter on the network and that negative lookups are supposed to handle this. But I can only imagine this chatter when using wildcards, like:

/home auto.home

with auto.home containing something like:

* nfsserver:/vol/vol0/users/

This way, if an application checks for the existence of, say, /home/file, the automounter must ask nfsserver whether this file exists every time the application asks. But if there are no wildcards in the indirect map and all valid entries are explicitly listed, no nfs chatter occurs, as autofs knows directly that the mount attempt for /home/file is invalid - there is no "file" record in the auto.home map.

Am I right? If yes, that would be a serious argument against using wildcards in the automount maps.

Ondrej
___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs
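The reasoning above can be sketched as a toy model (Python, not autofs code, with an invented function name): with only explicit keys, a miss is decided from the map alone; once a wildcard entry exists, every key "matches", so only an attempt against the NFS server can decide whether the path is real.

```python
def resolve(automount_map, key):
    """Toy model of indirect-map key resolution (not autofs code).

    Returns ("local-fail", None) when the map itself proves the key is
    invalid, or ("ask-server", location) when only the NFS server can
    decide - which happens for every key once a wildcard is present.
    """
    if key in automount_map:
        return ("ask-server", automount_map[key])
    if "*" in automount_map:
        # Wildcard: any key matches, so existence can only be decided
        # by attempting the mount against the server.
        return ("ask-server", automount_map["*"] + key)
    # No wildcard: the map alone shows the key is invalid; the NFS
    # server is never contacted.
    return ("local-fail", None)
```

This matches the argument in the message: the wildcard map generates server traffic for every bogus key, the explicit map does not.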
Re: [autofs] expire_indirect errors in the syslog
Ok, I have removed all nohide options from my exports, but I am still experiencing the expire_indirect errors. I must also say that I am using a NetApp NAS and one share is nfs4-mounted, but I do not believe it is NetApp/nfs4 related.

Question: is it possible that I receive these errors when:

1. say I have an autofs key like: share nfsserver:/exports/share
2. some processes start using the share
3. I delete the key above and delete the share from the NFS server
4. as the share no longer exists but some processes are still using it, the automounter cannot unmount it, so I manually invoke umount -l to remove it?

Thanks,
Ondrej
___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs
Re: [autofs] Read-only map
I tried on RHEL 5.2 and it works, too. But it still does not work with the automounter - it still does the mount --bind (probably).

Ondrej

Steve Linn wrote: Interesting. I tried it on a SuSE 9.3 and an OpenSUSE 10.3 kernel and an earlier RHEL, and they failed. OpenSUSE 11 works though.

[EMAIL PROTECTED] ~]$ mount -o ro lid40:/etc /mnt/test1
[EMAIL PROTECTED] ~]$ mount -o rw lid40:/etc /mnt/test2
[EMAIL PROTECTED] ~]$ touch /mnt/test2/passwd
touch: cannot touch `/mnt/test2/passwd': Read-only file system
[EMAIL PROTECTED] ~]$ cat /proc/version
Linux version 2.6.9-34.0.1.EL.ADSKsmp ([EMAIL PROTECTED]) (gcc version 3.4.4 20050721 (Red Hat 3.4.4-2)) #1 SMP Tue Aug 12 21:24:17 EDT 2008
[EMAIL PROTECTED] ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux WS release 4 (Nahant Update 3)

suse11build /root# mount -o ro lid40:/etc /mnt/test1
suse11build /root# mount -o rw lid40:/etc /mnt/test2
suse11build /root# touch /mnt/test2/passwd
suse11build /root# cat /proc/version
Linux version 2.6.25.9-0.2-pae ([EMAIL PROTECTED]) (gcc version 4.3.1 20080507 (prerelease) [gcc-4_3-branch revision 135036] (SUSE Linux) ) #1 SMP 2008-06-28 00:00:07 +0200
suse11build /root# cat /etc/SuSE-release
openSUSE 11.0 (i586) VERSION = 11.0

Sorry about that, Steve

On Thu, Oct 30, 2008 at 6:32 AM, Jeff Moyer [EMAIL PROTECTED] wrote: Steve Linn [EMAIL PROTECTED] writes: We had the same problem. The kernel we were running had a feature where if the client mounts the same server and directory it would bind them together. You would get RO or RW depending

The RHEL 5.2 kernel does not have that issue. Cheers, Jeff
___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs
[autofs] Read-only map
Hi List,

I have a problem. My system (client and server): RHEL 5.2, latest updates.

1. The NFS server is exporting a read/write share.
2. A NIS automount map points to that share, but read-only, like: key -ro nfsserver:/share/key

Now, the strange thing is that on clients the share is mounted read-only, fine, but when I go to the nfsserver itself, the share is mounted read-write. I understand this is because mount --bind is used instead of nfs for performance reasons, but this way we lose the -ro flag. Is there any simple solution to this?

Thanks,
Ondrej
___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs
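One background detail that may explain the behaviour (general kernel knowledge, not taken from this thread): before Linux 2.6.26, a bind mount could not be made read-only at mount time at all; the ro flag was silently ignored and had to be applied with a follow-up remount. A sketch of what that looks like by hand, with invented paths, needing root:

```shell
# Illustrative only. On the NFS server itself, autofs may satisfy
# "key -ro nfsserver:/share/key" with a bind mount rather than an
# NFS mount, which is where the -ro flag gets lost.
mount --bind /share/key /mnt/key
# Pre-2.6.26 kernels ignore "ro" on the bind itself; it has to be
# applied afterwards with a remount:
mount -o remount,ro,bind /mnt/key
```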
[autofs] expire_indirect errors in the syslog
Hi all,

I am seeing quite a lot of the following messages in the syslog:

Oct 21 02:17:05 ara automount[3181]: expire_indirect: fstat failed: Bad file descriptor

This is on RHEL-5.2, latest updates, autofs-5.0.1-0.rc2.88. I have seen some discussion on the net (bug #448038) regarding this error, and what is common is that my NFS server is using the nohide option.

Question - is this bug dangerous or not? Is there any resolution? Recently I have experienced instability on our logon server (Gnome session frozen) due to a problem with gnome-vfs, so I do not know if this is related.

Thanks,
Ondrej
___ autofs mailing list autofs@linux.kernel.org http://linux.kernel.org/mailman/listinfo/autofs
Re: [autofs] Autofs v5 (RHEL4) kernel panics
As far as I know, RHEL4 ships with the autofs 4.x series; my system says autofs-4.1.3-234. The autofs5-5.0.1-0.rc2.88 package is part of RHEL5. If I were you, I would stick with the autofs that is shipped with the OS.

Ondrej

Coe, Colin C. (Unix Engineer) wrote:
> Hi Ian and all,
>
> Using RHEL4 WS with the latest kernel (kernel-smp-2.6.9-78.0.5.EL) and autofs (autofs5-5.0.1-0.rc2.88) rpms, I can reliably crash (read: kernel panic) the OS. To crash:
>
> 1) open a couple of terminals (I logged on to tty1 and tty2)
> 2) in one terminal run: while :; do ls -lR /autofs/point/mount/* ; done
> 3) in the other run: while :; do service autofs5 restart; done
> 4) profit!
>
> While this is a contrived scenario, we noticed the panic on a couple of cluster nodes (running RHEL 4u7 WS) when they didn't pick up some new autofs map entries. 'service autofs5 reload' didn't help, so I did a 'service autofs5 restart' and panics soon followed.
>
> I've set up diskdump and now have a vmcore file. Running 'crash /boot/System.map-2.6.9-78.0.5.ELsmp /usr/lib/debug/lib/modules/2.6.9-78.0.5.ELsmp/vmlinux /var/crash/127.0.0.1-2008-10-20-09\:44/vmcore' shows:
>
> ---
> crash 4.0-5.0.0.1
> Copyright (C) 2002, 2003, 2004, 2005, 2006, 2007, 2008 Red Hat, Inc.
> Copyright (C) 2004, 2005, 2006 IBM Corporation
> Copyright (C) 1999-2006 Hewlett-Packard Co
> Copyright (C) 2005, 2006 Fujitsu Limited
> Copyright (C) 2006, 2007 VA Linux Systems Japan K.K.
> Copyright (C) 2005 NEC Corporation
> Copyright (C) 1999, 2002, 2007 Silicon Graphics, Inc.
> Copyright (C) 1999, 2000, 2001, 2002 Mission Critical Linux, Inc.
> This program is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Enter "help copying" to see the conditions. This program has absolutely no warranty. Enter "help warranty" for details.
>
> GNU gdb 6.1
> Copyright 2004 Free Software Foundation, Inc.
> GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details.
> This GDB was configured as "x86_64-unknown-linux-gnu"...
>
>   SYSTEM MAP: /boot/System.map-2.6.9-78.0.5.ELsmp
> DEBUG KERNEL: /usr/lib/debug/lib/modules/2.6.9-78.0.5.ELsmp/vmlinux (2.6.9-78.0.5.ELsmp)
>     DUMPFILE: /var/crash/127.0.0.1-2008-10-20-09:44/vmcore [PARTIAL DUMP]
>         CPUS: 2
>         DATE: Mon Oct 20 09:44:24 2008
>       UPTIME: 00:04:26
> LOAD AVERAGE: 0.24, 0.22, 0.10
>        TASKS: 95
>     NODENAME: lws075
>      RELEASE: 2.6.9-78.0.5.ELsmp
>      VERSION: #1 SMP Wed Sep 24 05:40:24 EDT 2008
>      MACHINE: x86_64 (2133 Mhz)
>       MEMORY: 4 GB
>        PANIC: Oops: 0002 [1] SMP (check log for details)
>          PID: 23840
>      COMMAND: automount5
>         TASK: 101205e1030 [THREAD_INFO: 1011a692000]
>          CPU: 0
>        STATE: TASK_RUNNING (PANIC)
>
> crash> bt
> PID: 23840  TASK: 101205e1030  CPU: 0  COMMAND: automount5
>  #0 [1011a693d20] start_disk_dump at a01b1377
>  #1 [1011a693d50] try_crashdump at 8014cda9
>  #2 [1011a693d60] do_page_fault at 80124ae8
>  #3 [1011a693e40] error_exit at 80110e1d
>     [exception RIP: fput]
>     RIP: 8017ca06  RSP: 01011a693ef0  RFLAGS: 00010246
>     RAX: 0002  RBX: 9362  RCX:  RDX: 9362
>     RSI: 0101198b8bc0  RDI:  RBP:  R8:  R9:  R10:
>     R11: 9362  R12: 0101199218c0  R13: 0101198b8bc0  R14: 010119837270
>     R15: 0003  ORIG_RAX:  CS: 0010  SS: 0018
>     (some register values appear to have been lost in the archive)
>  #4 [1011a693ef0] autofs4_catatonic_mode at a0a45df2
>  #5 [1011a693f10] autofs4_root_ioctl at a0a45b74
>  #6 [1011a693f40] sys_ioctl at 8018dc6d
>  #7 [1011a693f80] system_call at 801102f6
>     RIP: 002a95946f59  RSP: 40621a98  RFLAGS: 0246
>     RAX: 0010  RBX: 801102f6  RCX: 002a95677486  RDX:
>     RSI: 9362  RDI: 0003  RBP: 00552abfc930  R8:  R9:  R10:
>     R11: 0246  R12: 00552abfbe10  R13: 00552abfbe10  R14:
>     R15: 00552ac17390  ORIG_RAX: 0010  CS: 0033  SS: 002b
>
> crash> ps 23840
>    PID   PPID  CPU  TASK         ST  %MEM  VSZ    RSS   COMM
>  23840      1    0  101205e1030  RU   0.0  23036  1820  automount5
>
> Crash log:
> Bootdata ok (command line is ro root=LABEL=/ quiet reboot=h ide=nodma vga=1 bfsort swiotlb=65536)
> Linux version 2.6.9-78.0.5.ELsmp ([EMAIL PROTECTED]) (gcc version 3.4.6 20060404 (Red Hat 3.4.6-10)) #1 SMP Wed Sep 24 05:40:24 EDT 2008
> BIOS-provided physical RAM map:
> [snip]
> MSI INIT SUCCESS
> tg3: eth0: Link is up at 1000 Mbps, full
[autofs] autofs system libnss* libraries
Hi List,

I have a question: is there any plan for autofs to use the system libnss* libraries instead of implementing its own? For example, if I wanted to use an LDAP server to hold both user accounts and autofs maps, I would have two config files to configure:

- /etc/ldap.conf for the libnss_ldap library
- /etc/autofs_ldap_auth.conf for the automounter

It is quite confusing, and it would be much more elegant to have just a single configuration file for both services. Is there any plan to do this in the future?

Many thanks,
Ondrej
Re: [autofs] autofs system libnss* libraries
> No! I considered that at the outset of version 5 development and decided against it after working on integrating the outdated code that was included in the nss_ldap distribution. Unless the situation changes significantly, then I'm not likely to change my mind on this.

Does that mean nss_ldap is heavily outdated, then?

> I would have to write the nss code for all the possible sources against an API that is difficult to write for, partly because the interface documentation is lousy. Not to mention that I'd then be at the mercy of nss_ldap changes and bugs, and autofs would depend on a configuration file that it doesn't control.
>
> Ian

My primary concern was: why should we (Linux distro maintainers) support two things that essentially do the same job? I did not mean you specifically. Maintaining the libnss* libraries should (probably) be a job for someone else, while you keep focused on the autofs-specific tasks. And if you think your nss_ldap code is better, why shouldn't it serve other purposes (like gathering user info from an LDAP repository), too? I mean, from the longer-term perspective, I believe we should merge these things. It is neither elegant nor transparent for normal sysadmins.

Ondrej
Re: [autofs] 'browse mode' issue
Nice information about the CPU involvement here. I used to run (and am still running) with browse set to yes, and I must say it is much less confusing for users (never mind that without it you do not see the directory and just have to cd into it anyway). But then I introduced wildcards to reduce the size of my maps, and obviously browse mode stopped working for those entries (as it cannot enumerate all the possibilities a wildcard allows). But there is nothing we can do about that, I guess.

Ondrej
Re: [autofs] Automounter hangs...
Hi Jeff,

Yes, I am trying to reproduce this with debugging enabled - it will take some time. Please stay tuned.

Ondrej

> It rings a bell, but I can't put my finger on it. Can you reproduce
> this? If so, could you send along a debug log? Instructions for
> collecting debug information can be found at:
> http://people.redhat.com/~jmoyer/
>
> Cheers,
> Jeff
Re: [autofs] Automounter losing track of mounts...
>> We had a related issue quite a bit, and /etc/mtab and /proc/mounts went out of sync. Right now we are symlinking /etc/mtab to /proc/mounts. It works, but I don't know if that's the best solution for it.
>
> Yes, we could do that, and that seems to be a canonical solution that is even talked about in the man pages. But there may be a problem with that. Have you seen any cases where different processes specifically write either to /etc/mtab _or_ /proc/mounts in an asynchronous manner? Since it would now be the same file due to the symlink, what would that sort of write access do to the integrity of the mount tables? Would it even matter?

1. You cannot write to /proc/mounts, as it is read-only.
2. mount.nfs is broken (bug already filed) in that it cannot handle the symlink /etc/mtab -> /proc/mounts properly.
3. I believe content handling of /proc/mounts is the kernel's job. If it does not do it well, it should be patched - we should not patch the automounter, as it is not its responsibility. Similarly with /etc/mtab - it is mount's responsibility to handle it.

HTH,
Ondrej
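For the record, the symlink workaround being discussed amounts to a single command. The sketch below exercises it against a scratch directory so it can be tried without touching the real /etc/mtab; on an actual system the target would be /etc/mtab itself (and would need root):

```shell
# Sketch of the /etc/mtab -> /proc/mounts symlink workaround,
# done in a scratch directory so it is safe to try.
scratch=$(mktemp -d)
echo "stale mtab contents" > "$scratch/mtab"   # stand-in for the old /etc/mtab
ln -sf /proc/mounts "$scratch/mtab"            # -f replaces the existing file
readlink "$scratch/mtab"
```

After this, anything that reads the mtab path sees the kernel's own mount table, at the cost of the write problems discussed above (mount.nfs mishandling the symlink, loss of user-mount bookkeeping).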
Re: [autofs] How to pass -n option to the mount command via automounter?
From what I see in the source for the automounter, the answer is no :-( I even tried the 'automount -O rw -n' hack - it should work, but it does not. In the spawn.c file I see a comment that autofs ver. 5 needs to have the /etc/mtab file updated. This is not a problem, as most diskless environments have /etc/mtab pointed at /proc/mounts. I believe this is a bug. Can we have it fixed? (For example, detect whether /etc/mtab is a symlink and, if it is, pass the -n option to the mount command automagically.)

Thanks,
Ondrej

Ondrej Valousek wrote:
> Hi List,
>
> I am wondering - I have diskless systems deployed here with /etc being read-only - I need to pass the -n option to the mount command in order to make it work. I tried:
>
> automount -O -n
>
> but it passes the -n option after -o, like this:
>
> mount_mount: mount(nfs): calling mount -t nfs -s -o -n hercules:/ext3/tmp /proj/tmp
>
> I need the -n option to be passed before -o. How could I do it? Many thanks for your help.
>
> Ondrej
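To make the ordering problem concrete, here is a tiny sketch. `build_mount_cmd` is a hypothetical helper (not autofs code) that only prints the command line the poster wants: `-n` (do not write /etc/mtab) placed as its own flag before `-o`, instead of being swallowed into the option string as in the log above:

```shell
# Hypothetical helper showing the desired mount argument order.
# -n must precede -o; in the broken invocation it ended up inside
# the -o option string ("-o -n") and was treated as a mount option.
build_mount_cmd() {
    opts="$1"; from="$2"; to="$3"
    printf 'mount -t nfs -s -n -o %s %s %s\n' "$opts" "$from" "$to"
}
build_mount_cmd rw hercules:/ext3/tmp /proj/tmp
```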