Hi J.R. Okajima: Thank you for your excellent and insightful response. We are now seriously evaluating your suggestion "to stop using aufs1 and try new kernel and aufs3". If we move to what you suggested, the impact could be huge, effort-wise. That is why the hesitation. In any case, I will inform you about what we finally decide.
In the meantime I have a question about the below-mentioned hang with aufs1 and Linux kernel 2.6.18. Shortly before I ran the script a second time to cause the hang, I had done a "pwd" operation in an earlier aufs1-mounted sub-directory, i.e. /tmp/magic/dest. This is when it displayed the below-mentioned error message:

------------------------------begin--------------------------
> pwd: failed to stat `.': Stale NFS file handle
-------------------------------end---------------------------

My confusion is why there was a mention of a "Stale NFS file handle". There were no NFS directories involved. For example, when I now run a "similar" script on a now fully rebooted and recovered system (VM, please ignore the different VM name, it is the same VM server), I do not see any NFS mounts under the /tmp directory. Granted, I have not created the sub-directories, but even if I did, they would not have anything to do with NFS. So why the mention of NFS? Following is what I did to determine the actual types of the various directories involved:

----------------------------------------------begin---------------------------------------------------------------
vopenstack08.cisco.com:39> mount | grep tmp
/dev/vda5 on /tmp type ext3 (rw)
/dev/vda6 on /var/tmp type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /tmp/magic type aufs (rw,si=ffff81040a495000,xino=/tmp/uchange/.aufs.xino,br:/tmp/uchange=rw:/tmp/nbuild=ro)
vopenstack08.cisco.com:40> cat /etc/fstab | grep /tmp
LABEL=/tmp          /tmp          ext3    defaults        1 2
LABEL=/var/tmp      /var/tmp      ext3    defaults        1 2
vopenstack08.cisco.com:41>
-------------------------------------------------------------------- end -------------------------------------------------------------------

Please note the file system type of the /tmp directory is ext3, not NFS. I had cd'd to /tmp/magic/dest in the earlier test, but neither /tmp/uchange nor /tmp/nbuild is an NFS directory, and neither is /tmp/magic. So again, back to my original question: why the mention of NFS in the error message? Any guesses? (A short note on this is appended after the quoted message below.)

Thank You
Haider

-----Original Message-----
From: sf...@users.sourceforge.net [mailto:sf...@users.sourceforge.net]
Sent: Tuesday, April 29, 2014 7:00 PM
To: Haider Khan -X (haidekha - TATA CONSULTANCY SERVICES LIMITED at Cisco)
Cc: aufs-users@lists.sourceforge.net
Subject: Re: request to know if there are any known serious issues with using aufs 1

"Haider Khan -X (haidekha - TATA CONSULTANCY SERVICES LIMITED at Cisco)":
> In the meantime I have run into another issue with aufs1 testing which
> I thought I would give you a heads up on and also request for your
> help. That is while I am trying to reproduce that issue and
> investigate it further on my own in parallel.

Aufs1 has been unmaintained for many years and will be forever.

> ---------------------------begin----------------------------------
> vopenstack07.cisco.com:50> cd /tmp/magic/dest
> vopenstack07.cisco.com:51> pwd
> /tmp/magic/dest
> vopenstack07.cisco.com:52> pwd
> pwd: failed to stat `.': Stale NFS file handle
> vopenstack07.cisco.com:53> cd /
> vopenstack07.cisco.com:54>
> ---------------------------- end-------------------------------
>
> Quite likely my script had removed /tmp/magic/dest. So what is
> puzzling is why the first "pwd" worked and the second "pwd" failed.

If I remember correctly, ESTALE is a past problem.
I don't remember the details, but it might be related to VFS, particularly dcache.
I'd strongly suggest you to stop using aufs1 and try new kernel and aufs3.

J. R. Okajima
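A short note on the "Stale NFS file handle" wording, as referenced above: on Linux this string is simply the C library's message text for errno ESTALE (older glibc spells it with "NFS", newer glibc just says "Stale file handle"), so any filesystem driver, aufs included, may return ESTALE from stat() when a cached directory entry can no longer be revalidated, for example after the directory has been removed underneath the shell. The sketch below only illustrates that errno-to-message mapping; it is not taken from the thread, the file name estale_demo.c is made up, and it does not attempt to reproduce the aufs1 behaviour itself.

------------------------------begin--------------------------
/* estale_demo.c - minimal sketch: show that "Stale NFS file handle"
 * is just the libc text for errno ESTALE, so seeing it does not by
 * itself mean NFS was involved.
 *
 * Build: gcc -Wall -o estale_demo estale_demo.c
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>

int main(void)
{
	struct stat st;

	/* Older glibc prints "Stale NFS file handle" here;
	 * newer glibc shortened it to "Stale file handle". */
	printf("ESTALE (%d): %s\n", ESTALE, strerror(ESTALE));

	/* This is essentially what pwd did: stat(".") and report
	 * strerror(errno) on failure.  In the transcript above it
	 * failed with ESTALE after /tmp/magic/dest had been removed. */
	if (stat(".", &st) == -1)
		printf("stat(\".\"): %s\n", strerror(errno));
	else
		printf("stat(\".\") succeeded\n");

	return 0;
}
-------------------------------end---------------------------

Run from an ordinary ext3 directory this prints the ESTALE message text and then "stat(".") succeeded"; the second line would only show the stale-handle text when the kernel actually refuses to revalidate the current directory, which is apparently what aufs1 did here.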