On 2009-10-29, Lino Sanfilippo <lino.sanfili...@avira.com> wrote:

>> When a process already has the maximum number of files open, both
>> applications - the user application and the scanner - hung, and it
>> was impossible to debug or even kill them (kill -9). Only a computer
>> shutdown let me get rid of the hung processes.
>
> The reason for this behavior is that dazukofs tries to open a file
> descriptor for the daemon that it assigns an event to. If this
> assignment fails (in this case due to the failure to allocate a
> further file descriptor), dazukofs tries again and again, hanging in
> an endless loop. And yes, IMO it's a bug: dazukofs should at least
> report errors like this to userspace, so that the userspace
> application has the possibility to react somehow (probably by
> terminating the daemon that received the error).
Hmmm. If too many files are open because there is a huge queue waiting
to be processed by the daemon, we could easily handle this by sleeping
until the queue gets smaller. However, if the process has too many open
files because it has opened files on its own, then DazukoFS has little
chance here. I agree that an endless loop of retries is bad, but I am
not clear on a good solution. I suppose DazukoFS could sleep on a
timeout and retry once a second or so (while logging the problem to
syslog). That would at least give the user an idea of what the problem
is. Any other ideas?

> Similar problems may occur if you use dazukofs mounted on a
> root-squashed NFS share, on which dazukofs does not have the rights
> needed to open the files for which it wants to report an access to
> userspace.

This is interesting. (I am hearing about this for the first time.)
Since the user process would also get an access-denied error, it is
probably appropriate to deny the access in such a case. Agreed?

John Ogness
-- 
Dazuko Maintainer

_______________________________________________
Dazuko-help mailing list
Dazuko-help@nongnu.org
http://lists.nongnu.org/mailman/listinfo/dazuko-help