On Sat, 22 Feb 2003, Keith Whitwell wrote:
>
> What about processes that *don't* do a close - that just use an fd and exit.
The exit does a close, and you'll see a flush() from the dying process
(and a release() if that was the last user).

> In the threaded demo I'm looking at, there is only one close -- in the same
> thread that did the open.  The other threads just die.

If a thread does a close(), then that will have closed it in the other
threads too. You will not get any notification from the other threads,
since they will never have anything to notify about - the file is already
dead as far as they are concerned.

> But sadly for me, the thread that does the open isn't the one that sees the
> flush() or release() events:

Whoever does the close() generates the flush(). And the release() can be
generated by anybody, although in practice it's going to be the same one
that did the final flush() in 99.99% of all cases (but if you depend on
it, you're just asking for trouble).

> The last line indicates that 1063 held the lock.  I never see a flush() for
> that pid.

The answer really is that you shouldn't care about the pid at all.

> > NOTE! In general it's a design mistake to care about things like
> > "current->pid" etc. What are the semantics for the file descriptor
> > for fork'ed processes, or for threads?
>
> Yes, it looks like this is a big mistake throughout the drm code.  The
> assumption seems to be that pids are equivalent to GL contexts - there are
> heaps of issues I can see with this...

I'd suggest associating the "struct file *" with the GL context, and
nothing else. At that point you would always get the right answer by just
knowing that when the release() happens, the GL context is gone.

> We do still need some sort of notification of process/fd/GLcontext death to
> clean up resources owned by that entity - the flush() (and previously
> release()) methods seemed to do a reasonable job at least for singlethreaded
> apps -- what would replace these notifications?

"flush()" is almost always the wrong thing to use. It's really
fundamentally designed to do the one thing that the name implies: flush a
pending stream of data, so that when you do a "close()" on a file
descriptor the close() can return an error if something bad happened for
deferred IO completion.

What you are looking for really is "release()", since that is the thing
that tells you "this file descriptor no longer exists". However, that
does require that you don't care about who does the release etc.

Also note that the release() will always be delayed until _all_ users
have gone away. This means that if you do a fork() in a GL program, and
the child keeps the file descriptor open, the release will not happen
until the child has closed it - even if the original user long since
destroyed its GL context and closed the original file descriptor.

This may be what you want, but if not you should make sure to close the
GL file descriptors after forks (marking them close-on-exec is also a
good idea, I think you already do that).

		Linus
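
For reference, a minimal sketch of what "hang the context off the struct
file, not the pid" looks like for a character driver. The names here
(my_ctx, my_open, "glctx-demo", and so on) are made up for illustration
and are not the actual drm entry points, and the flush() prototype shown
is the two-argument form used in current kernels; at the time of this
thread it took only the struct file pointer.

#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/slab.h>

/* Hypothetical per-open state, standing in for a GL context. */
struct my_ctx {
	int placeholder;
};

static int my_open(struct inode *inode, struct file *filp)
{
	struct my_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);

	if (!ctx)
		return -ENOMEM;

	/*
	 * Tie the context to the struct file, not to current->pid:
	 * the same struct file can be shared by threads and forked
	 * children, so the task that eventually closes it may not be
	 * the one that opened it.
	 */
	filp->private_data = ctx;
	return 0;
}

static int my_flush(struct file *filp, fl_owner_t id)
{
	/*
	 * Called on every close() of a descriptor referring to this
	 * file, possibly from any thread or child process.  Not the
	 * place for per-context teardown.
	 */
	return 0;
}

static int my_release(struct inode *inode, struct file *filp)
{
	/*
	 * Called exactly once, when the last reference to the struct
	 * file goes away (last close, or exit of the last holder).
	 * This is where the context dies.
	 */
	kfree(filp->private_data);
	return 0;
}

static const struct file_operations my_fops = {
	.owner	 = THIS_MODULE,
	.open	 = my_open,
	.flush	 = my_flush,
	.release = my_release,
};

static struct miscdevice my_misc = {
	.minor = MISC_DYNAMIC_MINOR,
	.name  = "glctx-demo",
	.fops  = &my_fops,
};

module_misc_device(my_misc);
MODULE_LICENSE("GPL");

A fork()ed child that keeps the descriptor open will keep my_release()
from running until it, too, closes it - which is exactly the "close the
GL fds after fork, and mark them close-on-exec" point above.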