To comment on the following update, log in, then open the issue:
http://www.openoffice.org/issues/show_bug.cgi?id=107490

          Issue #|107490
          Summary|cppu: various crashes during exit of remote uno bridge process
        Component|udk
          Version|OOO320m7
         Platform|All
              URL|
       OS/Version|Linux
           Status|NEW
Status whiteboard|
         Keywords|
       Resolution|
       Issue type|PATCH
         Priority|P3
     Subcomponent|code
      Assigned to|kr
      Reported by|cmc
------- Additional comments from [email protected] Mon Dec 7 10:53:57 +0000 2009 -------

a) uno_initEnvironment in bridges/source/remote/urp/urp_environment.cxx spawns off two threads, a reader and a writer. Taking the writer thread as the simpler case of the two: OWriterThread::run runs until m_bAbort is true, m_bAbort only becomes true when OWriterThread::abortThread is called, and only RemoteEnvironment_thisDispose calls abortThread.

In the case of a process using the remote uno-bridge, such as a command-line Python script that connects to an OOo server, this remote uno-environment belongs to the EnvironmentsData singleton in cppu/source/uno/lbenv.cxx. That singleton's dtor is called on exit (or dlclose), but due to the CPPU_LEAK_STATIC_DATA define we never call dispose on the bridge. So there is no route to call dispose on the remote bridge, and no way for the threads to be shut down. It's not just a leak of static data, then, but a failure to shut down threads. The threads remain blocked on mutexes that end up getting deleted; when they run on after the mutexes have gone away during main-process exit, everything is mush at that stage, so various crashes happen randomly in the still-alive threads.

b) So, assuming for the moment that there's no way to avoid the reader/writer threading issues except to call dispose on them during shutdown, we have some other issues. We have an EnvironmentsData, a ThreadPool and a DisposedCallerAdmin singleton. They are destroyed correctly in the reverse order of creation during shutdown: the EnvironmentsData singleton is created first, then the ThreadPool, then the DisposedCallerAdmin, so dtor order is DisposedCallerAdmin, ThreadPool and EnvironmentsData.

b.1) The ThreadPool uses the DisposedCallerAdmin, but the DisposedCallerAdmin will be destroyed before it is.

b.2) The remote uno environment's dispose, which should be called from the EnvironmentsData dtor, uses the ThreadPool and DisposedCallerAdmin.
By the time the EnvironmentsData dtor runs, however, the ThreadPool and DisposedCallerAdmin have already been destroyed.

c) The remote uno-environment's dispose wants to re-enter EnvironmentsData during dispose.

So: attached is a patch which removes -DCPPU_LEAK_STATIC_DATA from cppu, to make sure that the remote uno-env reader/writer threads get a chance to exit, and then uses some boost::shared_ptr machinery to ensure a sane dtor ordering for the dependencies. I.e. theThreadPoolHolder holds a reference on theDisposedCallerAdminHolder, so the threadpool can be guaranteed that the DisposedCallerAdmin exists. Classes like ORequestThread also hold a reference on theThreadPoolHolder. When we put a thread into our g_pThreadpoolHashSet, we add a reference to theThreadPoolHolder in there as well, so the threadpool itself exists until all its threads exit. In the patch I just went for the simple solution of making it a hash_set of references to the holder.

For (c), I'm not sure it makes complete sense to hold a mutex in EnvironmentsData::~EnvironmentsData, as attempts to re-enter EnvironmentsData from any dispose-connected calls block (the reader thread does this). But for simplicity here I just went for an isDisposing boolean, and returned early from the offending method, before attempting to acquire the offending mutex, when it is set. Alternatively, removing the mutex guard from the dtor might be a solution.

https://bugzilla.redhat.com/show_bug.cgi?id=541876 has the practical bug details of the crashes during exit of a sample python script.

---------------------------------------------------------------------
Please do not reply to this automatically generated notification from Issue Tracker. Please log onto the website and enter your comments.
http://qa.openoffice.org/issue_handling/project_issues.html#notification
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
