looks good for commit.

thanks for the work!

-steve
On Wed, 2008-08-20 at 06:55 +1200, angus salkeld wrote:
> Cause:
> As part of its exit procedure, ais cancels its worker thread then manually
> processes any outstanding items that were still in the worker thread's queue.
> The worker thread runs at low priority, so normally it does not execute
> again before ais finishes exiting. However, if the main thread's exit is
> delayed for any reason, the worker thread may run and try to process items
> that the main thread has already processed and freed - often leaving the
> worker thread with NULL data and ultimately causing a segmentation fault.
> 
> Fix:
> Modified worker_thread_group_exit() so it does a pthread_join() after the
> pthread_cancel() call, so that the worker thread always shuts down cleanly
> before the main thread does its cleanup.
> 
> Author: Mark McKinstry <[EMAIL PROTECTED]>
> ---
>  exec/wthread.c |    5 +++++
>  1 files changed, 5 insertions(+), 0 deletions(-)
> 
> diff --git a/exec/wthread.c b/exec/wthread.c
> index 6a434fd..e97f917 100644
> --- a/exec/wthread.c
> +++ b/exec/wthread.c
> @@ -183,6 +183,11 @@ void worker_thread_group_exit (
>  
>       for (i = 0; i < worker_thread_group->threadcount; i++) {
>               pthread_cancel (worker_thread_group->threads[i].thread_id);
> +
> +             /* Wait for worker thread to exit gracefully before destroying
> +              * mutexes and processing items in the queue etc.
> +              */
> +             pthread_join (worker_thread_group->threads[i].thread_id, NULL);
>  		pthread_mutex_destroy (&worker_thread_group->threads[i].new_work_mutex);
>  		pthread_cond_destroy (&worker_thread_group->threads[i].new_work_cond);
>  		pthread_mutex_destroy (&worker_thread_group->threads[i].done_work_mutex);

_______________________________________________
Openais mailing list
[email protected]
https://lists.linux-foundation.org/mailman/listinfo/openais
