Frank Filz [ffilz...@mindspring.com] wrote:
> 
> > Can't we somehow use pthread_cleanup_push() for this purpose?
> > Of course, having our own cleanup handler would be flexible, but I'm
> > wondering if we really need this.
> > 
> > Regards, Malahal.
> 
> I looked at that, but no, pthread_cleanup_push() needs to be paired with
> pthread_cleanup_pop() in the same lexical scope...
> 
> Frank
> 

I see it now! That is silly; it looks like push/pop are meant to be paired
within a single function.
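
(For the archives: a minimal sketch of the required pairing. unlock_mutex
and worker are made-up names, and the note about braces assumes the usual
macro-based implementation such as glibc's:)

#include <pthread.h>

static void unlock_mutex(void *arg)
{
        pthread_mutex_unlock(arg);
}

static void worker(pthread_mutex_t *mtx)
{
        pthread_mutex_lock(mtx);

        /* push/pop must appear as a pair in the same function: POSIX
         * allows them to be macros that open and close a brace scope,
         * so splitting them across functions need not even compile. */
        pthread_cleanup_push(unlock_mutex, mtx);

        /* ... work that may call pthread_exit() or hit a cancellation
         * point; the handler guarantees the mutex gets released ... */

        pthread_cleanup_pop(1);  /* nonzero: also run the handler now */
}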

Regards, Malahal.

> > Dirk Jagdmann [d...@cubic.org] wrote:
> > > So I've implemented a minimal solution that works for my use case.
> > > First this is the new public API, which can be used to register a
> > > thread cleanup callback
> > > function:
> > >
> > > /**
> > >   * Type of a callback function that will be called when a thread exits.
> > >   * Use fridgethr_register_thread_cleanup_cb() to register the callback
> > >   * *in each thread* for which you want it called at thread exit.
> > >   */
> > > typedef void (*thread_cleanup_cb)(void *);
> > >
> > > /**
> > >   * Register a callback function that will be called when the current
> > >   * thread exits.
> > >   *
> > >   * A callback is registered at most once per thread for each unique
> > >   * (cb, param) pair.  Subsequent calls of this function with the same
> > >   * values for cb and param will not register additional callbacks.
> > >   *
> > >   * @param cb pointer to callback function.
> > >   * @param param parameter that will be passed to cb.
> > >   */
> > > void fridgethr_register_thread_cleanup_cb(thread_cleanup_cb cb,
> > >                                           void *param);
> > >
> > >
> > > This feature is implemented as a linked list whose nodes contain the
> > > thread_cleanup_cb pointer and the param pointer. Each thread maintains
> > > its own linked list with a thread-local glist_head. A new function:
> > >
> > > /**
> > >   * Call every registered thread cleanup callback function.
> > >   */
> > > static void fridgethr_call_cleanup_cb(void);
> > >
> > > will traverse the linked list and call every registered cleanup
> > > function. fridgethr_call_cleanup_cb() is called from
> > > fridgethr_start_routine() before thread_finalize() runs.
> > >
> > > I'll do some more testing this week to confirm that my implementation
> > > works. If you like the general idea, I'll post a review request with
> > > my changes later.
> > >
> > > --
> > > ---> Dirk Jagdmann
> > > ----> http://cubic.org/~doj
> > > -----> http://llg.cubic.org
> > >
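
For anyone who wants to picture the shape of this, here is a self-contained
sketch of the two functions. It deliberately uses a plain singly-linked
list, malloc/free, and C11 _Thread_local rather than Ganesha's glist_head
and allocators, so everything except the two function names and the typedef
quoted above is made up:

#include <stdlib.h>

/* Quoted from the proposal above. */
typedef void (*thread_cleanup_cb)(void *);

/* Hypothetical node type; the actual patch uses Ganesha's glist_head. */
struct cleanup_node {
        thread_cleanup_cb cb;
        void *param;
        struct cleanup_node *next;
};

/* One list head per thread, so callbacks registered in one thread never
 * run in another. */
static _Thread_local struct cleanup_node *cleanup_list;

void fridgethr_register_thread_cleanup_cb(thread_cleanup_cb cb, void *param)
{
        struct cleanup_node *n;

        /* A duplicate (cb, param) registration is a no-op, per the API
         * comment above. */
        for (n = cleanup_list; n != NULL; n = n->next)
                if (n->cb == cb && n->param == param)
                        return;

        n = malloc(sizeof(*n));
        if (n == NULL)
                return;
        n->cb = cb;
        n->param = param;
        n->next = cleanup_list;
        cleanup_list = n;
}

/* Called from fridgethr_start_routine() just before thread_finalize(). */
static void fridgethr_call_cleanup_cb(void)
{
        struct cleanup_node *n = cleanup_list;

        while (n != NULL) {
                struct cleanup_node *next = n->next;

                n->cb(n->param);
                free(n);
                n = next;
        }
        cleanup_list = NULL;
}

With that, registering the same (cb, param) pair twice is a no-op, and each
thread's callbacks run exactly once, in that thread, right before it
finalizes.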

