* Jason Evans <[EMAIL PROTECTED]> [001212 18:39] wrote:
> On Tue, Dec 12, 2000 at 05:59:37PM -0800, Alfred Perlstein wrote:
> > * Jason Evans <[EMAIL PROTECTED]> [001212 14:32] wrote:
> > > A safe version:
> > > ---------------------------------------------------------------------------
> > > #define   MEXTFREE(m) do {                                                \
> > >   struct mbuf *_mmm = (m);                                        \
> > >                                                                   \
> > >   MEXT_REM_REF(_mmm);                                             \
> > >   if (!atomic_cmpset_long(&_mmm->m_ext.ref_cnt->refcnt, 0, 1)) {  \
> > >           /*                                                      \
> > >            * Do not free; there are still references, or another  \
> > >            * thread is freeing.                                   \
> > >            */                                                     \
> > >   } else if (_mmm->m_ext.ext_type != EXT_CLUSTER) {               \
> > >           (*(_mmm->m_ext.ext_free))(_mmm->m_ext.ext_buf,          \
> > >               _mmm->m_ext.ext_args);                              \
> > >           _MEXT_DEALLOC_CNT(_mmm->m_ext.ref_cnt);                 \
> > >   } else {                                                        \
> > >           _MCLFREE(_mmm->m_ext.ext_buf);                          \
> > >           _MEXT_DEALLOC_CNT(_mmm->m_ext.ref_cnt);                 \
> > >   }                                                               \
> > >   _mmm->m_flags &= ~M_EXT;                                        \
> > > } while (0)
> > > ---------------------------------------------------------------------------
> > > 
> > > What this does is have all threads unconditionally, atomically decrement
> > > the refcount.  Then, in order to determine whether to clean up, each
> > > thread does an atomic compare-and-set on the refcount, so that if it is
> > > 0, only one thread manages to modify the refcount, which signifies that
> > > it will do the cleanup.
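> > > As a standalone illustration, here is a minimal sketch of the same
> > > decrement-then-cmpset pattern using C11 atomics; the refcnt variable
> > > and drop_ref() helper are hypothetical stand-ins, not the kernel API:
> > > ---------------------------------------------------------------------------
> > > #include <stdatomic.h>
> > > #include <stdio.h>
> > > 
> > > /* Stand-in for the shared external-storage reference count. */
> > > static atomic_long refcnt = 2;
> > > 
> > > static void
> > > drop_ref(void)
> > > {
> > > 	long expected = 0;
> > > 
> > > 	/* Unconditionally drop our reference (the MEXT_REM_REF step). */
> > > 	atomic_fetch_sub(&refcnt, 1);
> > > 
> > > 	/*
> > > 	 * Race to claim the cleanup: only the thread that observes the
> > > 	 * count at 0 and swings it to 1 wins; all others just return.
> > > 	 */
> > > 	if (atomic_compare_exchange_strong(&refcnt, &expected, 1))
> > > 		printf("this thread frees the external storage\n");
> > > 	else
> > > 		printf("still referenced, or another thread is freeing\n");
> > > }
> > > 
> > > int
> > > main(void)
> > > {
> > > 	drop_ref();	/* 2 -> 1: no cleanup */
> > > 	drop_ref();	/* 1 -> 0: this call wins the cmpset and frees */
> > > 	return (0);
> > > }
> > > ---------------------------------------------------------------------------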
> > 
> > You've just proved why we need atomic ops that aren't so clumsy.
> > 
> > Your solution doesn't work; the code is broken.
> > 
> > You can't use atomic_cmpset_long to do a counter like that; it's
> > the same race.  You're just seeing if it goes to zero, and if it
> > doesn't, how do you decrement it?  Use atomic_subtract_long?  No,
> > same race as the old code.
> 
> Please read my code and explanation again.  You seem to have missed the
> order of arguments to atomic_cmpset_long().  The cmpset call detects
> whether the refcount is already 0, and only allows one thread to increment
> it; that thread then does the cleanup.

Ok, can you please commit the fix then?
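
For the archives, the contract being relied on, sketched non-atomically
for readability only (the real atomic_cmpset_long is a locked cmpxchg;
this pseudo-implementation and its name are just for reading the
argument order):
---------------------------------------------------------------------------
/*
 * Pseudo-implementation of the atomic_cmpset_long() contract: compare
 * *dst against exp, store src only on a match, and return nonzero iff
 * the store happened.  So atomic_cmpset_long(&refcnt, 0, 1) succeeds
 * for exactly one thread, the one that sees the count hit 0, and that
 * thread does the cleanup.
 */
static int
cmpset_long_contract(volatile long *dst, long exp, long src)
{
	if (*dst != exp)
		return (0);	/* no match: *dst is left untouched */
	*dst = src;		/* match: exactly one winner stores src */
	return (1);
}
---------------------------------------------------------------------------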

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."

