>
>
>There are further complexities, because the driver cannot know if any
>upstream callers are still busy until it calls mac_unregister().  And
>then, because mac_unregister() *also* frees the structure, it's too late.
>  
>

If there are indeed upstream callers, mi_ref is non-zero and
mac_unregister() will fail. So it should be sufficient for the driver to
quiesce the chip and become single-threaded on the MAC endpoint before
calling mac_unregister(), to avoid the race. Another question: what does
xxx_m_stop really mean? Why can't the driver quiesce everything at that
point itself? If mac_unregister() succeeds, it implies that an
xxx_m_stop has happened before it. It doesn't seem right that we can
still have upcall threads and notifications going on even after
mac_stop() has finished successfully.
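
In other words, a detach path ordered roughly like the sketch below
(the xxx_* softc fields and the quiesce/drain helpers are made-up
driver internals, not Nemo interfaces) should close the window Garrett
describes:

    static int
    xxx_detach(dev_info_t *dip, ddi_detach_cmd_t cmd)
    {
        xxx_softc_t *sc = ddi_get_driver_private(dip);

        if (cmd != DDI_DETACH)
            return (DDI_FAILURE);

        /* Quiesce the chip: disable interrupts, stop rx/tx DMA. */
        xxx_chip_quiesce(sc);

        /*
         * Wait until no driver thread can still be calling up into
         * Nemo (interrupt handlers, taskqs, periodic timers).
         */
        xxx_drain_threads(sc);

        /*
         * Only now tear down the MAC endpoint.  If upstream
         * references are still held (mi_ref != 0), this fails and
         * the driver backs out.
         */
        if (mac_unregister(sc->sc_mh) != 0)
            return (DDI_FAILURE);

        return (DDI_SUCCESS);
    }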

Thirumalai

Garrett D'Amore wrote:

>On Fri, 2007-09-07 at 08:25 -0400, Sebastien Roy wrote:
>  
>
>>Paul Durrant wrote:
>>    
>>
>>>On 06/09/07, Garrett D'Amore <[EMAIL PROTECTED]> wrote:
>>>      
>>>
>>>>(Obviously mac_alloc() here would return "mac_t *", but under the covers
>>>>it is really allocating a whole mac_impl_t.  And mac_free() would be
>>>>free()'ing a whole mac_impl_t, though it is taking a mac_t as its
>>>>argument.)
>>>>
>>>>If we want to pursue this course of action, I can put together a webrev
>>>>with the changes... it will take me a while because there are a *lot* of
>>>>drivers to change... but I want to get this change in *before* we make
>>>>GLDv3 public.
>>>>        
>>>>
>>>FWIW, I find the current mac_register_t scheme ugly; I don't see why a
>>>statically defined mac_register_t cannot simply be passed into
>>>mac_register(). I don't see why we need separate allocator functions
>>>there.
>>>      
>>>
>>It's done this way so that the structure can be expanded without 
>>affecting binary compatibility.  We can add a new field to the structure, 
>>and the system won't crash and burn when an existing driver registers. 
>>The framework will allocate an appropriately sized structure, but the 
>>older driver simply won't fill in the new fields it doesn't know about.
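
For reference, the registration sequence being discussed looks roughly
like this in a driver's attach path (a sketch from memory; the field
values shown are illustrative):

    mac_register_t *macp;
    int err;

    if ((macp = mac_alloc(MAC_VERSION)) == NULL)
        return (DDI_FAILURE);

    macp->m_type_ident = MAC_PLUGIN_IDENT_ETHER;
    macp->m_driver = sc;                  /* driver soft state */
    macp->m_dip = dip;
    macp->m_src_addr = sc->sc_macaddr;
    macp->m_callbacks = &xxx_m_callbacks;
    macp->m_min_sdu = 0;
    macp->m_max_sdu = ETHERMTU;

    err = mac_register(macp, &sc->sc_mh);
    mac_free(macp);

Because the framework allocates the structure, fields added in a later
release simply stay at their zeroed defaults for a driver built against
the older definition.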
>>
>>    
>>
>>>I also don't see that you can't break this race using reference
>>>counting. There's no need for mac_unregister() to actually free the
>>>mac_impl_t; it could just drop a ref. and the last man out frees.
>>>Let's try not to complicate the interface because of implementation
>>>artifacts.
>>>      
>>>
>>I don't believe a reference counting scheme makes a difference in the 
>>scenario that Thiru described.  He's describing a scenario where the 
>>mac_impl_t is freed by mac_unregister() before another thread gets a 
>>chance to examine that mac_impl_t to verify its contents (either for a 
>>flag or a reference count, it doesn't matter, the end result is still the 
>>same; boom!)
>>
>>IMO, this case is borderline a driver bug.  The driver has called some 
>>MAC function (which hasn't returned yet), and decides to call 
>>mac_unregister().  mac_unregister() is documented to invalidate the MAC 
>>handle, so drivers shouldn't knowingly call mac_unregister() while using 
>>that handle, or use the handle during or after calls to mac_unregister().
>>
>>To me, this is a similar problem to calling free() on a buffer that an 
>>application knows that it's still using...  Synchronization on the 
>>allocation and deallocation of that buffer can't be done by the memory 
>>framework, it has to be done by the application.  The framework can't 
>>possibly know what the application is going to do with that buffer once 
>>the free() function has been called.
>>    
>>
>
>There are further complexities, because the driver cannot know if any
>upstream callers are still busy until it calls mac_unregister().  And
>then, because mac_unregister() *also* frees the structure, it's too late.
>
>Forcing drivers to work around this will incur substantial code
>complexity across every Nemo driver.  (Think an extra lock, flag, and
>condition variable used with *every* place that a driver calls up into
>Nemo.)  I think that is a poor decision.
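
The workaround being described would mean bracketing every call up into
Nemo with something like the following (the lock, counter, and flag
names are invented for the example):

    mutex_enter(&sc->sc_upcall_lock);
    if (sc->sc_detaching) {
        mutex_exit(&sc->sc_upcall_lock);
        freemsg(mp);
        return;
    }
    sc->sc_upcalls++;
    mutex_exit(&sc->sc_upcall_lock);

    mac_rx(sc->sc_mh, NULL, mp);          /* the actual upcall */

    mutex_enter(&sc->sc_upcall_lock);
    if (--sc->sc_upcalls == 0)
        cv_broadcast(&sc->sc_upcall_cv);
    mutex_exit(&sc->sc_upcall_lock);

...plus a matching cv_wait() loop in the detach path before
mac_unregister() is ever called.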
>
>       -- Garrett
>
>
>  
>
