On Wed, 30 Jul 2025 17:47:52 -0400
Peter Xu <pet...@redhat.com> wrote:

> On Wed, Jul 30, 2025 at 02:39:29PM +0200, Igor Mammedov wrote:
> > diff --git a/system/memory.c b/system/memory.c
> > index 5646547940..9a5a262112 100644
> > --- a/system/memory.c
> > +++ b/system/memory.c
> > @@ -2546,6 +2546,12 @@ void 
> > memory_region_clear_flush_coalesced(MemoryRegion *mr)
> >      }
> >  }
> >  
> > +void memory_region_enable_lockless_io(MemoryRegion *mr)
> > +{
> > +    mr->lockless_io = true;

    /*
     * reentrancy_guard has per-device scope; when enabled, it
     * effectively prevents concurrent access to the device's IO
     * MemoryRegion(s) by not calling the accessor callback.
     *
     * Turn it off for lockless-IO-enabled devices to allow
     * concurrent IO.
     * TODO: remove this when reentrancy_guard becomes per-transaction.
     */
Would something like this be sufficient?

> > +    mr->disable_reentrancy_guard = true;  
> 
> IIUC this is needed only because the re-entrancy guard is not
> per-transaction but per-device, am I right?
> 
> Maybe some comment would be nice here to explain how mmio concurrency could
> affect this.  If my above comment is correct, it could also be a TODO so we
> could re-enable this when it is per-transaction (even though I don't know
> whether it's easy / useful to do..).
> 
> Thanks,
> 

