On Thu, 2017-12-21 at 13:37 +0100, Maarten Lankhorst wrote:
> Hey,
> 
> Op 19-12-17 om 06:26 schreef Dhinakaran Pandiyan:
> > Convert the power_domains->domain_use_count array that tracks per-domain
> > use count to atomic_t type. This is needed to be able to read/write the use
> > counts outside of the power domain mutex.
> >
> > Cc: Daniel Vetter <daniel.vet...@ffwll.ch>
> > Cc: Ville Syrjälä <ville.syrj...@linux.intel.com>
> > Cc: Rodrigo Vivi <rodrigo.v...@intel.com>
> > Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandi...@intel.com>
> > ---
> >  drivers/gpu/drm/i915/i915_debugfs.c     |  2 +-
> >  drivers/gpu/drm/i915/i915_drv.h         |  2 +-
> >  drivers/gpu/drm/i915/intel_runtime_pm.c | 11 +++++------
> >  3 files changed, 7 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> > index 1a7b28f62570..1f1d9162f2c2 100644
> > --- a/drivers/gpu/drm/i915/i915_debugfs.c
> > +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> > @@ -2764,7 +2764,7 @@ static int i915_power_domain_info(struct seq_file *m, void *unused)
> >             for_each_power_domain(power_domain, power_well->domains)
> >                     seq_printf(m, "  %-23s %d\n",
> >                              intel_display_power_domain_str(power_domain),
> > -                            power_domains->domain_use_count[power_domain]);
> > +                            atomic_read(&power_domains->domain_use_count[power_domain]));
> >     }
> >  
> >     mutex_unlock(&power_domains->lock);
> > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> > index 1e4e613e7b41..ddadeb9eaf49 100644
> > --- a/drivers/gpu/drm/i915/i915_drv.h
> > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > @@ -1489,7 +1489,7 @@ struct i915_power_domains {
> >     int power_well_count;
> >  
> >     struct mutex lock;
> > -   int domain_use_count[POWER_DOMAIN_NUM];
> > +   atomic_t domain_use_count[POWER_DOMAIN_NUM];
> >     struct i915_power_well *power_wells;
> >  };
> >  
> > diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
> > index 96ab74f3d101..992caec1fbc4 100644
> > --- a/drivers/gpu/drm/i915/intel_runtime_pm.c
> > +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
> > @@ -1453,7 +1453,7 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
> >     for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain))
> >             intel_power_well_get(dev_priv, power_well);
> >  
> > -   power_domains->domain_use_count[domain]++;
> > +   atomic_inc(&power_domains->domain_use_count[domain]);
> >  }
> >  
> >  /**
> > @@ -1539,10 +1539,9 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
> >  
> >     mutex_lock(&power_domains->lock);
> >  
> > -   WARN(!power_domains->domain_use_count[domain],
> > -        "Use count on domain %s is already zero\n",
> > +   WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0,
> > +        "Use count on domain %s was already zero\n",
> >          intel_display_power_domain_str(domain));
> > -   power_domains->domain_use_count[domain]--;
> >  
> >     for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain))
> >             intel_power_well_put(dev_priv, power_well);
> > @@ -3049,7 +3048,7 @@ static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv)
> >             for_each_power_domain(domain, power_well->domains)
> >                     DRM_DEBUG_DRIVER("  %-23s %d\n",
> >                                      intel_display_power_domain_str(domain),
> > -                                    power_domains->domain_use_count[domain]);
> > +                                    atomic_read(&power_domains->domain_use_count[domain]));
> >     }
> >  }
> >  
> > @@ -3092,7 +3091,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
> >  
> >             domains_count = 0;
> >             for_each_power_domain(domain, power_well->domains)
> > -                   domains_count += power_domains->domain_use_count[domain];
> > +                   domains_count += atomic_read(&power_domains->domain_use_count[domain]);
> >  
> >             if (power_well->count != domains_count) {
> >                     DRM_ERROR("power well %s refcount/domain refcount mismatch "
> 
> I can imagine this will start failing really badly. The previous code
> assumed that everything is protected by power_domains->lock, and this
> change makes that no longer the case..
> 

This won't fail until the next patch, where these counts are read outside
the mutex; that patch reads the values under the new spin_lock. I was
trying to split the changes so that the next patch does not become too
heavy.

> I see the rest of the series changes things even more, but it would be
> better if the locking rework were done in a single patch, and not bolted
> on..
> 
I see your point, I can squash them together.

> And instead of using atomic_t, there is a refcount implementation in
> refcount.h; it could be used here to lock the power wells only when the
> count would drop to zero..
> 

So, the power wells have another refcount (which controls power well
enable and disable), and that one does need the lock. It's not clear to me
why we would need to lock the power wells when domain_use_count drops to
zero. The domain_use_count array that I am converting to atomic_t is only
used for debug output and for verifying that the power well users are
accounted for; it does not control any hardware state. And the reason I am
converting it to atomic_t is to be able to update it outside the spin
locks. Let me know if my understanding is wrong.



> Cheers,
> Maarten
> 
> _______________________________________________
> Intel-gfx mailing list
> intel-...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel