On Mon, 27 Jun 2011 10:54:48 -0700
Eric Anholt <[email protected]> wrote:

> > +   for (gpu_freq = dev_priv->max_delay; gpu_freq >= dev_priv->min_delay;
> > +        gpu_freq--) {
> > +           int diff = dev_priv->max_delay - gpu_freq;
> > +
> > +           /*
> > +            * For GPU frequencies less than 750MHz, just use the lowest
> > +            * ring freq.
> > +            */
> > +           if (gpu_freq < min_freq)
> > +                   ia_freq = 800;
> > +           else
> > +                   ia_freq = max_ia_freq - ((diff * scaling_factor) / 2);
> > +           ia_freq = DIV_ROUND_CLOSEST(ia_freq, 100);  
> 
> If the GPU has a wide enough clock range (diff large) and the CPU is low
> enough clocked (max_ia_freq low now), could we end up with the ia_freq <
> 800, and would that be a bad thing?  In other words, should
> scaling_factor be non-constant?

scaling_factor probably should be non-constant, but I don't know what
function it should follow.

ia_freq < 800 shouldn't break anything, but would probably result in
sub-optimal GPU performance.  OTOH it would save power...

-- 
Jesse Barnes, Intel Open Source Technology Center
_______________________________________________
Intel-gfx mailing list
[email protected]
http://lists.freedesktop.org/mailman/listinfo/intel-gfx