Re: Should drivers like nvme let userspace control their latency via dev_pm_qos?

2016-09-22 Thread Andy Lutomirski
On Thu, Sep 22, 2016 at 6:26 PM, Rafael J. Wysocki wrote:
> On 9/16/2016 5:26 PM, Andy Lutomirski wrote:
>>
>> I'm adding power management to the nvme driver, and I'm exposing
>> exactly one knob via sysfs: the maximum permissible latency.  This
>> isn't a power domain issue, and it has no dependencies -- it's
>> literally just the maximum latency that the driver may impose on I/O
>> for power saving purposes.
>>
>> ISTM userspace should be able to specify its own latency tolerance in
>> a uniform way, and dev_pm_qos seems like the natural interface for
>> this, except that I cannot find a single instance in the tree of *any*
>> driver using it via the notifier mechanism.
>
>
> That's because the notifier mechanism is only used for the "resume latency"
> type of constraints.
>
>> I can find two drivers that do it using
>> dev_pm_qos_expose_latency_tolerance(), and both are LPSS drivers?
>
>
> That's correct.  Nobody else has used it so far. :-)
>
>> So: should I be exposing .set_latency_tolerance() or should I just use
>> a custom sysfs attribute?  Or both?
>
>
> dev_pm_qos_expose_latency_tolerance() adds a single latency tolerance
> request object to the device and exposes a knob in user space by which that
> request object can be controlled.  There may be more latency tolerance
> request objects for the same device if kernel code adds them.  The effective
> latency tolerance is the minimum of all those requests and the callback is
> invoked every time that effective value changes.
>
> This also is described in the last section of
> Documentation/power/pm_qos_interface.txt (note that if the
> .set_latency_tolerance callback is present at the device registration time
> already, the latency tolerance sysfs attribute will be exposed automatically
> by the driver core).
>
> If that mechanism is suitable for the use case in question, I'd just use it.

OK, I'll play with it.
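
For reference, a minimal sketch of how a driver could hook into this
mechanism: install a .set_latency_tolerance callback and translate the
effective tolerance into a hardware setting.  The foo_* names and the
latency mapping are hypothetical, not the eventual nvme code; the pm_qos
constants and the dev->power.set_latency_tolerance field are the real
kernel interfaces being discussed.

#include <linux/device.h>
#include <linux/pm_qos.h>

struct foo_ctrl {
	struct device *dev;
};

/*
 * Hypothetical hardware hook: cap how long any power-saving state may
 * delay the next I/O.  Device-specific programming omitted.
 */
static void foo_apply_max_latency(struct foo_ctrl *ctrl, s32 us)
{
}

/* Called whenever the effective latency tolerance for the device changes. */
static void foo_set_latency_tolerance(struct device *dev, s32 val)
{
	struct foo_ctrl *ctrl = dev_get_drvdata(dev);

	switch (val) {
	case PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT:
	case PM_QOS_LATENCY_ANY:
		/* No constraint: allow the deepest power-saving states. */
		foo_apply_max_latency(ctrl, S32_MAX);
		break;
	default:
		foo_apply_max_latency(ctrl, val);
	}
}

static int foo_setup_latency_tolerance(struct foo_ctrl *ctrl)
{
	/*
	 * If the callback is set before device registration, the driver
	 * core exposes power/pm_qos_latency_tolerance_us automatically.
	 * If it is set afterwards, as here, expose the knob explicitly.
	 */
	ctrl->dev->power.set_latency_tolerance = foo_set_latency_tolerance;

	return dev_pm_qos_expose_latency_tolerance(ctrl->dev);
}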


Re: Should drivers like nvme let userspace control their latency via dev_pm_qos?

2016-09-22 Thread Rafael J. Wysocki

On 9/16/2016 5:26 PM, Andy Lutomirski wrote:

> I'm adding power management to the nvme driver, and I'm exposing
> exactly one knob via sysfs: the maximum permissible latency.  This
> isn't a power domain issue, and it has no dependencies -- it's
> literally just the maximum latency that the driver may impose on I/O
> for power saving purposes.
>
> ISTM userspace should be able to specify its own latency tolerance in
> a uniform way, and dev_pm_qos seems like the natural interface for
> this, except that I cannot find a single instance in the tree of *any*
> driver using it via the notifier mechanism.


That's because the notifier mechanism is only used for the "resume 
latency" type of constraints.



> I can find two drivers that do it using
> dev_pm_qos_expose_latency_tolerance(), and both are LPSS drivers?


That's correct.  Nobody else has used it so far. :-)


> So: should I be exposing .set_latency_tolerance() or should I just use
> a custom sysfs attribute?  Or both?


dev_pm_qos_expose_latency_tolerance() adds a single latency tolerance 
request object to the device and exposes a knob in user space by which 
that request object can be controlled.  There may be more latency 
tolerance request objects for the same device if kernel code adds them.  
The effective latency tolerance is the minimum of all those requests and 
the callback is invoked every time that effective value changes.


This also is described in the last section of 
Documentation/power/pm_qos_interface.txt (note that if the 
.set_latency_tolerance callback is present at the device registration 
time already, the latency tolerance sysfs attribute will be exposed 
automatically by the driver core).


If that mechanism is suitable for the use case in question, I'd just use it.

Thanks,

Rafael
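
To illustrate the point about multiple request objects: any kernel code
can stack its own latency tolerance request on the same device, and
.set_latency_tolerance() then sees the minimum of that request and
whatever user space wrote to power/pm_qos_latency_tolerance_us.  A
sketch with a hypothetical in-kernel consumer (the foo_* names are made
up; the dev_pm_qos calls are the real API):

#include <linux/device.h>
#include <linux/pm_qos.h>

/* Hypothetical second consumer of the same device. */
static struct dev_pm_qos_request foo_ltr_req;

static int foo_constrain_latency(struct device *dev)
{
	/*
	 * Allow at most 100 us of PM-induced delay.  The effective
	 * tolerance handed to .set_latency_tolerance() becomes the
	 * minimum of this request and all others on the device.
	 */
	return dev_pm_qos_add_request(dev, &foo_ltr_req,
				      DEV_PM_QOS_LATENCY_TOLERANCE, 100);
}

static void foo_relax_latency(void)
{
	/*
	 * Drop the constraint; the effective value is recomputed and the
	 * callback fires again if it changes.
	 */
	dev_pm_qos_remove_request(&foo_ltr_req);
}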




Re: Should drivers like nvme let userspace control their latency via dev_pm_qos?

2016-09-16 Thread Andy Lutomirski
On Fri, Sep 16, 2016 at 8:54 AM, One Thousand Gnomes wrote:
> On Fri, 16 Sep 2016 08:26:03 -0700
> Andy Lutomirski  wrote:
>
>> I'm adding power management to the nvme driver, and I'm exposing
>> exactly one knob via sysfs: the maximum permissible latency.  This
>> isn't a power domain issue, and it has no dependencies -- it's
>> literally just the maximum latency that the driver may impose on I/O
>> for power saving purposes.
>
> Why is this in the driver? Surely the latency is a property of the
> request queue and the requests being made. Now it may well be that it's
> implemented as min(list-of-queues), but a device sysfs node seems a
> strange place to stick it.
>

I'm not sure what you mean.  The whole device can be programmed to
take a nap when fully idle.  The driver can limit how deep that nap is
and thus how long the next request can be delayed while the device
wakes back up.

Unlike with the very small number of in-tree users of this type of
mechanism that I can find, there are no buses or hard tolerances
involved here.  The only effect is a power consumption vs. I/O
performance tradeoff.

--Andy
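
The policy being described here boils down to picking the deepest idle
state whose wakeup latency still fits under the tolerance.  A rough
sketch of that selection logic follows; the state table and field names
are invented for illustration (a real device would report its own
latencies), not taken from the NVMe spec or driver:

#include <linux/kernel.h>
#include <linux/types.h>

/* Hypothetical device idle states, ordered shallow to deep. */
struct foo_idle_state {
	u32 exit_latency_us;	/* worst-case wakeup delay */
	u32 power_mw;		/* approximate draw while idle */
};

static const struct foo_idle_state foo_states[] = {
	{ .exit_latency_us =    10, .power_mw = 400 },
	{ .exit_latency_us =   500, .power_mw =  50 },
	{ .exit_latency_us = 10000, .power_mw =   5 },
};

/* Deepest state whose exit latency fits the tolerance; -1 = stay awake. */
static int foo_pick_deepest_state(s32 tolerance_us)
{
	int i, best = -1;

	for (i = 0; i < ARRAY_SIZE(foo_states); i++)
		if (foo_states[i].exit_latency_us <= tolerance_us)
			best = i;

	return best;
}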


Re: Should drivers like nvme let userspace control their latency via dev_pm_qos?

2016-09-16 Thread One Thousand Gnomes
On Fri, 16 Sep 2016 08:26:03 -0700
Andy Lutomirski  wrote:

> I'm adding power management to the nvme driver, and I'm exposing
> exactly one knob via sysfs: the maximum permissible latency.  This
> isn't a power domain issue, and it has no dependencies -- it's
> literally just the maximum latency that the driver may impose on I/O
> for power saving purposes.

Why is this in the driver? Surely the latency is a property of the
request queue and the requests being made. Now it may well be that it's
implemented as min(list-of-queues), but a device sysfs node seems a
strange place to stick it.

Alan