On Thu, May 6, 2021 at 10:55 AM Olivier Yiptong <oyipt...@chromium.org>
wrote:
>
> Hi, and thank you for your prompt response. Replies inline.
>>
>>    1. CPU utilization isn't something which can be easily computed or
>>    reasoned on asymmetric multi-core CPUs, not to mention the dynamic
>>    adjustment of CPU frequency further complicates the matter.
>
>
> Are you alluding to CPU architectures like big.LITTLE, and to dynamic
> clock frequencies?
>
> This proposal takes this into account, here's how:
>
...
>
> The goal of CPU Speed is to abstract away those details and provide clock
> frequency data that is actionable and yet preserves user privacy.

The issue isn't whether we can come up with some number that represents
the current CPU load; I'm sure we can. The issue lies in how such a number
can be interpreted.

In our experience with iOS and macOS, CPU utilization & speed are poor
metrics to use for adjusting any software behavior because they are highly
dynamic and respond quickly to what an application is doing. It's a lot
better to *measure* the actual runtime taken to do a specific task and
adjust the behavior accordingly.
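
As a rough sketch of what I mean (all names, the 8 ms budget, and the
chunk-size bounds below are illustrative, not from any real codebase):

    // Time the actual work and adapt to the measurement, instead of
    // guessing from CPU utilization or clock speed.
    const budgetMs = 8;
    let chunkSize = 64;

    function processBatch(items: unknown[],
                          processChunk: (xs: unknown[]) => void) {
      const start = performance.now();
      processChunk(items.splice(0, chunkSize));
      const elapsed = performance.now() - start;
      // Grow the chunk when we finish well under budget;
      // shrink it when we overrun.
      if (elapsed < budgetMs / 2) {
        chunkSize = Math.min(chunkSize * 2, 4096);
      } else if (elapsed > budgetMs) {
        chunkSize = Math.max(Math.floor(chunkSize / 2), 1);
      }
    }

This self-corrects on any hardware, whatever the cache sizes or core
types happen to be, because it observes the only quantity that actually
matters: how long the work took.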

>>    2. Whether the system itself is under a heavy CPU load or not should
>>    not have any bearing on how much CPU time a website is entitled to
>>    use because the background CPU utilization may spontaneously change,
>>    and the reason for a high or a low CPU utilization may depend on what
>>    the website is doing; e.g. a daemon which wakes up in response to a
>>    network request or some file access.
>
> On the web, people write one-size-fits-all applications. Those need to
> run across devices of varying compute power capabilities.
> For compute-intensive applications, this poses problems and leads to bad
> UX in the absence of hints from the system.

This is precisely why we can't rely on CPU utilization or speed to
determine how fast an application, or a specific task thereof, will
complete. There is huge variability across CPUs in instructions per cycle
and in how much work can be performed in each cycle. The sizes of the
L1/L2/L3 caches, cache-coherency mechanisms (potentially with other
cores), the prefetcher, the capability and size of the branch predictor,
etc. can all influence how fast a given application will run. We can't
estimate how fast an application will run based purely on the percentage
of CPU utilization or on what fraction of the maximum frequency / power
the CPU is operating at.

> The proposal does not enforce any additional usage of resources, but
> instead allows apps to make informed decisions.
> It is common for compute-intensive applications to self-throttle to
> provide a good UX.
> One example is gaming: reducing drawing distance, effects, texture sizes
> or level of detail for geometries if it's affecting frame rate.

Those games are better off dynamically adjusting their behavior based on
how long each frame is taking to draw.
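
For instance, a sketch of frame-time-driven adaptation (the thresholds
and render-scale bounds are illustrative, and a real engine would smooth
over several frames rather than reacting to one):

    let renderScale = 1.0;
    let lastTime = performance.now();

    function frame(now: number) {
      const frameMs = now - lastTime;
      lastTime = now;
      if (frameMs > 20 && renderScale > 0.5) {
        renderScale -= 0.05;    // missing the 60 Hz budget: draw less
      } else if (frameMs < 14 && renderScale < 1.0) {
        renderScale += 0.01;    // headroom: claw quality back slowly
      }
      // drawScene(renderScale);  // hypothetical renderer hook
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);

No CPU statistic is consulted, and the loop converges no matter why the
frames got slow: thermal throttling, E-core scheduling, or another
application hogging the machine.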

> On the Nintendo Switch, game engines have the feature to reduce the
> rendering resolution when framerate drops occur or are anticipated.
> On the Nintendo Switch, the compute power capability depends on whether
> the device is plugged in or in portable mode.
> There might also be thermal factors.

Here is the thing. On Apple's platforms, if an application adjusts itself
to do less work, then we'd automatically start throttling the CPU or stop
using P-cores to conserve battery; you might then drop frames because the
CPU is running at a slower speed, or is no longer running on P-cores and
takes some time to ramp up again. It is impractical for an application to
respond to these changes based on CPU utilization or speed information
because these states change so dynamically over time.

It's also highly inappropriate for a web app to assume that it can use all
the remaining CPU resources when there could be other windows and
applications the user is interacting with.

> For the Compute Pressure API, we've examined a few use-cases, and they
> are detailed in the explainer.
> This is similar to video conferencing needs: reducing the number of
> simultaneous video feeds, or scaling back image-processing effects.
>
> Indeed, the reason for the load might be intrinsic to the application,
> or extrinsic, based on what's going on with the system.
> This API, as proposed, provides system-level hints that help
> applications make informed decisions to provide a good UX for users.

We're very uncomfortable with exposing this kind of invasive system
information in a Web API, and more importantly, with letting web
applications adjust their workload based on such information.
JavaScriptCore is a highly sophisticated JIT engine that is known to
perform very well across a variety of hardware classes and generations;
yet it doesn't adjust its workload based on the CPU utilization or power
state of the system. Given that, we are highly skeptical of your premise
that an API like this is needed to create a performant application in the
first place.

The explainer for this API mentions not letting the device heat up. That
is the job of the user agent and the underlying operating system, not of
individual applications, let alone websites and web apps. We're also
highly skeptical that this API will allow web apps to measure the power
impact of a new feature or behavior change. Power consumption depends on
a variety of things. For example, executing a task fast at 100% so that
the CPU can go to sleep quickly is often better than running at ~10%
capacity for a long period of time.
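
To make that concrete with made-up but plausible numbers (say the package
draws 10 W flat out and 2 W at ~10% utilization):

    race to idle:  10 W x  1 s = 10 J, then the package sleeps
    slow and low:   2 W x 10 s = 20 J before the CPU ever idles

The slow path loses despite its lower instantaneous power, and no
utilization number exposed to a web page would tell you that.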

I could go on for many more paragraphs, but I don't think that's a
productive use of my time, so I'll stop here. Please don't take the lack
of a future response as evidence of support or indifference unless we
explicitly say otherwise. We are and will remain against this API unless
otherwise stated in written form elsewhere.

- R. Niwa
