On Mon, 28 Oct 2019 11:33:57 -0700
Yiwei Zhang <zzyi...@google.com> wrote:

> On Mon, Oct 28, 2019 at 8:26 AM Jerome Glisse <jgli...@redhat.com> wrote:
> 
> > On Fri, Oct 25, 2019 at 11:35:32AM -0700, Yiwei Zhang wrote:  
> > > Hi folks,
> > >
> > > This is the plain text version of the previous email in case that was
> > > considered as spam.

Hi,

you still had an HTML attachment. More comments below.

> > >
> > > --- Background ---
> > > On the downstream Android, vendors used to report GPU private memory
> > > allocations with debugfs nodes in their own formats. However, debugfs
> > > nodes are getting deprecated in the next Android release.

...

> > > For the 2nd level "pid", there are usually just a couple of them per
> > > snapshot, since we only take snapshots for the active ones.
> >
> > I do not understand here. You can have any number of applications with
> > GPU objects, and thus there is no bound on the number of PIDs. Please
> > consider desktops too; I do not know what kind of limitations Android
> > imposes.
> >  
> 
> We are only interested in tracking *active* GPU private allocations. So
> yes, any application currently holding an active GPU context will
> probably have a node here. Since we want to do profiling for specific
> apps, the data has to be per-application. I don't get your concerns
> here. If it's about the tracking overhead, it's rare to see tons of
> applications doing private GPU allocations at the same time. Could you
> help elaborate a bit?

Toolkits for the Linux desktop, at least GTK 4, are moving to
GPU-accelerated rendering by default AFAIK. This means that every
application using such a toolkit will have an active GPU context created
and used at all times. So potentially every single end-user application
running on a system may have a GPU context, even a simple text editor.

In my opinion tracking per process is good, but you cannot sidestep the
question of tracking performance by saying that there are only a few
processes using the GPU.

What is an "active" GPU private allocation? This implies that there are
also inactive allocations; what are those?


Let's say you have a bunch of apps and the memory consumption is spread
out into sysfs files like you propose. How would one get a coherent
view of total GPU private memory usage in a system? Iterating through
all sysfs files in userspace and summing up won't work, because each
file will be sampled at a different time, which means the result is not
coherent. Separate files for accumulated statistics perhaps?

What about getting a coherent view of the total GPU private memory
consumption of a single process? I think the same caveat and solution
would apply.


Thanks,
pq


_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel