Hi Yiwei
After some deliberation on how to move forward with my BO Labeling patches[1],
we've come up with the following structure for debugfs entries:
/debugfs/dri/128/bo//label
/debugfs/dri/128/bo//size
My initial idea was to count the total memory allocated for a particular label
in the kernel
Hi Yiwei
On Thursday, 19 December 2019 19:52:26 (CET) Yiwei Zhang wrote:
> Hi Rohan,
>
> Thanks for pointing out the pids issue! Then the index would be {namespace
> + pid(in that namespace)}. I'll grab a setup and play with the driver to
> see what I can do. I know how to find an Intel or
Hey
> Is it reasonable to add another ioctl or something equivalent to label
> a BO with what PID makes the allocation? When the BO gets shared to
> other processes, this information also needs to be tracked somewhere
> for tracking. Basically I wonder if it's possible for upstream to
> track
Hi folks,
Would we be able to track the following for each of the graphics KMDs:
(1) Global total memory
(2) Per-process total memory
(3) Per-process total memory not mapped to userland -> when it's
mapped it's shown in RSS, so this is to help complete the picture of
RSS
Would it be better reported
Thanks for all the comments and feedback, and they are all so valuable to me.
Let me summarize the main concerns so far here:
(1) Open source drivers never specify which API created a GEM
object (OpenGL, Vulkan, ...) nor for what purpose (transient, shader,
...).
(2) The ioctl to label anything
Hi folks,
What do you think about:
> For the sysfs approach, I'm assuming the upstream vendors still need
> to provide a pair of UMD and KMD, and this ioctl to label the BO is
> kept as driver private ioctl. Then will each driver just define their
> own set of "label"s and the KMD will only
On Tue, Nov 12, 2019 at 10:17:10AM -0800, Yiwei Zhang wrote:
> Hi folks,
>
> What do you think about:
> > For the sysfs approach, I'm assuming the upstream vendors still need
> > to provide a pair of UMD and KMD, and this ioctl to label the BO is
> > kept as driver private ioctl. Then will each
For the sysfs approach, I'm assuming the upstream vendors still need
to provide a pair of UMD and KMD, and this ioctl to label the BO is
kept as driver private ioctl. Then will each driver just define their
own set of "label"s and the KMD will only consume the corresponding
ones so that the sysfs
On Tue, Nov 5, 2019 at 1:47 AM Daniel Vetter wrote:
>
> On Mon, Nov 04, 2019 at 11:34:33AM -0800, Yiwei Zhang wrote:
> > Hi folks,
> >
> > (Daniel, I just moved you to this thread)
> >
> > Below are the latest thoughts based on all the feedback and comments.
> >
> > First, I need to clarify on
On Tue, Nov 05, 2019 at 11:45:28AM -0800, Yiwei Zhang wrote:
> Hi Daniel,
>
> > - The labels are currently free-form, baking them back into your structure
> > would mean we'd need to do lots of hot add/remove of sysfs directory
> > trees. Which sounds like a real bad idea :-/
> Given the free
Hi Daniel,
> - The labels are currently free-form, baking them back into your structure
> would mean we'd need to do lots of hot add/remove of sysfs directory
> trees. Which sounds like a real bad idea :-/
Given the free form of that ioctl, what's the plan of using that and
the reporting of the
On Mon, Nov 04, 2019 at 11:34:33AM -0800, Yiwei Zhang wrote:
> Hi folks,
>
> (Daniel, I just moved you to this thread)
>
> Below are the latest thoughts based on all the feedback and comments.
>
> First, I need to clarify on the gpu memory object type enumeration
> thing. We don't want to
Hi folks,
(Daniel, I just moved you to this thread)
Below are the latest thoughts based on all the feedback and comments.
First, I need to clarify the GPU memory object type enumeration
thing. We don't want to enforce those enumerations across upstream
and Android, and we should just
On Thu, 31 Oct 2019 13:57:00 -0400
Kenny Ho wrote:
> Hi Yiwei,
>
> This is the latest series:
> https://patchwork.kernel.org/cover/11120371/
>
> (I still need to reply to some of the feedback.)
>
> Regards,
> Kenny
>
> On Thu, Oct 31, 2019 at 12:59 PM Yiwei Zhang wrote:
> >
> > Hi Kenny,
> >
>
Hi Kenny,
Thanks for the info. Do you mind forwarding the existing discussion to me
or having me cc'ed on that thread?
Best,
Yiwei
On Wed, Oct 30, 2019 at 10:23 PM Kenny Ho wrote:
> Hi Yiwei,
>
> I am not sure if you are aware, there is an ongoing RFC on adding drm
> support in cgroup for the
Hi Yiwei,
This is the latest series:
https://patchwork.kernel.org/cover/11120371/
(I still need to reply to some of the feedback.)
Regards,
Kenny
On Thu, Oct 31, 2019 at 12:59 PM Yiwei Zhang wrote:
>
> Hi Kenny,
>
> Thanks for the info. Do you mind forwarding the existing discussion to me or
>
> What about getting a coherent view of the total GPU private memory
> consumption of a single process? I think the same caveat and solution
> would apply.
For the coherency issue, now I understand your concerns. Let me re-think
and come back. A total value per process is an option if we'd like
Hi folks,
Didn't realize Gmail has a plain text mode ; )
> In my opinion tracking per process is good, but you cannot sidestep the
> question of tracking performance by saying that there is only few
> processes using the GPU.
Agreed, I shouldn't make that statement. Thanks for the info as well!
Hi Yiwei,
I am not sure if you are aware, there is an ongoing RFC on adding drm
support in cgroup for the purpose of resource tracking. One of the
resource is GPU memory. It's not exactly the same as what you are
proposing (it doesn't track API usage, but it tracks the type of GPU
memory from
On Mon, 28 Oct 2019 11:33:57 -0700
Yiwei Zhang wrote:
> On Mon, Oct 28, 2019 at 8:26 AM Jerome Glisse wrote:
>
> > On Fri, Oct 25, 2019 at 11:35:32AM -0700, Yiwei Zhang wrote:
> > > Hi folks,
> > >
> > > This is the plain text version of the previous email in case that was
> > > considered
On Fri, Oct 25, 2019 at 11:35:32AM -0700, Yiwei Zhang wrote:
> Hi folks,
>
> This is the plain text version of the previous email in case that was
> considered as spam.
>
> --- Background ---
> On the downstream Android, vendors used to report GPU private memory
> allocations with debugfs nodes
Hi folks,
This is the plain text version of the previous email in case that was
considered as spam.
--- Background ---
On downstream Android, vendors used to report GPU private memory
allocations with debugfs nodes in their own formats. However, debugfs nodes
are getting deprecated in the