Re: [PATCH 0/6] drm: trace: Introduce drm_trace() and instrument drm_atomic.c

2019-11-08 Thread Daniel Vetter
On Fri, Nov 08, 2019 at 11:23:27AM -0500, Sean Paul wrote:
> On Fri, Nov 08, 2019 at 09:59:19AM +0100, Daniel Vetter wrote:
> > On Fri, Nov 08, 2019 at 09:46:48AM +0100, Daniel Vetter wrote:
> > > On Fri, Nov 08, 2019 at 10:16:59AM +0200, Pekka Paalanen wrote:
> > > > On Thu,  7 Nov 2019 16:02:59 -0500
> > > > Sean Paul  wrote:
> > > > 
> > > > > From: Sean Paul 
> > > > > 
> > > > > Hey all,
> > > > > I'm back with another trace events patchset. My first attempt [1] went
> > > > > better than expected, with enthusiasm for the idea and disdain for the
> > > > > implementation.
> > > > > 
> > > > > As promised, I went through and added proper trace events.
> > > > > 
> > > > > Before I get _too_ far, I wanted to post this set to get feedback on
> > > > > the direction I'm going. I've gone back and forth on whether to
> > > > > introduce a bunch of trace systems vs using the trace class enum. I've
> > > > > settled on the trace class enum since it seems more extensible and
> > > > > easier to use in production than just having the blunt "enable this
> > > > > system", or the tedious "enable each event I'm interested in
> > > > > individually" methods.
> > > > > 
> > > > > So, consider this one an RFC unless we're all in agreement, in which
> > > > > case we should apply it :)
> > > > >
> > > > 
> > > > Hi Sean,
> > > > 
> > > > thanks for working on this. I'm not at all familiar with the trace
> > > > infrastructure, so I'd like to know how flight recorder type of
> > > > behaviour (a continuously written fixed size ring buffer) can be
> > > > achieved. Let us have a display server in userspace which in production
> > > > hits an unexpected condition with DRM and needs to record information
> > > > for post-mortem debugging. What should it do to have a trace log after
> > > > the fact to attach with a bug report?
> > > 
> > > Yeah I think this has the uapi smell, so would need the userspace side
> > > too, once we agree on something.
> > > 
> 
> For our (Chrome OS) use, the uapi is pretty simple:
> 
> #!/usr/bin/env bash
> 
> mask=$1  # bitmask of trace classes to enable
> 
> pushd /sys/kernel/tracing
> echo "event_class & $mask" > events/drm/filter
> echo 1 > events/drm/enable
> echo 1 > tracing_on
> popd
> 
> 
> Then when we need to log something:
> 
> cat /sys/kernel/tracing/trace_pipe
> 
> 
> 
> > > > I assume that is a major use case for this new feature. Is performance
> > > > profiling another use case?
> 
> Not at present.
> 
> 
> > > > 
> > > > Is it ok to build userspace to rely on these trace events during normal
> > > > operations, e.g. for continuous adjustment of timings/timers?
> > > 
> > > Yeah I'm kinda freaked out about this. In the old thread we discussed the
> > > question of whether dumping dmesg into tracepoints would make them uapi,
> > > but no conclusion was reached.
> > > 
> > > Now instead we're going to duplicate a lot of these debug outputs, with
> > > specific trace events, which is going to make them even more uapi. And a
> > > lot of these are very much implementation details (drivers might change
> > > their opinion on which other states they need for check/commit, e.g. if
> > > they move a bunch of global state from crtcs into a new driver private
> > > internal object). Explicit tracepoints with symbol value imo will have a
> > > much higher chance that someone starts relying on them.
> > > 
> > > Also, dmesg is uapi too. Except there's a gentlemen's agreement that you
> > > should not depend upon it because it will make kernel engineers really pissed
> > > if they can't change/move a specific string. So from that point I think
> > > just dumping the existing debug strings as unstructured strings into trace
> > > buffers is a much better idea. Plus no need to have the same information
> > > duplicated in both a debug output and a tracepoint.
> > > 
> > > I think even slightly structured debug tracepoints (e.g. a simple
> > > (drm_object_id, string) pair) would be too structured since then userspace
> > > could start watching for events on specific things and maybe use that for
> > > their retry loops after TEST_ONLY failures. Even though we have a lot of
> > > debug output strings where that bit of structure is essentially there
> > > already.
> > > 
> > > Aside from this a question on process ... I re-read the old thread and I'm
> > > not sure how you concluded we've reached consensus on this here, seemed
> > > all pretty inconclusive on the details and discussions died out?
> > 
> > Reading the thread again I guess it was Joonas' comment that tracepoints
> > are also uapi, and you concluded we can't do this at all.
> > 
> > The trouble is, _everything_ that's observable from userspace is uapi. So
> > dmesg, timing of the code, ... For a real funky example of this look at
> > the ext4 regression, where better code resulted in less entropy generated
> > and the system hung at boot-up.
> > 
> > But on the flip side, it's only uapi we can't change if someone depends
> > upon it. For 

Re: [PATCH 0/6] drm: trace: Introduce drm_trace() and instrument drm_atomic.c

2019-11-08 Thread Sean Paul
On Fri, Nov 08, 2019 at 09:59:19AM +0100, Daniel Vetter wrote:
> On Fri, Nov 08, 2019 at 09:46:48AM +0100, Daniel Vetter wrote:
> > On Fri, Nov 08, 2019 at 10:16:59AM +0200, Pekka Paalanen wrote:
> > > On Thu,  7 Nov 2019 16:02:59 -0500
> > > Sean Paul  wrote:
> > > 
> > > > From: Sean Paul 
> > > > 
> > > > Hey all,
> > > > I'm back with another trace events patchset. My first attempt [1] went
> > > > better than expected, with enthusiasm for the idea and disdain for the
> > > > implementation.
> > > > 
> > > > As promised, I went through and added proper trace events.
> > > > 
> > > > Before I get _too_ far, I wanted to post this set to get feedback on the
> > > > direction I'm going. I've gone back and forth on whether to introduce a
> > > > bunch of trace systems vs using the trace class enum. I've settled on
> > > > the trace class enum since it seems more extensible and easier to use in
> > > > production than just having the blunt "enable this system", or
> > > > the tedious "enable each event I'm interested in individually" methods.
> > > > 
> > > > So, consider this one an RFC unless we're all in agreement, in which
> > > > case we should apply it :)
> > > >
> > > 
> > > Hi Sean,
> > > 
> > > thanks for working on this. I'm not at all familiar with the trace
> > > infrastructure, so I'd like to know how flight recorder type of
> > > behaviour (a continuously written fixed size ring buffer) can be
> > > achieved. Let us have a display server in userspace which in production
> > > hits an unexpected condition with DRM and needs to record information
> > > for post-mortem debugging. What should it do to have a trace log after
> > > the fact to attach with a bug report?
> > 
> > Yeah I think this has the uapi smell, so would need the userspace side
> > too, once we agree on something.
> > 

For our (Chrome OS) use, the uapi is pretty simple:

#!/usr/bin/env bash

mask=$1  # bitmask of trace classes to enable

pushd /sys/kernel/tracing
echo "event_class & $mask" > events/drm/filter
echo 1 > events/drm/enable
echo 1 > tracing_on
popd
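
(Invoked as e.g. "drm-trace-on.sh 0xff" to enable every class; the
script name and mask value are made up here, the real bits depend on
the class enum introduced in patch 2.)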


Then when we need to log something:

cat /sys/kernel/tracing/trace_pipe
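
For the flight-recorder case Pekka describes, you wouldn't stream at
all: the tracefs ring buffer overwrites its oldest entries by default,
so the post-mortem readout would be roughly (untested sketch):

echo 0 > /sys/kernel/tracing/tracing_on          # freeze the buffer
cat /sys/kernel/tracing/trace > drm-trace.txt    # attach to the report
echo 1 > /sys/kernel/tracing/tracing_on          # resume tracing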



> > > I assume that is a major use case for this new feature. Is performance
> > > profiling another use case?

Not at present.


> > > 
> > > Is it ok to build userspace to rely on these trace events during normal
> > > operations, e.g. for continuous adjustment of timings/timers?
> > 
> > Yeah I'm kinda freaked out about this. In the old thread we discussed the
> > question of whether dumping dmesg into tracepoints would make them uapi,
> > but no conclusion was reached.
> > 
> > Now instead we're going to duplicate a lot of these debug outputs, with
> > specific trace events, which is going to make them even more uapi. And a
> > lot of these are very much implementation details (drivers might change
> > their opinion on which other states they need for check/commit, e.g. if
> > they move a bunch of global state from crtcs into a new driver private
> > internal object). Explicit tracepoints with symbol value imo will have a
> > much higher chance that someone starts relying on them.
> > 
> > Also, dmesg is uapi too. Except there's a gentlemen's agreement that you
> > should not depend upon it because it will make kernel engineers really pissed
> > if they can't change/move a specific string. So from that point I think
> > just dumping the existing debug strings as unstructured strings into trace
> > buffers is a much better idea. Plus no need to have the same information
> > duplicated in both a debug output and a tracepoint.
> > 
> > I think even slightly structured debug tracepoints (e.g. a simple
> > (drm_object_id, string) pair) would be too structured since then userspace
> > could start watching for events on specific things and maybe use that for
> > their retry loops after TEST_ONLY failures. Even though we have a lot of
> > debug output strings where that bit of structure is essentially there
> > already.
> > 
> > Aside from this a question on process ... I re-read the old thread and I'm
> > not sure how you concluded we've reached consensus on this here, seemed
> > all pretty inconclusive on the details and discussions died out?
> 
> Reading the thread again I guess it was Joonas' comment that tracepoints
> are also uapi, and you concluded we can't do this at all.
> 
> The trouble is, _everything_ that's observable from userspace is uapi. So
> dmesg, timing of the code, ... For a real funky example of this look at
> the ext4 regression, where better code resulted in less entropy generated
> and the system hung at boot-up.
> 
> But on the flip side, it's only uapi we can't change if someone depends
> upon it. For dmesg most people understand that parsing dmesg is bad taste,
> so generally not a big problem. For tracepoints, especially if they come
> with lots of structured uapi, it's a lot more "this is baked in forever".
> But it doesn't have to be, we've changed them quite a few times. But as a
> rule of thumb, the more 

Re: [PATCH 0/6] drm: trace: Introduce drm_trace() and instrument drm_atomic.c

2019-11-08 Thread Pekka Paalanen
On Fri, 8 Nov 2019 09:50:30 +0100
Daniel Vetter  wrote:

> On Fri, Nov 08, 2019 at 10:16:59AM +0200, Pekka Paalanen wrote:
> > Is it ok to build userspace to rely on these trace events during normal
> > operations, e.g. for continuous adjustment of timings/timers?  
> 
> Aside discussion on this: If we add this (I think userspace might want
> some information about the point of no return and why we missed it) then I
> think we should expose that with an improved drm_event and clear
> semantics.
> 
> If we start to encourage compositors to use tracepoints to figure out why
> the kernel doesn't behave (TEST_ONLY failure, or missed flip target), and
> use that for behaviour, we've baked implementation details into the uapi.
> And then we're screwed.
> 
> So if you have any need for timing information that you see solved with a
> tracepoint, pls bring this up so we can add proper uapi. And yes I know
> that if someone doesn't we still need to keep that tracepoint working, but
> with all things uapi the question isn't whether we'll screw up (we will),
> but how often. And less often is massively better :-)

Haha, sorry, that particular question was practically trolling, but
OTOH something I can well imagine someone doing. Trying to fish out
reasons for TEST_ONLY failing is a much better example.

On the third hand, profiling tools probably would depend on some trace
points specifically.

As for the uapi for a DRM flight recorder, I'd be happy to have that in
Weston as my time permits. Alas, it needs two people and I can be only
one: either the reviewer or the author. :-)

I can see two ways to use a DRM flight recorder:
- a file you grab as a whole after the fact, or
- a stream you subscribe to and store the results in your own ring
  buffer

The former is simpler; the latter would allow interleaving DRM and app
notes as they happen rather than trying to rebuild a log from
timestamps afterwards.
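
If I read the tracefs documentation correctly, those two would map to
roughly the following (untested):

# a file you grab as a whole after the fact
cat /sys/kernel/tracing/trace > drm-flight-record.txt

# a stream you subscribe to, feeding your own in-process ring buffer
cat /sys/kernel/tracing/trace_pipe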


Thanks,
pq



Re: [PATCH 0/6] drm: trace: Introduce drm_trace() and instrument drm_atomic.c

2019-11-08 Thread Daniel Vetter
On Fri, Nov 08, 2019 at 09:46:48AM +0100, Daniel Vetter wrote:
> On Fri, Nov 08, 2019 at 10:16:59AM +0200, Pekka Paalanen wrote:
> > On Thu,  7 Nov 2019 16:02:59 -0500
> > Sean Paul  wrote:
> > 
> > > From: Sean Paul 
> > > 
> > > Hey all,
> > > I'm back with another trace events patchset. My first attempt [1] went
> > > better than expected, with enthusiasm for the idea and disdain for the
> > > implementation.
> > > 
> > > As promised, I went through and added proper trace events.
> > > 
> > > Before I get _too_ far, I wanted to post this set to get feedback on the
> > > direction I'm going. I've gone back and forth on whether to introduce a
> > > bunch of trace systems vs using the trace class enum. I've settled on
> > > the trace class enum since it seems more extensible and easier to use in
> > > production than just having the blunt "enable this system", or
> > > the tedious "enable each event I'm interested in individually" methods.
> > > 
> > > So, consider this one an RFC unless we're all in agreement, in which
> > > case we should apply it :)
> > >
> > 
> > Hi Sean,
> > 
> > thanks for working on this. I'm not at all familiar with the trace
> > infrastructure, so I'd like to know how flight recorder type of
> > behaviour (a continuously written fixed size ring buffer) can be
> > achieved. Let us have a display server in userspace which in production
> > hits an unexpected condition with DRM and needs to record information
> > for post-mortem debugging. What should it do to have a trace log after
> > the fact to attach with a bug report?
> 
> Yeah I think this has the uapi smell, so would need the userspace side
> too, once we agree on something.
> 
> > I assume that is a major use case for this new feature. Is performance
> > profiling another use case?
> > 
> > Is it ok to build userspace to rely on these trace events during normal
> > operations, e.g. for continuous adjustment of timings/timers?
> 
> Yeah I'm kinda freaked out about this. In the old thread we discussed the
> question of whether dumping dmesg into tracepoints would make them uapi,
> but no conclusion was reached.
> 
> Now instead we're going to duplicate a lot of these debug outputs, with
> specific trace events, which is going to make them even more uapi. And a
> lot of these are very much implementation details (drivers might change
> their opinion on which other states they need for check/commit, e.g. if
> they move a bunch of global state from crtcs into a new driver private
> internal object). Explicit tracepoints with symbol value imo will have a
> much higher chance that someone starts relying on them.
> 
> Also, dmesg is uapi too. Except there's a gentlemen's agreement that you
> should not depend upon it because it will make kernel engineers really pissed
> if they can't change/move a specific string. So from that point I think
> just dumping the existing debug strings as unstructured strings into trace
> buffers is a much better idea. Plus no need to have the same information
> duplicated in both a debug output and a tracepoint.
> 
> I think even slightly structured debug tracepoints (e.g. a simple
> (drm_object_id, string) pair) would be too structured since then userspace
> could start watching for events on specific things and maybe use that for
> their retry loops after TEST_ONLY failures. Even though we have a lot of
> debug output strings where that bit of structure is essentially there
> already.
> 
> Aside from this a question on process ... I re-read the old thread and I'm
> not sure how you concluded we've reached consensus on this here, seemed
> all pretty inconclusive on the details and discussions died out?

Reading the thread again I guess it was Joonas' comment that tracepoints
are also uapi, and you concluded we can't do this at all.

The trouble is, _everything_ that's observable from userspace is uapi. So
dmesg, timing of the code, ... For a real funky example of this look at
the ext4 regression, where better code resulted in less entropy generated
and the system hung at boot-up.

But on the flip side, it's only uapi we can't change if someone depends
upon it. For dmesg most people understand that parsing dmesg is bad taste,
so generally not a big problem. For tracepoints, especially if they come
with lots of structured uapi, it's a lot more "this is baked in forever".
But it doesn't have to be, we've changed them quite a few times. But as a
rule of thumb, the more structured the data is you present to userspace,
the more likely they'll use it in actual control flow, and the more likely
a change will break userspace in a meaningful way.

Hence for debugging I think least amount of structure is best, plus huge
warnings that you really shouldn't use this for anything else than
debug logs. Anything more that leaks all these implementation details
gives me the chillies. And Pekka's question right off the bat with "can I
use this for tuning timers" kinda proves my point. If it looks like
intentional uapi 

Re: [PATCH 0/6] drm: trace: Introduce drm_trace() and instrument drm_atomic.c

2019-11-08 Thread Daniel Vetter
On Fri, Nov 08, 2019 at 10:16:59AM +0200, Pekka Paalanen wrote:
> Is it ok to build userspace to rely on these trace events during normal
> operations, e.g. for continuous adjustment of timings/timers?

Aside discussion on this: If we add this (I think userspace might want
some information about the point of no return and why we missed it) then I
think we should expose that with an improved drm_event and clear
semantics.

If we start to encourage compositors to use tracepoints to figure out why
the kernel doesn't behave (TEST_ONLY failure, or missed flip target), and
use that for behaviour, we've baked implementation details into the uapi.
And then we're screwed.

So if you have any need for timing information that you see solved with a
tracepoint, pls bring this up so we can add proper uapi. And yes I know
that if someone doesn't we still need to keep that tracepoint working, but
with all things uapi the question isn't whether we'll screw up (we will),
but how often. And less often is massively better :-)
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Re: [PATCH 0/6] drm: trace: Introduce drm_trace() and instrument drm_atomic.c

2019-11-08 Thread Daniel Vetter
On Fri, Nov 08, 2019 at 10:16:59AM +0200, Pekka Paalanen wrote:
> On Thu,  7 Nov 2019 16:02:59 -0500
> Sean Paul  wrote:
> 
> > From: Sean Paul 
> > 
> > Hey all,
> > I'm back with another trace events patchset. My first attempt [1] went
> > better than expected, with enthusiasm for the idea and disdain for the
> > implementation.
> > 
> > As promised, I went through and added proper trace events.
> > 
> > Before I get _too_ far, I wanted to post this set to get feedback on the
> > direction I'm going. I've gone back and forth on whether to introduce a
> > bunch of trace systems vs using the trace class enum. I've settled on
> > the trace class enum since it seems more extensible and easier to use in
> > production than just having the blunt "enable this system", or
> > the tedious "enable each event I'm interested in individually" methods.
> > 
> > So, consider this one an RFC unless we're all in agreement, in which
> > case we should apply it :)
> >
> 
> Hi Sean,
> 
> thanks for working on this. I'm not at all familiar with the trace
> infrastructure, so I'd like to know how flight recorder type of
> behaviour (a continuously written fixed size ring buffer) can be
> achieved. Let us have a display server in userspace which in production
> hits an unexpected condition with DRM and needs to record information
> for post-mortem debugging. What should it do to have a trace log after
> the fact to attach with a bug report?

Yeah I think this has the uapi smell, so would need the userspace side
too, once we agree on something.

> I assume that is a major use case for this new feature. Is performance
> profiling another use case?
> 
> Is it ok to build userspace to rely on these trace events during normal
> operations, e.g. for continuous adjustment of timings/timers?

Yeah I'm kinda freaked out about this. In the old thread we discussed the
question of whether dumping dmesg into tracepoints would make them uapi,
but no conclusion was reached.

Now instead we're going to duplicate a lot of these debug outputs, with
specific trace events, which is going to make them even more uapi. And a
lot of these are very much implementation details (drivers might change
their opinion on which other states they need for check/commit, e.g. if
they move a bunch of global state from crtcs into a new driver private
internal object). Explicit tracepoints with symbol value imo will have a
much higher chance that someone starts relying on them.

Also, dmesg is uapi too. Except there's a gentlemen's agreement that you
should not depend upon it because it will make kernel engineers really pissed
if they can't change/move a specific string. So from that point I think
just dumping the existing debug strings as unstructured strings into trace
buffers is a much better idea. Plus no need to have the same information
duplicated in both a debug output and a tracepoint.

I think even slightly structured debug tracepoints (e.g. a simple
(drm_object_id, string) pair) would be too structured since then userspace
could start watching for events on specific things and maybe use that for
their retry loops after TEST_ONLY failures. Even though we have a lot of
debug output strings where that bit of structure is essentially there
already.

Aside from this a question on process ... I re-read the old thread and I'm
not sure how you concluded we've reached consensus on this here, seemed
all pretty inconclusive on the details and discussions died out?
-Daniel


> 
> 
> Thanks,
> pq
> 
> 
> > 
> > [1]- https://patchwork.freedesktop.org/patch/335350/
> > 
> > Sean Paul (6):
> >   drm: trace: Make the vblank queued/delivered events classed
> >   drm: trace: Introduce drm_trace() and trace event classes
> >   drm: trace: Add trace events for atomic state lifetime
> >   drm: trace: Add crtc state trace events
> >   drm: trace: Add connector state tracing
> >   drm: trace: Add plane state tracing
> > 
> >  Documentation/gpu/drm-internals.rst |   9 +
> >  drivers/gpu/drm/drm_atomic.c         |  61 ++-
> >  drivers/gpu/drm/drm_trace.h          | 563 ++--
> >  drivers/gpu/drm/drm_vblank.c         |   8 +-
> >  4 files changed, 609 insertions(+), 32 deletions(-)
> > 
> 



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Re: [PATCH 0/6] drm: trace: Introduce drm_trace() and instrument drm_atomic.c

2019-11-08 Thread Pekka Paalanen
On Thu,  7 Nov 2019 16:02:59 -0500
Sean Paul  wrote:

> From: Sean Paul 
> 
> Hey all,
> I'm back with another trace events patchset. My first attempt [1] went
> better than expected, with enthusiasm for the idea and disdain for the
> implementation.
> 
> As promised, I went through and added proper trace events.
> 
> Before I get _too_ far, I wanted to post this set to get feedback on the
> direction I'm going. I've gone back and forth on whether to introduce a
> bunch of trace systems vs using the trace class enum. I've settled on
> the trace class enum since it seems more extensible and easier to use in
> production than just having the blunt "enable this system", or
> the tedious "enable each event I'm interested in individually" methods.
> 
> So, consider this one an RFC unless we're all in agreement, in which
> case we should apply it :)
>

Hi Sean,

thanks for working on this. I'm not at all familiar with the trace
infrastructure, so I'd like to know how flight recorder type of
behaviour (a continuously written fixed size ring buffer) can be
achieved. Let us have a display server in userspace which in production
hits an unexpected condition with DRM and needs to record information
for post-mortem debugging. What should it do to have a trace log after
the fact to attach with a bug report?
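
(From a quick skim of Documentation/trace/ftrace.rst I would guess the
recipe is roughly the following, but please correct me; a fixed-size,
continuously overwriting buffer that is only dumped when something goes
wrong:

echo 1024 > /sys/kernel/tracing/buffer_size_kb   # fixed size, per CPU
echo 1 > /sys/kernel/tracing/events/drm/enable
echo 1 > /sys/kernel/tracing/tracing_on
# ...and when the unexpected condition hits, freeze and dump:
echo 0 > /sys/kernel/tracing/tracing_on
cat /sys/kernel/tracing/trace > drm-trace-for-bug-report.txt
)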

I assume that is a major use case for this new feature. Is performance
profiling another use case?

Is it ok to build userspace to rely on these trace events during normal
operations, e.g. for continuous adjustment of timings/timers?


Thanks,
pq


> 
> [1]- https://patchwork.freedesktop.org/patch/335350/
> 
> Sean Paul (6):
>   drm: trace: Make the vblank queued/delivered events classed
>   drm: trace: Introduce drm_trace() and trace event classes
>   drm: trace: Add trace events for atomic state lifetime
>   drm: trace: Add crtc state trace events
>   drm: trace: Add connector state tracing
>   drm: trace: Add plane state tracing
> 
>  Documentation/gpu/drm-internals.rst |   9 +
>  drivers/gpu/drm/drm_atomic.c         |  61 ++-
>  drivers/gpu/drm/drm_trace.h          | 563 ++--
>  drivers/gpu/drm/drm_vblank.c         |   8 +-
>  4 files changed, 609 insertions(+), 32 deletions(-)
> 


