----- On Feb 3, 2018, at 4:08 PM, rostedt rost...@goodmis.org wrote:
> On Sat, 3 Feb 2018 12:52:08 -0800
> Alexei Starovoitov <alexei.starovoi...@gmail.com> wrote:
>> On Sat, Feb 03, 2018 at 02:02:17PM -0500, Steven Rostedt wrote:
>> > From those that were asking about having "trace markers" (ie.
>> > Facebook), they told us they can cope with kernel changes.
>> There is some misunderstanding here.
>> We never asked for this interface.
> But you wanted trace markers? Just to confirm.
>> We're perfectly fine with existing kprobe/tracepoint+bpf.
> OK, so no new development in this was wanted? So the entire talk about
> getting tracepoints into vfs and scheduling wasn't needed?
Over the past months, I have given presentations on the need to add some
tracepoints to the scheduling and IPI code on x86.
IPI instrumentation is needed not only by kernel developers, but also
by tools targeting sysadmins and application developers, to help them
figure out where time is spent when they hit unexpectedly long latencies
on their systems. We can indeed start with function instrumentation to
demonstrate the usefulness of instrumenting this code, but I expect that
we'll end up adding a tracepoint there eventually.
Tracepoints in the scheduler also fall into the category of letting
sysadmins and application developers understand where time is spent on
their systems. When they hit an unexpectedly long latency, they want to
understand what in their task priorities and scheduling policy led to
that delay. The data that can be extracted from the scheduler today is
not sufficient to achieve this. So this is another case where we might
see kernel developers start with function instrumentation, but we'll
probably end up adding and/or changing tracepoints to help the users
out there who need tools to analyze this.