On Mon, Nov 19, 2018 at 10:12:57AM +0100, Jiri Olsa wrote:
> On Mon, Nov 19, 2018 at 02:26:03PM +0900, Namhyung Kim wrote:
> > Hi Jirka
> >
> > Sorry for late!
> >
> > On Tue, Nov 06, 2018 at 12:54:36PM +0100, Jiri Olsa wrote:
> > > On Mon, Nov 05, 2018 at 08:53:42PM -0800, David Miller wrote:
>
On Sun, Nov 18, 2018 at 08:52:43PM -0800, David Miller wrote:
> From: Jiri Olsa
> Date: Tue, 13 Nov 2018 11:40:54 +0100
>
> > I pushed/rebased what I have to perf/fixes branch again
> >
> > please note I had to change our compile changes, because
> > they wouldn't compile on x86, but I can't verify on sparc,
> > so you might see some compile fails again
On Mon, Nov 19, 2018 at 02:26:03PM +0900, Namhyung Kim wrote:
> Hi Jirka
>
> Sorry for late!
>
> On Tue, Nov 06, 2018 at 12:54:36PM +0100, Jiri Olsa wrote:
> > On Mon, Nov 05, 2018 at 08:53:42PM -0800, David Miller wrote:
> > >
> > > Jiri,
> > >
> > > Because you now run queued_events__queue() lockless with that condvar
On Sun, Nov 18, 2018 at 10:33:55PM -0800, David Miller wrote:
> From: Namhyung Kim
> Date: Mon, 19 Nov 2018 15:28:37 +0900
>
> > Hello David,
> >
> > On Sun, Nov 18, 2018 at 08:52:43PM -0800, David Miller wrote:
> >> From: Jiri Olsa
> >> Date: Tue, 13 Nov 2018 11:40:54 +0100
> >>
> >> > I pushed/rebased what I have to perf/fixes branch again
From: Namhyung Kim
Date: Mon, 19 Nov 2018 15:28:37 +0900
> Hello David,
>
> On Sun, Nov 18, 2018 at 08:52:43PM -0800, David Miller wrote:
>> From: Jiri Olsa
>> Date: Tue, 13 Nov 2018 11:40:54 +0100
>>
>> > I pushed/rebased what I have to perf/fixes branch again
>> >
>> > please note I had to change our compile changes, because
>> > they wouldn't compile on x86, but I can't verify on sparc,
>> > so you might see some compile fails again
Hello David,
On Sun, Nov 18, 2018 at 08:52:43PM -0800, David Miller wrote:
> From: Jiri Olsa
> Date: Tue, 13 Nov 2018 11:40:54 +0100
>
> > I pushed/rebased what I have to perf/fixes branch again
> >
> > please note I had to change our compile changes, because
> > they wouldn't compile on x86, but I can't verify on sparc,
> > so you might see some compile fails again
Hi Jirka
Sorry for late!
On Tue, Nov 06, 2018 at 12:54:36PM +0100, Jiri Olsa wrote:
> On Mon, Nov 05, 2018 at 08:53:42PM -0800, David Miller wrote:
> >
> > Jiri,
> >
> > Because you now run queued_events__queue() lockless with that condvar
> > trick, it is possible for top->qe.in to be seen as one past the data[]
> > array, this is because the rotate_queues() code goes:
From: Jiri Olsa
Date: Tue, 13 Nov 2018 11:40:54 +0100
> I pushed/rebased what I have to perf/fixes branch again
>
> please note I had to change our compile changes, because
> they wouldn't compile on x86, but I can't verify on sparc,
> so you might see some compile fails again
I just checked yo
On Sun, Nov 11, 2018 at 03:32:59PM -0800, David Miller wrote:
> From: Jiri Olsa
> Date: Mon, 12 Nov 2018 00:26:27 +0100
>
> > On Sun, Nov 11, 2018 at 03:08:01PM -0800, David Miller wrote:
> >> From: Jiri Olsa
> >> Date: Sun, 11 Nov 2018 20:41:32 +0100
> >>
> >> > On Thu, Nov 08, 2018 at 05:07:21PM -0800, David Miller wrote:
From: Jiri Olsa
Date: Mon, 12 Nov 2018 00:26:27 +0100
> On Sun, Nov 11, 2018 at 03:08:01PM -0800, David Miller wrote:
>> From: Jiri Olsa
>> Date: Sun, 11 Nov 2018 20:41:32 +0100
>>
>> > On Thu, Nov 08, 2018 at 05:07:21PM -0800, David Miller wrote:
>> >> From: Jiri Olsa
>> >> Date: Thu, 8 Nov 2018 08:13:03 +0100
On Sun, Nov 11, 2018 at 03:08:01PM -0800, David Miller wrote:
> From: Jiri Olsa
> Date: Sun, 11 Nov 2018 20:41:32 +0100
>
> > On Thu, Nov 08, 2018 at 05:07:21PM -0800, David Miller wrote:
> >> From: Jiri Olsa
> >> Date: Thu, 8 Nov 2018 08:13:03 +0100
> >>
> >> > we could separated fork/mmaps to separate dummy event map, or just
From: Jiri Olsa
Date: Sun, 11 Nov 2018 20:41:32 +0100
> I added the dropping logic, it's simple so far..
How do you maintain your perf/fixes branch? Do you rebase? :-/
I just pulled after a previous pull and got nothing but conflicts on
every single file.
From: Jiri Olsa
Date: Sun, 11 Nov 2018 20:41:32 +0100
> On Thu, Nov 08, 2018 at 05:07:21PM -0800, David Miller wrote:
>> From: Jiri Olsa
>> Date: Thu, 8 Nov 2018 08:13:03 +0100
>>
>> > we could separated fork/mmaps to separate dummy event map, or just
>> > parse them out in the read thread and create special queue for them
>> > and drop just samples in case we are behind
From: Jiri Olsa
Date: Sun, 11 Nov 2018 23:43:36 +0100
> On Sun, Nov 11, 2018 at 02:32:08PM -0800, David Miller wrote:
>> From: Jiri Olsa
>> Date: Sun, 11 Nov 2018 20:41:32 +0100
>>
>> > I added the dropping logic, it's simple so far..
>>
>> How do you maintain your perf/fixes branch? Do you rebase? :-/
On Sun, Nov 11, 2018 at 02:32:08PM -0800, David Miller wrote:
> From: Jiri Olsa
> Date: Sun, 11 Nov 2018 20:41:32 +0100
>
> > I added the dropping logic, it's simple so far..
>
> How do you maintain your perf/fixes branch? Do you rebase? :-/
>
> I just pulled after a previous pull and got nothing but conflicts on
> every single file.
On Sun, Nov 11, 2018 at 08:41:32PM +0100, Jiri Olsa wrote:
> On Thu, Nov 08, 2018 at 05:07:21PM -0800, David Miller wrote:
> > From: Jiri Olsa
> > Date: Thu, 8 Nov 2018 08:13:03 +0100
> >
> > > we could separated fork/mmaps to separate dummy event map, or just
> > > parse them out in the read thread and create special queue for them
On Thu, Nov 08, 2018 at 05:07:21PM -0800, David Miller wrote:
> From: Jiri Olsa
> Date: Thu, 8 Nov 2018 08:13:03 +0100
>
> > we could separated fork/mmaps to separate dummy event map, or just
> > parse them out in the read thread and create special queue for them
> > and drop just samples in case we are behind
From: Jiri Olsa
Date: Thu, 8 Nov 2018 08:13:03 +0100
> we could separated fork/mmaps to separate dummy event map, or just
> parse them out in the read thread and create special queue for them
> and drop just samples in case we are behind
What you say at the end here is basically what I am proposing
On Wed, Nov 07, 2018 at 12:01:54PM -0800, David Miller wrote:
> From: Jiri Olsa
> Date: Wed, 7 Nov 2018 20:43:44 +0100
>
> > I pushed new version in my perf/fixes branch
>
> Thanks, I'll check it out later today for sure! This is pretty exciting
> work.
>
> Just some random thoughts as I've been thinking about this whole
> situation a lot lately:
From: Arnaldo Carvalho de Melo
Date: Wed, 7 Nov 2018 17:28:15 -0300
> So perhaps we should tell the kernel that is ok to lose SAMPLEs but not
> the other events, and make userspace ask for PERF_RECORD_!SAMPLE in all
> ring buffers? Duplication wouldn't be that much of a problem?
I think we shoul
Em Wed, Nov 07, 2018 at 12:01:54PM -0800, David Miller escreveu:
> From: Jiri Olsa
> Date: Wed, 7 Nov 2018 20:43:44 +0100
>
> > I pushed new version in my perf/fixes branch
>
> Thanks, I'll check it out later today for sure! This is pretty exciting
> work.
>
> Just some random thoughts as I've been thinking about this whole
> situation a lot lately:
From: Jiri Olsa
Date: Wed, 7 Nov 2018 20:43:44 +0100
> I pushed new version in my perf/fixes branch
Thanks, I'll check it out later today for sure! This is pretty exciting
work.
Just some random thoughts as I've been thinking about this whole
situation a lot lately:
Something to consider migh
On Wed, Nov 07, 2018 at 09:32:17AM +0100, Jiri Olsa wrote:
> On Tue, Nov 06, 2018 at 10:13:49PM -0800, David Miller wrote:
> > From: Jiri Olsa
> > Date: Tue, 6 Nov 2018 21:42:55 +0100
> >
> > > I pushed that fix in perf/fixes branch, but I'm still occasionaly
> > > hitting the namespace crash.. working on it ;-)
On Tue, Nov 06, 2018 at 10:13:49PM -0800, David Miller wrote:
> From: Jiri Olsa
> Date: Tue, 6 Nov 2018 21:42:55 +0100
>
> > I pushed that fix in perf/fixes branch, but I'm still occasionaly
> > hitting the namespace crash.. working on it ;-)
>
> Jiri, how can this new scheme work without setting copy_on_queue
> for the queued_events we use here?
From: Jiri Olsa
Date: Tue, 6 Nov 2018 21:42:55 +0100
> I pushed that fix in perf/fixes branch, but I'm still occasionaly
> hitting the namespace crash.. working on it ;-)
Jiri, how can this new scheme work without setting copy_on_queue
for the queued_events we use here?
I don't see copy_on_queue
On Mon, Nov 05, 2018 at 08:53:42PM -0800, David Miller wrote:
>
> Jiri,
>
> Because you now run queued_events__queue() lockless with that condvar
> trick, it is possible for top->qe.in to be seen as one past the data[]
> array, this is because the rotate_queues() code goes:
>
> if (++top->qe.in > &top->qe.data[1])
> top->qe.in = &top->qe.data[0];
On Mon, Nov 05, 2018 at 08:53:42PM -0800, David Miller wrote:
>
> Jiri,
>
> Because you now run queued_events__queue() lockless with that condvar
> trick, it is possible for top->qe.in to be seen as one past the data[]
> array, this is because the rotate_queues() code goes:
>
> if (++top->qe.in > &top->qe.data[1])
> top->qe.in = &top->qe.data[0];
On Mon, Nov 05, 2018 at 07:45:42PM -0800, David Miller wrote:
> From: Jiri Olsa
> Date: Mon, 5 Nov 2018 21:34:47 +0100
>
> > I pushed it in perf/fixes branch in:
> > git://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git
>
> Build fix for sparc below, I'm playing with this now.
>
> perf: Use PRIu64 for printing top lost events count.
Jiri,
Because you now run queued_events__queue() lockless with that condvar
trick, it is possible for top->qe.in to be seen as one past the data[]
array, this is because the rotate_queues() code goes:
if (++top->qe.in > &top->qe.data[1])
top->qe.in = &top->qe.data[0];
S
From: David Miller
Date: Mon, 05 Nov 2018 19:45:42 -0800 (PST)
> Build fix for sparc below, I'm playing with this now.
I get various assertion failures and crashes during make -j128 kernel
builds on my sparc64 box:
perf: Segmentation fault
backtrace
/lib/s
From: Jiri Olsa
Date: Mon, 5 Nov 2018 21:34:47 +0100
> I pushed it in perf/fixes branch in:
> git://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git
Build fix for sparc below, I'm playing with this now.
perf: Use PRIu64 for printing top lost events count.
Signed-off-by: David S. Miller
From: Jiri Olsa
Date: Mon, 5 Nov 2018 21:34:47 +0100
> I put something together.. still testing, but it's already
> showing 0 lost events when for 'overwrite = 0' case even
> for high load.. the old code shows ~1500 for same workload
>
> I'm printing lost event counts in stdio output header:
>
On Sun, Nov 04, 2018 at 04:50:39PM -0800, David Miller wrote:
> From: Jiri Olsa
> Date: Sun, 4 Nov 2018 21:18:21 +0100
>
> > do you have some code I could check on?
>
> All I have is this patch which parallelizes the mmap readers in perf
> top.
I put something together.. still testing, but it's already
showing 0 lost events when for 'overwrite = 0' case even
for high load.. the old code shows ~1500 for same workload
From: Jiri Olsa
Date: Sun, 4 Nov 2018 21:18:21 +0100
> do you have some code I could check on?
All I have is this patch which parallelizes the mmap readers in perf
top.
It's not complete and you need to add proper locking, particularly around
the machine__resolve() call.
diff --git a/tools/per
On Fri, Nov 02, 2018 at 11:30:03PM -0700, David Miller wrote:
> From: David Miller
> Date: Wed, 31 Oct 2018 09:08:16 -0700 (PDT)
>
> > From: Jiri Olsa
> > Date: Wed, 31 Oct 2018 16:39:07 +0100
> >
> >> it'd be great to make hist processing faster, but is your main target here
> >> to get the load out of the reader thread, so we dont lose events during the
> >> hist processing?
From: David Miller
Date: Wed, 31 Oct 2018 09:08:16 -0700 (PDT)
> From: Jiri Olsa
> Date: Wed, 31 Oct 2018 16:39:07 +0100
>
>> it'd be great to make hist processing faster, but is your main target here
>> to get the load out of the reader thread, so we dont lose events during the
>> hist processing?
From: Jiri Olsa
Date: Wed, 31 Oct 2018 16:39:07 +0100
> it'd be great to make hist processing faster, but is your main target here
> to get the load out of the reader thread, so we dont lose events during the
> hist processing?
>
> we could queue events directly from reader thread into another thread
On Wed, Oct 31, 2018 at 09:43:06AM -0300, Arnaldo Carvalho de Melo wrote:
> Em Tue, Oct 30, 2018 at 10:03:28PM -0700, David Miller escreveu:
> >
> > So when a cpu is overpowered processing samples, most of the time is
> > spent in the histogram code.
it'd be great to make hist processing faster, but is your main target here
to get the load out of the reader thread, so we dont lose events during the
hist processing?
Em Tue, Oct 30, 2018 at 10:03:28PM -0700, David Miller escreveu:
>
> So when a cpu is overpowered processing samples, most of the time is
> spent in the histogram code.
>
> It seems we initialize a ~262 byte structure on the stack to do every
> histogram entry lookup.
>
> This is a side effect of how the sorting code is shared with the code
> that does lookups and insertions
So when a cpu is overpowered processing samples, most of the time is
spent in the histogram code.
It seems we initialize a ~262 byte structure on the stack to do every
histogram entry lookup.
This is a side effect of how the sorting code is shared with the code
that does lookups and insertions