On Tue, Jul 10, 2012 at 6:35 AM, Michael Tokarev m...@tls.msk.ru wrote:
> On 03.07.2012 00:25, Andrew Hunter wrote:
>> diff --git a/include/linux/hash.h b/include/linux/hash.h
>> index b80506b..daabc3d 100644
>> --- a/include/linux/hash.h
>> +++ b/include/linux/hash.h
>> @@ -34,7 +34,9 @@
>> static inline u64
by TICK_USEC every iteration */
printf("previous value: %ld %ld %ld %ld\n",
prev.it_interval.tv_sec, prev.it_interval.tv_usec,
prev.it_value.tv_sec, prev.it_value.tv_usec);
setitimer(ITIMER_PROF, &prev, NULL);
}
return 0;
}
Signed-off-by: Andrew Hunter
At Google, we essentially reverted 88ec22d and the subsequent tweaks
to it, keeping vruntime absolute always, instead using
task_move_group_fair to change the basis of relative min_vruntime
between cpus. We found this made it a lot easier to reason about and
work with cross-cpu computations. I
internal configurations but not in
any particularly exotic hardware/config.
From e6bf354c05d98651e8c27f96582f0ab56992e58a Mon Sep 17 00:00:00 2001
From: Andrew Hunter
Date: Tue, 16 Jul 2013 16:50:36 -0700
Subject: [PATCH] x86: avoid per_cpu for APIC id tables
DEFINE_PER_CPU(var) and friends go t
whatever` does not blow up and produces
results that aren't hugely obviously wrong. I'm not sure how to run
particularly good tests of perf code, but this should not produce any
functional change whatsoever.
Signed-off-by: Andrew Hunter
Reviewed-by: Stephane Eranian
---
arch/x86/kernel/cpu
On Thu, Jul 27, 2017 at 11:12 AM, Paul E. McKenney
wrote:
> Hello!
> But my main question is whether the throttling shown below is acceptable
> for your use cases, namely only one expedited sys_membarrier() permitted
> per scheduling-clock period (1 millisecond on many platforms), with any
>
On Thu, Jul 27, 2017 at 12:43 PM, Paul E. McKenney
wrote:
> On Thu, Jul 27, 2017 at 10:20:14PM +0300, Avi Kivity wrote:
>> IPIing only running threads of my process would be perfect. In fact
>> I might even be able to make use of "membarrier these threads
>> please" to reduce IPIs, when I change
On Thu, Jul 27, 2017 at 12:06 PM, Paul E. McKenney
wrote:
> IPIing only those CPUs running threads in the same process as the
> thread invoking membarrier() would be very nice! There is some LKML
> discussion on this topic, which is currently circling around making this
> determination reliable
On Wed, Sep 3, 2014 at 5:06 PM, John Stultz wrote:
> Maybe with the next version of the patch, before you get into the
> unwinding the math, you might practically describe what is broken,
> then explain how it's broken.
>
Done.
> My quick read here is that we're converting a timespec -> jiffies,
this goes up by TICK_USEC every iteration */
printf("previous value: %ld %ld %ld %ld\n",
prev.it_interval.tv_sec, prev.it_interval.tv_usec,
prev.it_value.tv_sec, prev.it_value.tv_usec);
setitimer(ITIMER_PROF, &prev, NULL);
}
return 0;
}
Signed-off-by: Andrew Hunter
On Thu, Sep 4, 2014 at 2:36 PM, Paul Turner wrote:
> On Thu, Sep 4, 2014 at 2:30 PM, John Stultz wrote:
>> This seems to be a quite old bug.. Do you think this is needed for -stable?
>
> Seems reasonable to me.
>
I have no opinion: backport or don't at your preference.
On Fri, May 22, 2015 at 1:53 PM, Andy Lutomirski wrote:
> Create an array of user-managed locks, one per cpu. Call them lock[i]
> for 0 <= i < ncpus.
>
> To acquire, look up your CPU number. Then, atomically, check that
> lock[cpu] isn't held and, if so, mark it held and record both your tid
>
On Wed, Jan 27, 2016 at 9:36 AM, Mathieu Desnoyers
wrote:
> - On Jan 27, 2016, at 12:24 PM, Thomas Gleixner t...@linutronix.de wrote:
>
>> On Wed, 27 Jan 2016, Josh Triplett wrote:
>>> With the dynamic allocation removed, this seems sensible to me. One
>>> minor nit: s/int32_t/uint32_t/g,
Commit-ID: 43b4578071c0e6d87761e113e05d45776cc75437
Gitweb: http://git.kernel.org/tip/43b4578071c0e6d87761e113e05d45776cc75437
Author: Andrew Hunter a...@google.com
AuthorDate: Thu, 23 May 2013 11:07:03 -0700
Committer: Ingo Molnar mi...@kernel.org
CommitDate: Wed, 19 Jun 2013 12:50:44 +0200
perf/x86: Reduce stack
28 matches