On 2019/7/3 19:58, Jan Glauber wrote:
> Hi Alex,
>
> I've tried this series on arm64 (ThunderX2 with up to SMT=4 and 224 CPUs)
> with the borderline testcase of accessing a single file from all threads.
> With that testcase the qspinlock slowpath is the top spot in the kernel.
>
> The results look really promising:
>
> CPUs  normal  numa-qspinlock
On Apr 1, 2019, at 5:09 AM, Peter Zijlstra wrote:
> On Fri, Mar 29, 2019 at 11:20:01AM -0400, Alex Kogan wrote:
>> The following locktorture results are from an Oracle X5-4 server
>> (four Intel Xeon E7-8895 v3 @ 2.60GHz sockets with 18 hyperthreaded
>> cores each).
>
> The other interesting number is on a !NUMA machine. What do these
> patches do there?
This version addresses feedback from Peter and Waiman. In particular,
the CNA functionality has been moved to a separate file, and is controlled
by a config option (enabled by default if NUMA is enabled).
An optimization has been introduced to reduce the overhead of shuffling
threads between waiting queues.