On May 31, 2012, at 02:26, Sergey Koposov wrote:
> On Thu, 31 May 2012, Florian Pflug wrote:
>> Wait, so performance *increased* by spreading the backends out over as many 
>> dies as possible, not by using as few as possible? That'd be exactly the 
>> opposite of what I'd have expected. (I'm assuming that cores on one die have 
>> ascending ids on linux. If you could post the contents of /proc/cpuinfo, we 
>> could verify that)
> 
> Yes, you are correct. And I can also confirm that the cpus in cpuinfo are 
> ordered by "physical id", i.e. they go like
> 0 0 0 0 0 0 1 1 1 1 1 1 2 2 2 2 2 2 3 3 3 3 3 3
> 
> I did a specific test with just 6 threads (== the number of cores per cpu)
> and ran it on a single phys cpu; it took ~12 seconds per thread. When I 
> spread it across 4 cpus it took 7-9 seconds per thread. But all these 
> numbers are still significantly better than when I didn't use taskset.

Hm. The only resource that is shared between different cores on a die is 
usually the last cache level (L2 or L3). So by spreading the backends out over 
more dies, you're increasing the total amount of cache available to them. Maybe 
that could explain the behavior you're seeing.

> Which probably means without it the processes were jumping from core to 
> core? …

Seems so. I completely fail to understand why they would, though. Since you've 
got far fewer runnable processes than cores, why would the kernel ever see a 
reason to migrate a process from one core to another? I believe you can query 
the core a process is currently running on from one of the files in 
/proc/<pid>. You could try gathering samples every 50 ms or so during the test 
run - maybe that could shed some additional light on this.
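In case it helps, a quick Python sketch of that sampling loop - it assumes the 
Linux /proc/<pid>/stat layout, where field 39 ("processor") is the CPU the 
task last ran on; pass the backend's pid as the only argument:

    import sys
    import time

    def current_cpu(pid):
        # comm (field 2) may contain spaces but is parenthesised, so split
        # after the closing paren; the remaining fields start at field 3.
        with open("/proc/%d/stat" % pid) as f:
            rest = f.read().rsplit(")", 1)[1].split()
        return int(rest[36])   # field 39 overall = index 36 within rest

    pid = int(sys.argv[1])
    while True:
        try:
            print("%.3f cpu=%d" % (time.time(), current_cpu(pid)))
        except IOError:        # process has exited
            break
        time.sleep(0.05)       # ~50 ms between samples

Correlating those timestamps with the per-thread runtimes should show whether 
the scheduler really is migrating the backends around.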

best regards,
Florian Pflug


