On Fri, Dec 6, 2013 at 4:53 PM, Peter Geoghegan <p...@heroku.com> wrote:
> I had considered that something like Intel Speedstep technology had a
> role here, but I'm pretty sure it steps up very aggressively when
> things are CPU bound - I tested that against a Core 2 Duo desktop a
> couple of years back, where it was easy to immediately provoke it by
> moving around desktop windows or something.

I decided to change the CPU governor from its default of "ondemand" to
"performance" for each of the 8 logical cores on this system. I then
re-ran the benchmark. I saw markedly better, much more *consistent*
performance for master [1].
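
For anyone who wants to reproduce this, here is a minimal sketch of
what that change amounts to (assuming the standard sysfs cpufreq
layout, run as root; the cpupower/cpufreq-set utilities do the same
thing):

    import glob

    # Write the "performance" governor to every logical CPU's sysfs node.
    # Assumes the usual /sys/devices/system/cpu/cpuN/cpufreq layout.
    GOVERNOR = "performance"

    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
        with open(path, "w") as f:
            f.write(GOVERNOR)
        print("%s <- %s" % (path, GOVERNOR))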

I Googled for clues, and found this:

https://communities.intel.com/community/datastack/blog/2013/08/05/how-to-maximise-cpu-performance-for-the-oracle-database-on-linux

(It happens to mention Oracle, but I think it applies equally well to
any database.) I strongly suspect this is down to kernel version. I
should highlight this:

"""
Another further CPU setting is the Energy/Performance Bias and Red Hat
and Oracle users should note that the default setting has changed in
the Linux kernel used between the releases of Red Hat/Oracle Linux 5
and Red Hat/Oracle Linux 6. (Some system BIOS options may include a
setting to prevent the OS changing this value). In release 5 Linux did
not set a value for this setting and therefore the value remained at 0
for a bias towards performance. In Red Hat 6 this behaviour has
changed and the default sets a median range to move this bias more
towards conserving energy (remember the same Linux kernel is present
in both ultrabooks as well as  servers and on my ultrabook I use
powertop and the other Linux tools and configurations discussed here
to maximise battery life) and reports the following in the dmesg
output on boot.

...

You can also use the tool to set a lower value to change the bias
entirely towards performance (the default release 5 behaviour).
"""

If there is a regression in Postgres performance on more recent Linux
kernels [2], perhaps this is it. I certainly don't recall hearing
advice on this from the usual places. I'm surprised that turbo boost
mode wasn't engaged very quickly on a workload like this. It makes a
*huge* difference - at 4 clients (one per physical core), setting the
CPU governor to "performance" increases TPS by a massive 40% compared
to some earlier, comparable runs of master. These days Red Hat is even
pointing out that CPU governor policy can be set via cron jobs [3].
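
Watching per-core clock speeds while pgbench runs should make the
difference easy to see. A quick sketch (assuming the usual sysfs
cpufreq files, which report kHz):

    import glob
    import time

    # Sample each logical CPU's current frequency once a second.
    paths = sorted(glob.glob(
        "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"))

    while True:
        freqs = [int(open(p).read()) for p in paths]
        print(" ".join("%.2fGHz" % (f / 1e6) for f in freqs))
        time.sleep(1)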

I cannot account for why the original benchmark I performed was
consistent with the patch having helped to such a large degree, given
the large number of runs involved and their relatively long duration
for a CPU/memory-bound workload. As I said, this machine is dedicated
hardware, and virtualization was not used. However, at this point I
have no choice but to withdraw the patch from consideration and not
pursue it any further. Sorry for the noise.

[1] http://postgres-benchmarks.s3-website-us-east-1.amazonaws.com/turbo/

[2] http://www.postgresql.org/message-id/529f7d58.1060...@agliodbs.com

[3] https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Power_Management_Guide/cpufreq_governors.html
-- 
Peter Geoghegan

