On Sat, 27 Nov 2010 14:27:12 -0500
Tom Lane t...@sss.pgh.pa.us wrote:
And the bottom line is: if there's any performance benefit at all,
it's on the order of 1%. The best result I got was about 3200 TPS
with hugepages, and about 3160 without. The noise in these numbers
is more than 1%.

Jonathan Corbet cor...@lwn.net writes:
Just a quick note: I can't hazard a guess as to why you're not getting
better results than you are, but I *can* say that putting together a
production-quality patch may not be worth your effort regardless. There
is a nice transparent hugepages patch set

On Mon, Nov 29, 2010 at 10:30 AM, Jonathan Corbet cor...@lwn.net wrote:
On Sat, 27 Nov 2010 14:27:12 -0500
Tom Lane t...@sss.pgh.pa.us wrote:
And the bottom line is: if there's any performance benefit at all,
it's on the order of 1%. The best result I got was about 3200 TPS
with hugepages,

On Sat, 2010-11-27 at 14:27 -0500, Tom Lane wrote:
This is discouraging; it certainly doesn't make me want to expend the
effort to develop a production patch.
Perhaps.
Why do this only for shared memory? Surely the majority of memory
accesses are to private memory, so being able to allocate

Simon Riggs si...@2ndquadrant.com writes:
On Sat, 2010-11-27 at 14:27 -0500, Tom Lane wrote:
This is discouraging; it certainly doesn't make me want to expend the
effort to develop a production patch.
Perhaps.
Why do this only for shared memory?
There's no exposed API for causing a

On Sun, 2010-11-28 at 12:04 -0500, Tom Lane wrote:
Simon Riggs si...@2ndquadrant.com writes:
On Sat, 2010-11-27 at 14:27 -0500, Tom Lane wrote:
This is discouraging; it certainly doesn't make me want to expend the
effort to develop a production patch.
Perhaps.
Why do this only for

Simon Riggs si...@2ndquadrant.com writes:
On Sun, 2010-11-28 at 12:04 -0500, Tom Lane wrote:
There's no exposed API for causing a process's regular memory to become
hugepages.
We could make all the palloc stuff into shared memory also (private
shared memory that is). We're not likely to run

On Sun, Nov 28, 2010 at 02:32:04PM -0500, Tom Lane wrote:
Sure, but 4MB of memory is enough to require 1000 TLB entries, which is
more than enough to blow the TLB even on a Nehalem.
That can't possibly be right. I'm sure the chip designers have heard of
programs using more than 4MB.
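The arithmetic behind the 4MB figure is easy to check: with 4KB base pages a 4MB region spans roughly a thousand pages, each needing its own TLB entry, while 2MB huge pages cover the same region with two entries. A quick illustrative sketch (not from the patch under discussion):

```python
KB = 1024
base_page = 4 * KB        # x86-64 base page size
huge_page = 2 * KB * KB   # common x86-64 hugepage size (2MB)
region = 4 * KB * KB      # the 4MB region from the quote

# TLB entries needed to map the whole region at each page size
print(region // base_page)  # 1024 with 4KB pages
print(region // huge_page)  # 2 with 2MB hugepages
```

Whether ~1000 entries actually "blows the TLB" depends on the CPU's TLB geometry, which is the point being argued in the snippet above.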

On Sat, Nov 27, 2010 at 02:27:12PM -0500, Tom Lane wrote:
We've gotten a few inquiries about whether Postgres can use huge pages
under Linux. In principle that should be more efficient for large shmem
regions, since fewer TLB entries are needed to support the address
space. I spent a bit of

Kenneth Marshall k...@rice.edu writes:
On Sat, Nov 27, 2010 at 02:27:12PM -0500, Tom Lane wrote:
... A bigger problem is that the shmem request size must be a
multiple of the system's hugepage size, which is *not* a constant
even though the test patch just uses 2MB as the assumed value. For a
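Since the hugepage size is not a constant across systems, one way to discover it at runtime instead of assuming 2MB is to read the Hugepagesize line from /proc/meminfo. The parser below is a hypothetical sketch of that approach, not code from the test patch:

```python
def parse_hugepage_size(meminfo_text):
    """Return the hugepage size in bytes from /proc/meminfo text, or None.

    The relevant line looks like: "Hugepagesize:       2048 kB".
    """
    for line in meminfo_text.splitlines():
        if line.startswith("Hugepagesize:"):
            return int(line.split()[1]) * 1024
    return None

# On a live Linux system one would feed it the real file:
# with open("/proc/meminfo") as f:
#     size = parse_hugepage_size(f.read())

sample = "MemTotal:  8000000 kB\nHugepagesize:       2048 kB\n"
print(parse_hugepage_size(sample))  # 2097152, i.e. 2MB
```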

On Mon, Nov 29, 2010 at 12:12 AM, Tom Lane t...@sss.pgh.pa.us wrote:
I would expect that you can just iterate through the size possibilities
pretty quickly and just use the first one that works -- no /proc
groveling.
It's not really that easy, because (at least on the kernel version I
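The "iterate through the size possibilities" idea amounts to attempting a trial allocation at each plausible hugepage size, largest first, and keeping the first that succeeds. In this sketch, try_alloc stands in for a real shmget(..., SHM_HUGETLB) attempt; the objection quoted above is that in practice this is not as easy as it sounds:

```python
def pick_hugepage_size(try_alloc,
                       candidates=(1 << 30, 16 << 20, 4 << 20, 2 << 20)):
    """Return the first candidate size for which a trial allocation succeeds.

    try_alloc(size) should attempt a hugepage allocation of that size and
    report success; candidates cover 1GB, 16MB, 4MB, and 2MB hugepages.
    """
    for size in candidates:
        if try_alloc(size):
            return size
    return None

# Pretend the kernel only supports 2MB hugepages:
print(pick_hugepage_size(lambda s: s == 2 << 20))  # 2097152
```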

Greg Stark gsst...@mit.edu writes:
On Mon, Nov 29, 2010 at 12:12 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Really you do want to scrape the value.
Couldn't we just round the shared memory allocation down to a multiple
of 4MB? That would handle all older architectures where the size is
2MB or
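The rounding suggestion amounts to trimming the shared memory request down to a multiple of an assumed granularity. A minimal sketch, using the 4MB figure from the quote (on architectures with larger hugepage sizes this assumption breaks, which is the caveat being raised):

```python
def round_down(request_bytes, granule=4 << 20):
    """Round a shared memory request down to a multiple of the granule (4MB)."""
    return (request_bytes // granule) * granule

# 137MB + change is not a 4MB multiple; it rounds down to 136MB.
print(round_down(137 * 1024 * 1024 + 12345))  # 142606336 (136MB)
```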

We've gotten a few inquiries about whether Postgres can use huge pages
under Linux. In principle that should be more efficient for large shmem
regions, since fewer TLB entries are needed to support the address
space. I spent a bit of time today looking into what that would take.
My testing was

On Sat, Nov 27, 2010 at 2:27 PM, Tom Lane t...@sss.pgh.pa.us wrote:
For testing purposes, I figured that what I wanted to stress was
postgres process swapping and shmem access. I built current git HEAD
with --enable-debug and no other options, and tested with these
non-default settings:

14 matches