Robert,

> Actually, I'd walk through fire for a 10% performance improvement if
> it meant only a *risk* to stability.

Depends on the degree of risk.  MMAP has the potential to introduce instability 
into areas of the code that have been completely reliable for years.  Adding 
twenty new coredump-and-data-loss cases in exchange for a 10% improvement seems 
like a poor bargain to me.  It doesn't help that the only database to rely 
heavily on MMAP (MongoDB) is the open-source database world's paragon of data 
loss.
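
To make the failure mode concrete (an illustrative sketch, nothing to do with 
the patch itself; the file name and page size are placeholders): with read(), 
a failed disk read comes back as an errno we can report and handle, while with 
mmap the same failure is delivered as SIGBUS at whatever instruction happened 
to touch the page.  That's exactly the kind of brand-new coredump case I mean.

/* read() vs. mmap error delivery (sketch; placeholder file name) */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PAGE_SZ 8192

int
main(void)
{
	char	buf[PAGE_SZ];
	int		fd = open("relation.data", O_RDONLY);

	/* read() path: an I/O error is just a return code */
	if (pread(fd, buf, PAGE_SZ, 0) < 0)
		fprintf(stderr, "read failed: %s\n", strerror(errno));

	/* mmap path: the same I/O error becomes SIGBUS on first touch */
	char	   *p = mmap(NULL, PAGE_SZ, PROT_READ, MAP_SHARED, fd, 0);

	if (p != MAP_FAILED)
	{
		volatile char c = p[0];		/* may coredump instead of failing */

		(void) c;
		munmap(p, PAGE_SZ);
	}
	close(fd);
	return 0;
}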

However, in the case where the database is larger than RAM ... or, better, 
around 90% of RAM ... MMAP has the theoretical potential to improve performance 
by quite a bit more than 10% ... try up to 900% on some queries.  I'd like to 
prove that in a test, though, before we bother debating the fundamental 
obstacles to using MMAP.  It's possible that these theoretical benefits won't 
materialize even before any data-safety safeguards add their overhead.
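
The kind of test I have in mind is nothing fancier than this (file name and 
sizes are placeholders, and it only probes the raw I/O side, not the locking 
issues below): scan a file larger than RAM once through pread() and once 
through a mapping, and compare wall-clock time.

/* sequential-scan micro-benchmark sketch (placeholder file name) */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

#define PAGE_SZ 8192

static double
now(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int
main(void)
{
	int			fd = open("bigtable.data", O_RDONLY);
	struct stat st;
	char		buf[PAGE_SZ];
	double		t0;

	fstat(fd, &st);

	/* pass 1: pread() into a private buffer, one page at a time */
	t0 = now();
	for (off_t off = 0; off < st.st_size; off += PAGE_SZ)
		pread(fd, buf, PAGE_SZ, off);
	printf("pread scan: %.2fs\n", now() - t0);

	/* pass 2: touch one byte per page through the mapping */
	char	   *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	volatile char sink = 0;

	t0 = now();
	for (off_t off = 0; off < st.st_size; off += PAGE_SZ)
		sink += p[off];
	printf("mmap scan:  %.2fs\n", now() - t0);

	munmap(p, st.st_size);
	close(fd);
	return 0;
}

Both passes need cold caches (drop_caches between runs), or the second pass 
just measures the page cache the first pass populated.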
 
> The problem is that this is
> likely unfixably broken. In particular, I think the first sentence of
> Tom's response hit it right on the nose, and mirrors my own thoughts
> on the subject. To have any chance of working, you'd need to track
> buffer pins and shared/exclusive content locks for the pages that were
> being accessed outside of shared buffers; otherwise someone might be
> looking at a stale copy of the page.

Nothing is unfixable.  The question is whether the fix is worth the cost.  Let 
me see if I can build a source tree with Radislaw's patch applied, and run some 
real performance tests.
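
For reference, here's (roughly) the protocol every reader goes through today, 
using the standard bufmgr calls; these are the pins and content locks Robert 
is talking about (simplified, error handling omitted):

#include "postgres.h"
#include "storage/bufmgr.h"
#include "storage/bufpage.h"
#include "utils/rel.h"

static void
examine_block(Relation rel, BlockNumber blkno)
{
	Buffer		buf;
	Page		page;

	buf = ReadBuffer(rel, blkno);			/* pin: no eviction under us */
	LockBuffer(buf, BUFFER_LOCK_SHARE);		/* shared content lock */
	page = BufferGetPage(buf);				/* now safe to examine */
	/* ... read tuples on 'page' ... */
	LockBuffer(buf, BUFFER_LOCK_UNLOCK);
	ReleaseBuffer(buf);
}

A reader that goes straight to the mapped file skips both the pin and the 
content lock, which is exactly the stale-copy hazard above.  "Fixing" it means 
reimposing some equivalent protocol on mapped pages, and the cost of doing 
that is part of what the test needs to measure.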

I, for one, am glad he did this work.  We've discussed using MMAP in the code 
off and on for years, but nobody wanted to do the work to test it.  Now someone 
has, and we can decide whether it's worth pursuing based on real numbers.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
San Francisco
