On Tue, Jul 3, 2012 at 1:46 PM, Josh Kupershmidt schmi...@gmail.com wrote:
On Tue, Jul 3, 2012 at 6:57 AM, Robert Haas robertmh...@gmail.com wrote:
Here's a patch that attempts to begin the work of adjusting the
documentation for this brave new world. I am guessing that there may
be other
On Thu, Jun 28, 2012 at 11:26 AM, Robert Haas robertmh...@gmail.com wrote:
Assuming things go well, there are a number of follow-on things that
we need to do finish this up:
1. Update the documentation. I skipped this for now, because I think
that what we write there is going to be heavily
On Wednesday, June 27, 2012 05:28:14 AM Robert Haas wrote:
On Tue, Jun 26, 2012 at 6:25 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Josh Berkus j...@agliodbs.com writes:
So let's fix the 80% case with something we feel confident in, and then
revisit the no-sysv interlock as a separate patch.
Andres Freund and...@2ndquadrant.com writes:
Btw, RhodiumToad/Andrew Gierth on irc talked about a reason why sysv shared
memory might be advantageous on some platforms. E.g. on freebsd there is the
kern.ipc.shm_use_phys setting which prevents paging out shared memory and also seems to
On Tue, Jul 3, 2012 at 11:36 AM, Andres Freund and...@2ndquadrant.com wrote:
Btw, RhodiumToad/Andrew Gierth on irc talked about a reason why sysv shared
memory might be advantageous on some platforms. E.g. on freebsd there is the
kern.ipc.shm_use_phys setting which prevents paging out shared
On Tue, Jul 3, 2012 at 5:36 PM, Andres Freund and...@2ndquadrant.com wrote:
On Wednesday, June 27, 2012 05:28:14 AM Robert Haas wrote:
On Tue, Jun 26, 2012 at 6:25 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Josh Berkus j...@agliodbs.com writes:
So let's fix the 80% case with something we feel
On Tuesday, July 03, 2012 05:41:09 PM Tom Lane wrote:
Andres Freund and...@2ndquadrant.com writes:
Btw, RhodiumToad/Andrew Gierth on irc talked about a reason why sysv
shared memory might be advantageous on some platforms. E.g. on freebsd
there is the kern.ipc.shm_use_phys setting which
Andres Freund and...@2ndquadrant.com writes:
On Tuesday, July 03, 2012 05:41:09 PM Tom Lane wrote:
I'd really rather not. If we're going to go in this direction, we
should just go there.
I don't really care, just wanted to bring up that at least one experienced
user would be disappointed
On Tue, Jul 3, 2012 at 6:57 AM, Robert Haas robertmh...@gmail.com wrote:
Here's a patch that attempts to begin the work of adjusting the
documentation for this brave new world. I am guessing that there may
be other places in the documentation that also require updating, and
this page probably
On Fri, Jun 29, 2012 at 04:03:40PM -0700, Daniel Farina wrote:
On Fri, Jun 29, 2012 at 1:00 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Fri, Jun 29, 2012 at 2:52 PM, Andres Freund and...@2ndquadrant.com
wrote:
Hi All,
In a *very* quick patch I tested using huge pages/MAP_HUGETLB
On Fri, Jun 29, 2012 at 2:31 PM, Josh Berkus j...@agliodbs.com wrote:
My idea of not dedicated is I can launch a dozen postmasters on this
machine, and other services too, and it'll be okay as long as they're
not doing too much.
Oh, 128MB then?
Proposed patch attached.
--
Robert Haas
According to the Google, there is absolutely no way of getting MacOS X
not to overcommit like crazy.
Well, this is one of a long list of broken things about OSX. If you
want to see *real* breakage, do some IO performance testing of HFS+
FWIW, I have this issue with Mac desktop applications
Josh Berkus j...@agliodbs.com writes:
The other thing which will avoid the problem for most Mac users is if we
simply allocate 10% of RAM at initdb as a default. If we do that, then
90% of users will never touch Shmem themselves, and not have the
opportunity to mess up.
If we could do that
Tom,
If we could do that on *all* platforms, I might be for it, but we only
know how to get that number on some platforms.
I don't see what's wrong with using it where we can get it, and not
using it where we can't.
There's also the issue
of whether we really want to assume that the
Josh Berkus j...@agliodbs.com writes:
If we could do that on *all* platforms, I might be for it, but we only
know how to get that number on some platforms.
I don't see what's wrong with using it where we can get it, and not
using it where we can't.
Because then we still need to define, and
10% isn't assuming dedicated.
Really?
Yes. As I said, the allocation for dedicated PostgreSQL servers is
usually 20% to 25%, up to 8GB.
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to
Josh Berkus j...@agliodbs.com writes:
10% isn't assuming dedicated.
Really?
Yes. As I said, the allocation for dedicated PostgreSQL servers is
usually 20% to 25%, up to 8GB.
Any percentage is assuming dedicated, IMO. 25% might be the more common
number, but you're still assuming that you
My idea of not dedicated is I can launch a dozen postmasters on this
machine, and other services too, and it'll be okay as long as they're
not doing too much.
Oh, 128MB then?
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
Hi All,
In a *very* quick patch I tested using huge pages/MAP_HUGETLB for the mmap'ed
memory.
That gives around 9.5% performance benefit in a read-only pgbench run (-n -S -j 64 -c 64 -T 10 -M prepared, scale 200, 6GB s_b, 8 cores, 24GB mem).
It also saves a bunch of memory per process due to
On Fri, Jun 29, 2012 at 2:52 PM, Andres Freund and...@2ndquadrant.com wrote:
Hi All,
In a *very* quick patch I tested using huge pages/MAP_HUGETLB for the mmap'ed
memory.
That gives around 9.5% performance benefit in a read-only pgbench run (-n -S -j 64 -c 64 -T 10 -M prepared, scale 200,
On Fri, Jun 29, 2012 at 1:00 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Fri, Jun 29, 2012 at 2:52 PM, Andres Freund and...@2ndquadrant.com wrote:
Hi All,
In a *very* quick patch I tested using huge pages/MAP_HUGETLB for the mmap'ed
memory.
That gives around 9.5% performance benefit in a
On Thu, Jun 28, 2012 at 7:00 AM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Jun 27, 2012 at 9:44 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Wed, Jun 27, 2012 at 12:00 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Would Posix shmem help with that at all?
On Thu, Jun 28, 2012 at 7:05 AM, Magnus Hagander mag...@hagander.net wrote:
Do we really need a runtime check for that? Isn't a configure check
enough? If they *do* deploy postgresql 9.3 on something that old,
they're building from source anyway...
[...]
Could we actually turn *that* into a
On Thu, Jun 28, 2012 at 6:05 AM, Magnus Hagander mag...@hagander.net wrote:
On Thu, Jun 28, 2012 at 7:00 AM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Jun 27, 2012 at 9:44 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Wed, Jun 27, 2012 at 12:00 AM,
On Thu, Jun 28, 2012 at 9:47 AM, Jon Nelson jnelson+pg...@jamponi.net wrote:
Why not just mmap /dev/zero (MAP_SHARED but not MAP_ANONYMOUS)? I
seem to think that's what I did when I needed this functionality oh so
many moons ago.
From the reading I've done on this topic, that seems to be a
Magnus Hagander mag...@hagander.net writes:
On Thu, Jun 28, 2012 at 7:00 AM, Robert Haas robertmh...@gmail.com wrote:
A related question is - if we do this - should we enable it only on
ports where we've verified that it works, or should we just turn it on
everywhere and fix breakage if/when
On Thu, Jun 28, 2012 at 8:57 AM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jun 28, 2012 at 9:47 AM, Jon Nelson jnelson+pg...@jamponi.net wrote:
Why not just mmap /dev/zero (MAP_SHARED but not MAP_ANONYMOUS)? I
seem to think that's what I did when I needed this functionality oh so
many
... btw, I rather imagine that Robert has already noticed this, but OS X
(and presumably other BSDen) spells the flag MAP_ANON not
MAP_ANONYMOUS. I also find this rather interesting flag there:
MAP_HASSEMAPHORE Notify the kernel that the region may contain semaphores
On Thu, Jun 28, 2012 at 10:11 AM, Tom Lane t...@sss.pgh.pa.us wrote:
... btw, I rather imagine that Robert has already noticed this, but OS X
(and presumably other BSDen) spells the flag MAP_ANON not
MAP_ANONYMOUS. I also find this rather interesting flag there:
MAP_HASSEMAPHORE Notify
On 28 June 2012 16:26, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jun 28, 2012 at 10:11 AM, Tom Lane t...@sss.pgh.pa.us wrote:
... btw, I rather imagine that Robert has already noticed this, but OS X
(and presumably other BSDen) spells the flag MAP_ANON not
MAP_ANONYMOUS. I also find
On Thu, Jun 28, 2012 at 8:26 AM, Robert Haas robertmh...@gmail.com wrote:
3. Consider adjusting the logic inside initdb. If this works
everywhere, the code for determining how to set shared_buffers should
become pretty much irrelevant. Even if it only works some places, we
could add 64MB or
On Thu, Jun 28, 2012 at 12:13 PM, Thom Brown t...@linux.com wrote:
On 64-bit Linux, if I allocate more shared buffers than the system is
capable of reserving, it doesn't start. This is expected, but there's
no error logged anywhere (actually, nothing logged at all), and the
postmaster.pid
On Thu, Jun 28, 2012 at 7:15 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jun 28, 2012 at 12:13 PM, Thom Brown t...@linux.com wrote:
On 64-bit Linux, if I allocate more shared buffers than the system is
capable of reserving, it doesn't start. This is expected, but there's
no error
On Thursday, June 28, 2012 07:19:46 PM Magnus Hagander wrote:
On Thu, Jun 28, 2012 at 7:15 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jun 28, 2012 at 12:13 PM, Thom Brown t...@linux.com wrote:
On 64-bit Linux, if I allocate more shared buffers than the system is
capable of
On Thu, Jun 28, 2012 at 7:27 PM, Andres Freund and...@2ndquadrant.com wrote:
On Thursday, June 28, 2012 07:19:46 PM Magnus Hagander wrote:
On Thu, Jun 28, 2012 at 7:15 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jun 28, 2012 at 12:13 PM, Thom Brown t...@linux.com wrote:
On 64-bit
Magnus Hagander mag...@hagander.net writes:
On Thu, Jun 28, 2012 at 7:27 PM, Andres Freund and...@2ndquadrant.com wrote:
On Thursday, June 28, 2012 07:19:46 PM Magnus Hagander wrote:
What happens if you mlock() it into memory - does that fail quickly?
Is that not something we might want to do
On Thursday, June 28, 2012 07:43:16 PM Tom Lane wrote:
Magnus Hagander mag...@hagander.net writes:
On Thu, Jun 28, 2012 at 7:27 PM, Andres Freund and...@2ndquadrant.com
wrote:
On Thursday, June 28, 2012 07:19:46 PM Magnus Hagander wrote:
What happens if you mlock() it into memory - does
Andres Freund and...@2ndquadrant.com writes:
On Thursday, June 28, 2012 07:43:16 PM Tom Lane wrote:
I think it *would* be a good idea to mlock if we could. Setting shmem
large enough that it swaps has always been horrible for performance,
and in sysv-land there's no way to prevent that. But
On Thursday, June 28, 2012 08:00:06 PM Tom Lane wrote:
Andres Freund and...@2ndquadrant.com writes:
On Thursday, June 28, 2012 07:43:16 PM Tom Lane wrote:
I think it *would* be a good idea to mlock if we could. Setting shmem
large enough that it swaps has always been horrible for
Andres Freund and...@2ndquadrant.com writes:
On Thursday, June 28, 2012 08:00:06 PM Tom Lane wrote:
Well, the permissions angle is actually a good thing here. There is
pretty much no risk of the mlock succeeding on a box that hasn't been
specially configured --- and, in most cases, I think
On Thu, Jun 28, 2012 at 1:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Magnus Hagander mag...@hagander.net writes:
On Thu, Jun 28, 2012 at 7:27 PM, Andres Freund and...@2ndquadrant.com
wrote:
On Thursday, June 28, 2012 07:19:46 PM Magnus Hagander wrote:
What happens if you mlock() it into
Robert Haas robertmh...@gmail.com writes:
I tried this. At least on my fairly vanilla MacOS X desktop, an mlock
for a larger amount of memory than was conveniently on hand (4GB, on a
4GB box) neither succeeded nor failed in a timely fashion but instead
progressively hung the machine,
On Thu, Jun 28, 2012 at 2:51 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
I tried this. At least on my fairly vanilla MacOS X desktop, an mlock
for a larger amount of memory than was conveniently on hand (4GB, on a
4GB box) neither succeeded nor failed in a
On Wed, Jun 27, 2012 at 12:00 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
So, here's a patch. Instead of using POSIX shmem, I just took the
expedient of using mmap() to map a block of MAP_SHARED|MAP_ANONYMOUS
memory. The sysv shm is still allocated, but
On Wed, Jun 27, 2012 at 3:50 AM, Tom Lane t...@sss.pgh.pa.us wrote:
A.M. age...@themactionfaction.com writes:
On 06/26/2012 07:30 PM, Tom Lane wrote:
I solved this via fcntl locking.
No, you didn't, because fcntl locks aren't inherited by child processes.
Too bad, because they'd be a great
Magnus Hagander mag...@hagander.net writes:
On Wed, Jun 27, 2012 at 3:50 AM, Tom Lane t...@sss.pgh.pa.us wrote:
I wonder whether this design can be adapted to Windows? IIRC we do
not have a bulletproof data directory lock scheme for Windows.
It seems like this makes few enough demands on the
Robert Haas robertmh...@gmail.com writes:
On Wed, Jun 27, 2012 at 12:00 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Would Posix shmem help with that at all? Why did you choose not to
use the Posix API, anyway?
It seemed more complicated. If we use the POSIX API, we've got to
have code to find a
All,
* Tom Lane (t...@sss.pgh.pa.us) wrote:
Robert Haas robertmh...@gmail.com writes:
On Wed, Jun 27, 2012 at 12:00 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Would Posix shmem help with that at all? Why did you choose not to
use the Posix API, anyway?
It seemed more complicated. If we
* Tom Lane (t...@sss.pgh.pa.us) wrote:
Right, but does it provide honest protection against starting two
postmasters in the same data directory? Or more to the point,
does it prevent starting a new postmaster when the old postmaster
crashed but there are still orphaned backends making
On Wed, Jun 27, 2012 at 3:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Magnus Hagander mag...@hagander.net writes:
On Wed, Jun 27, 2012 at 3:50 AM, Tom Lane t...@sss.pgh.pa.us wrote:
I wonder whether this design can be adapted to Windows? IIRC we do
not have a bulletproof data directory lock
Magnus Hagander mag...@hagander.net writes:
On Wed, Jun 27, 2012 at 3:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
AFAIR we basically punted on those problems for the Windows port,
for lack of an equivalent to nattch.
No, we spent a lot of time trying to *fix* it, and IIRC we did.
OK, in that
On Wed, Jun 27, 2012 at 9:44 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Wed, Jun 27, 2012 at 12:00 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Would Posix shmem help with that at all? Why did you choose not to
use the Posix API, anyway?
It seemed more
On Wed, Jun 27, 2012 at 9:52 AM, Stephen Frost sfr...@snowman.net wrote:
What this all boils down to is- can you have a shm segment that goes
away when no one is still attached to it, but actually give it a name
and then detect if it already exists atomically on startup on
Linux/Unixes? If
On Jun 27, 2012, at 7:34 AM, Robert Haas wrote:
On Wed, Jun 27, 2012 at 12:00 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
So, here's a patch. Instead of using POSIX shmem, I just took the
expedient of using mmap() to map a block of
On Wed, Jun 27, 2012 at 9:44 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Wed, Jun 27, 2012 at 12:00 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Would Posix shmem help with that at all? Why did you choose not to
use the Posix API, anyway?
It seemed more
Excerpts from Josh Berkus's message of mar jun 26 15:49:59 -0400 2012:
Robert, all:
Last I checked, we had a reasonably acceptable patch to use mostly Posix
Shared mem with a very small sysv ram partition. Is there anything
keeping this from going into 9.3? It would eliminate a major
On Tue, Jun 26, 2012 at 4:29 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Excerpts from Josh Berkus's message of mar jun 26 15:49:59 -0400 2012:
Robert, all:
Last I checked, we had a reasonably acceptable patch to use mostly Posix
Shared mem with a very small sysv ram partition. Is
On 6/26/12 2:13 PM, Robert Haas wrote:
On Tue, Jun 26, 2012 at 4:29 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Excerpts from Josh Berkus's message of mar jun 26 15:49:59 -0400 2012:
Robert, all:
Last I checked, we had a reasonably acceptable patch to use mostly Posix
Shared mem
On Tue, Jun 26, 2012 at 2:18 PM, Josh Berkus j...@agliodbs.com wrote:
On 6/26/12 2:13 PM, Robert Haas wrote:
On Tue, Jun 26, 2012 at 4:29 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Excerpts from Josh Berkus's message of mar jun 26 15:49:59 -0400 2012:
Robert, all:
Last I checked,
On Tue, Jun 26, 2012 at 5:18 PM, Josh Berkus j...@agliodbs.com wrote:
On 6/26/12 2:13 PM, Robert Haas wrote:
On Tue, Jun 26, 2012 at 4:29 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Excerpts from Josh Berkus's message of mar jun 26 15:49:59 -0400 2012:
Robert, all:
Last I checked,
On that, I used to be of the opinion that this is a good compromise (a
small amount of interlock space, plus mostly posix shmem), but I've
heard since then (I think via AgentM indirectly, but I'm not sure)
that there are cases where even the small SysV segment can cause
problems -- notably
On Tue, Jun 26, 2012 at 5:44 PM, Josh Berkus j...@agliodbs.com wrote:
On that, I used to be of the opinion that this is a good compromise (a
small amount of interlock space, plus mostly posix shmem), but I've
heard since then (I think via AgentM indirectly, but I'm not sure)
that there are
Excerpts from Daniel Farina's message of mar jun 26 17:40:16 -0400 2012:
On that, I used to be of the opinion that this is a good compromise (a
small amount of interlock space, plus mostly posix shmem), but I've
heard since then (I think via AgentM indirectly, but I'm not sure)
that there
On Tue, Jun 26, 2012 at 2:53 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Excerpts from Daniel Farina's message of mar jun 26 17:40:16 -0400 2012:
On that, I used to be of the opinion that this is a good compromise (a
small amount of interlock space, plus mostly posix shmem), but I've
On Jun 26, 2012, at 5:44 PM, Josh Berkus wrote:
On that, I used to be of the opinion that this is a good compromise (a
small amount of interlock space, plus mostly posix shmem), but I've
heard since then (I think via AgentM indirectly, but I'm not sure)
that there are cases where even the
This can be trivially reproduced if one runs an old (SysV shared
memory-based) postgresql alongside a potentially newer postgresql with a
smaller SysV segment. This can occur with applications that bundle postgresql
as part of the app.
I'm not saying it doesn't happen at all. I'm saying
Robert Haas robertmh...@gmail.com writes:
So, what about keeping a FIFO in the data directory?
Hm, does that work if the data directory is on NFS? Or some other weird
not-really-Unix file system?
When the
postmaster starts up, it tries to open the file with O_NONBLOCK |
O_WRONLY (or
On Jun 26, 2012, at 6:12 PM, Daniel Farina wrote:
(Emphasis mine).
I don't think that -hackers at the time gave the zero-shmem rationale
much weight (I also was not that happy about the safety mechanism of
that patch), but upon more reflection (and taking into account *other*
software
Josh Berkus j...@agliodbs.com writes:
So let's fix the 80% case with something we feel confident in, and then
revisit the no-sysv interlock as a separate patch. That way if we can't
fix the interlock issues, we still have a reduced-shmem version of Postgres.
Yes. Insisting that we have the
Tom Lane t...@sss.pgh.pa.us wrote:
In the meantime, insisting that we solve this problem before we do
anything is a good recipe for ensuring that nothing happens, just
like it hasn't happened for the last half dozen years. (I see
Alvaro just made the same point.)
And now so has Josh.
+1
A.M. age...@themactionfaction.com writes:
This can be trivially reproduced if one runs an old (SysV shared
memory-based) postgresql alongside a potentially newer postgresql with a
smaller SysV segment. This can occur with applications that bundle postgresql
as part of the app.
I don't
Excerpts from Tom Lane's message of mar jun 26 18:58:45 -0400 2012:
Even if you actively try to configure the shmem settings to exactly
fill shmmax (which I concede some installation scripts might do),
it's going to be hard to do because of the 8K granularity of the main
knob,
A.M. age...@themactionfaction.com writes:
On Jun 26, 2012, at 6:12 PM, Daniel Farina wrote:
I'm simply suggesting that for additional benefits it may be worth
thinking about getting around nattach and thus SysV shmem, especially
with regard to safety, in an open-ended way.
I solved this via
On 06/26/2012 07:30 PM, Tom Lane wrote:
A.M. age...@themactionfaction.com writes:
On Jun 26, 2012, at 6:12 PM, Daniel Farina wrote:
I'm simply suggesting that for additional benefits it may be worth
thinking about getting around nattach and thus SysV shmem, especially
with regard to safety, in
On 06/26/2012 07:15 PM, Alvaro Herrera wrote:
Excerpts from Tom Lane's message of mar jun 26 18:58:45 -0400 2012:
Even if you actively try to configure the shmem settings to exactly
fill shmmax (which I concede some installation scripts might do),
it's going to be hard to do because of the 8K
On Tue, Jun 26, 2012 at 6:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
So, what about keeping a FIFO in the data directory?
Hm, does that work if the data directory is on NFS? Or some other weird
not-really-Unix file system?
I would expect NFS to work
A.M. age...@themactionfaction.com writes:
On 06/26/2012 07:30 PM, Tom Lane wrote:
I solved this via fcntl locking.
No, you didn't, because fcntl locks aren't inherited by child processes.
Too bad, because they'd be a great solution otherwise.
You claimed this last time and I replied:
I wrote:
Reflecting on this further, it seems to me that the main remaining
failure modes are (1) file locking doesn't work, or (2) idiot DBA
manually removes the lock file.
Oh, wait, I just remembered the really fatal problem here: to quote from
the SUS fcntl spec,
All locks
On Tue, Jun 26, 2012 at 6:25 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Josh Berkus j...@agliodbs.com writes:
So let's fix the 80% case with something we feel confident in, and then
revisit the no-sysv interlock as a separate patch. That way if we can't
fix the interlock issues, we still have a
Robert Haas robertmh...@gmail.com writes:
So, here's a patch. Instead of using POSIX shmem, I just took the
expedient of using mmap() to map a block of MAP_SHARED|MAP_ANONYMOUS
memory. The sysv shm is still allocated, but it's just a copy of
PGShmemHeader; the real shared memory is the