On Tue, Nov 15, 2016 at 5:31 PM, Robert Haas wrote:
> I think we should develop versions of this that (1) allocate from the
> main shared memory segment and (2) allocate from backend-private
> memory. Per my previous benchmarking results, allocating from
> backend-private
On Fri, Dec 2, 2016 at 3:46 PM, Thomas Munro
wrote:
> Here's a patch to provide the right format string for dsa_pointer to
> printf-like functions, which clears a warning coming from dsa_dump (a
> debugging function) on 32 bit systems.
Committed.
--
Robert Haas
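For context, the fix boils down to a width-aware format macro, since
dsa_pointer is 64 bits wide on most builds but 32 bits on others. A
rough sketch of the shape of the committed macro and its use; the
helper and variable names here are illustrative:

#include "postgres.h"
#include "utils/dsa.h"

/*
 * dsa.h defines DSA_POINTER_FORMAT to match dsa_pointer's width,
 * roughly "%016" INT64_MODIFIER "x" when dsa_pointer is 64 bits
 * and "%08x" when it is 32 bits.
 */
static void
print_pointer(dsa_pointer dp)
{
    fprintf(stderr, "object at " DSA_POINTER_FORMAT "\n", dp);
}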
On Sat, Dec 3, 2016 at 9:02 AM, Robert Haas wrote:
> On Fri, Dec 2, 2016 at 2:56 PM, Robert Haas wrote:
>> On Fri, Dec 2, 2016 at 1:21 PM, Robert Haas wrote:
>>> On Thu, Dec 1, 2016 at 6:33 AM, Thomas Munro
>>>
On Fri, Dec 2, 2016 at 2:56 PM, Robert Haas wrote:
> On Fri, Dec 2, 2016 at 1:21 PM, Robert Haas wrote:
>> On Thu, Dec 1, 2016 at 6:33 AM, Thomas Munro
>> wrote:
>>> Please find attached dsa-v8.patch, and also a small
On Fri, Dec 2, 2016 at 1:21 PM, Robert Haas wrote:
> On Thu, Dec 1, 2016 at 6:33 AM, Thomas Munro
> wrote:
>> Please find attached dsa-v8.patch, and also a small test module for
>> running random allocate/free exercises and dumping the
On Thu, Dec 1, 2016 at 6:33 AM, Thomas Munro
wrote:
> Please find attached dsa-v8.patch, and also a small test module for
> running random allocate/free exercises and dumping the internal
> allocator state.
OK, I've committed the main patch. As far as
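For readers catching up, a minimal sketch of the committed allocator's
core interface as called from backend code; this is not taken from the
patch, and the function name, tranche id, and sizes are illustrative:

#include "postgres.h"
#include "utils/dsa.h"

static void
dsa_smoke_test(int tranche_id)
{
    dsa_area   *area = dsa_create(tranche_id);  /* backed by DSM segments */
    dsa_pointer dp = dsa_allocate(area, 1024);  /* segment-relative pointer */
    char       *p = dsa_get_address(area, dp);  /* address in this backend */

    memset(p, 0, 1024);
    dsa_free(area, dp);     /* space returns to the free page manager */
    dsa_detach(area);
}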
On Thu, Dec 1, 2016 at 10:33 PM, Thomas Munro wrote:
>
> Please find attached dsa-v8.patch, and also a small test module for
> running random allocate/free exercises and dumping the internal
> allocator state.
Moved to next CF with "needs review" status.
More review:
+ * For large objects, we just stick all of the allocations in fullness class
+ * 0. Since we can just return the space directly to the free page manager,
+ * we don't really need them on a list at all, except that if someone wants
+ * to bulk release everything allocated using this
On Wed, Nov 23, 2016 at 7:07 AM, Thomas Munro
wrote:
> Those let you create an area in existing memory (in a DSM segment,
> traditional inherited shmem). The in-place versions will still create
> DSM segments on demand as required, though I suppose if you wanted to
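A rough sketch of the in-place variant being described, assuming the
dsa_create_in_place() signature from dsa.h; the function name and sizes
are illustrative:

#include "postgres.h"
#include "storage/dsm.h"
#include "utils/dsa.h"

/* Carve a dsa_area out of the start of an existing DSM segment. */
static dsa_area *
make_area_in_segment(dsm_segment *seg, size_t size, int tranche_id)
{
    void   *place = dsm_segment_address(seg);

    return dsa_create_in_place(place, size, tranche_id, seg);
}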
On Thu, Nov 10, 2016 at 12:37 AM, Thomas Munro
wrote:
> On Tue, Nov 1, 2016 at 5:06 PM, Thomas Munro
> wrote:
>> On Wed, Oct 5, 2016 at 11:28 PM, Thomas Munro
>> wrote:
>>> [dsa-v3.patch]
>>
>> Here is
On Wed, Oct 5, 2016 at 3:00 AM, Thomas Munro
wrote:
> Here's a new version that does that.
While testing this patch I found some issues:
+ total_size = DSA_INITIAL_SEGMENT_SIZE;
+ total_pages = total_size / FPM_PAGE_SIZE;
+ metadata_bytes =
+
On Tue, Apr 15, 2014 at 10:46 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Wed, Apr 16, 2014 at 3:01 AM, Robert Haas robertmh...@gmail.com wrote:
On Tue, Apr 15, 2014 at 12:33 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
On Mon, Apr 14, 2014 at 10:03 PM, Robert Haas
On Tue, Apr 15, 2014 at 12:33 AM, Amit Kapila amit.kapil...@gmail.com wrote:
On Mon, Apr 14, 2014 at 10:03 PM, Robert Haas robertmh...@gmail.com wrote:
On Sat, Apr 12, 2014 at 1:32 AM, Amit Kapila amit.kapil...@gmail.com wrote:
I have checked that other places in the code also check the handle to
On Wed, Apr 16, 2014 at 3:01 AM, Robert Haas robertmh...@gmail.com wrote:
On Tue, Apr 15, 2014 at 12:33 AM, Amit Kapila amit.kapil...@gmail.com wrote:
On Mon, Apr 14, 2014 at 10:03 PM, Robert Haas robertmh...@gmail.com wrote:
For the create case, I'm wondering if we should put the block that
On Sat, Apr 12, 2014 at 1:32 AM, Amit Kapila amit.kapil...@gmail.com wrote:
On Wed, Apr 9, 2014 at 9:20 PM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Apr 9, 2014 at 7:41 AM, Amit Kapila amit.kapil...@gmail.com wrote:
I am just not sure whether it is okay to rearrange the code and call
On Mon, Apr 14, 2014 at 10:03 PM, Robert Haas robertmh...@gmail.com wrote:
On Sat, Apr 12, 2014 at 1:32 AM, Amit Kapila amit.kapil...@gmail.com wrote:
I have checked that other places in the code also check the handle to
decide if the API has failed; see, for example, PGSharedMemoryIsInUse().
So I think the fix
On Wed, Apr 9, 2014 at 9:20 PM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Apr 9, 2014 at 7:41 AM, Amit Kapila amit.kapil...@gmail.com wrote:
I am just not sure whether it is okay to rearrange the code and call
GetLastError() only if the returned handle is invalid (NULL), or to try to look
for
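To make the behavior in question concrete: CreateFileMapping() can
return a valid handle even when the named object already existed, in
which case GetLastError() reports ERROR_ALREADY_EXISTS, so an existence
check cannot rely on the handle alone. A hedged sketch, not the actual
win32_shmem.c code; the helper and mapping name are made up:

#include <windows.h>

static BOOL
segment_name_in_use(const char *name, DWORD size)
{
    HANDLE h = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                 0, size, name);

    if (h == NULL)
        return FALSE;           /* creation failed outright */
    if (GetLastError() == ERROR_ALREADY_EXISTS)
    {
        CloseHandle(h);         /* someone else owns this name */
        return TRUE;
    }
    CloseHandle(h);             /* we created it; drop it again for this sketch */
    return FALSE;
}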
On Tue, Apr 8, 2014 at 9:15 PM, Robert Haas robertmh...@gmail.com wrote:
Apparently not. However, I'm fairly sure this is a step toward
addressing the complaints previously raised, even if there may be some
details people still want changed, so I've gone ahead and committed
it.
Few
On Wed, Apr 9, 2014 at 7:41 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Few Observations:
1. One new warning has been introduced in the code:
src\backend\port\win32_shmem.c(295): warning C4013:
'dsm_set_control_handle' undefined; assuming extern returning int
Attached patch fixes this
On 2014-04-09 11:50:33 -0400, Robert Haas wrote:
One question:
1. I have seen that initdb still creates pg_dynshmem; is it required
after your latest changes?
It's only used now if dynamic_shared_memory_type = mmap. I know
Andres was never a huge fan of the mmap implementation, so we
On Fri, Apr 4, 2014 at 10:01 AM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Jan 22, 2014 at 10:17 AM, Noah Misch n...@leadboat.com wrote:
Yeah, abandoning the state file is looking attractive.
Here's a draft patch getting rid of the state file. This should
address concerns raised by
On Wed, Jan 22, 2014 at 10:17 AM, Noah Misch n...@leadboat.com wrote:
Yeah, abandoning the state file is looking attractive.
Here's a draft patch getting rid of the state file. This should
address concerns raised by Heikki and Fujii Masao and echoed by Tom
that dynamic shared memory behaves
On Mon, Feb 10, 2014 at 7:17 PM, Kohei KaiGai kai...@kaigai.gr.jp wrote:
Would it cause another problem if dsm_detach() also released lwlocks
allocated on the dsm segment being released?
LWLocks being held are tracked in the held_lwlocks[] array; its length is
usually 100. In case when
2014-02-08 4:52 GMT+09:00 Robert Haas robertmh...@gmail.com:
On Tue, Jan 21, 2014 at 11:37 AM, Robert Haas robertmh...@gmail.com wrote:
One idea I just had is to improve the dsm_toc module so that it can
optionally set up a tranche of lwlocks for you, and provide some
analogues of
On Tue, Jan 21, 2014 at 11:37 AM, Robert Haas robertmh...@gmail.com wrote:
One idea I just had is to improve the dsm_toc module so that it can
optionally set up a tranche of lwlocks for you, and provide some
analogues of RequestAddinLWLocks and LWLockAssign for that case. That
would probably
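For context, a rough sketch of how the existing table-of-contents
module (shm_toc in the tree) is driven today, which the proposed
convenience layer would build on; the magic number, key, and function
name are arbitrary:

#include "postgres.h"
#include "storage/shm_toc.h"

#define EXAMPLE_MAGIC 0x50474442    /* arbitrary */

static void
example_setup_toc(void *seg_address, Size seg_size, Size state_size)
{
    shm_toc *toc = shm_toc_create(EXAMPLE_MAGIC, seg_address, seg_size);
    void    *state = shm_toc_allocate(toc, state_size);

    shm_toc_insert(toc, 0, state);  /* key 0 -> shared state */
}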
On Thu, Jan 23, 2014 at 11:10 AM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Jan 22, 2014 at 12:42 PM, Andres Freund and...@2ndquadrant.com
wrote:
On 2014-01-22 12:40:34 -0500, Robert Haas wrote:
On Wed, Jan 22, 2014 at 12:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Andres Freund
Isn't it necessary to have an interface to initialize an LWLock structure
allocated on a dynamic shared memory segment?
Even though the LWLock structure is exposed in lwlock.h, we have no common
way to initialize it.
How about having the following function?
void
InitLWLock(LWLock *lock)
{
On 2014-01-23 23:03:40 +0900, Kohei KaiGai wrote:
Isn't it necessary to have an interface to initialize an LWLock structure
allocated on a dynamic shared memory segment?
Even though the LWLock structure is exposed in lwlock.h, we have no common
way to initialize it.
There's LWLockInitialize()
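A minimal sketch of using it for a lock embedded in a shared structure;
the struct, function, and tranche id are illustrative, not from the
thread:

#include "postgres.h"
#include "storage/lwlock.h"

typedef struct ExampleSharedState
{
    LWLock      lock;       /* lives wherever the struct lives, e.g. a DSM segment */
    int         counter;
} ExampleSharedState;

static void
example_init(ExampleSharedState *state, int tranche_id)
{
    LWLockInitialize(&state->lock, tranche_id);
    state->counter = 0;
}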
2014/1/23 Andres Freund and...@2ndquadrant.com:
On 2014-01-23 23:03:40 +0900, Kohei KaiGai wrote:
Isn't it necessary to have an interface to initialize an LWLock structure
allocated on a dynamic shared memory segment?
Even though the LWLock structure is exposed in lwlock.h, we have no common
On Wed, Jan 22, 2014 at 12:42 PM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-01-22 12:40:34 -0500, Robert Haas wrote:
On Wed, Jan 22, 2014 at 12:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Andres Freund and...@2ndquadrant.com writes:
Shouldn't we introduce a typedef LWLock* LWLockid;
On Tue, Jan 21, 2014 at 2:58 PM, Noah Misch n...@leadboat.com wrote:
What do people prefer?
I recommend performing cleanup on the control segment named in PGShmemHeader
just before shmdt() in PGSharedMemoryCreate(). No new ERROR or WARNING sites
are necessary. Have dsm_postmaster_startup()
On Wed, Jan 22, 2014 at 09:32:09AM -0500, Robert Haas wrote:
On Tue, Jan 21, 2014 at 2:58 PM, Noah Misch n...@leadboat.com wrote:
What do people prefer?
I recommend performing cleanup on the control segment named in PGShmemHeader
just before shmdt() in PGSharedMemoryCreate(). No new
On 2014-01-10 13:11:32 -0500, Robert Haas wrote:
OK, I've implemented this: here's what I believe to be a complete
patch, based on the previous lwlock-pointers.patch but now handling
LOCK_DEBUG and TRACE_LWLOCKS and dtrace and a bunch of other loose
ends. I think this should be adequate for
Andres Freund and...@2ndquadrant.com writes:
Shouldn't we introduce a typedef LWLock* LWLockid; or something to avoid
breaking external code using lwlocks?
+1, in fact there's probably no reason to touch most *internal* code using
that type name either.
regards, tom
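The suggested shim amounts to something like this, so that external
code naming the old type keeps compiling:

typedef LWLock *LWLockId;   /* compatibility after LWLocks become pointers */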
On Wed, Jan 22, 2014 at 12:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Andres Freund and...@2ndquadrant.com writes:
Shouldn't we introduce a typedef LWLock* LWLockid; or something to avoid
breaking external code using lwlocks?
+1, in fact there's probably no reason to touch most *internal* code
On 2014-01-22 12:40:34 -0500, Robert Haas wrote:
On Wed, Jan 22, 2014 at 12:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Andres Freund and...@2ndquadrant.com writes:
Shouldn't we introduce a typedef LWLock* LWLockid; or something to avoid
breaking external code using lwlocks?
+1, in fact
On Mon, Jan 20, 2014 at 11:23 PM, KaiGai Kohei kai...@ak.jp.nec.com wrote:
I briefly checked the patch. Most of the lines are a mechanical replacement
of LWLockId with LWLock *, and the compiler didn't complain about anything with
the -Wall -Werror options.
My concern is around the LWLockTranche mechanism. Isn't it too
On Wed, Dec 18, 2013 at 12:21:08PM -0500, Robert Haas wrote:
On Tue, Dec 10, 2013 at 6:26 PM, Tom Lane t...@sss.pgh.pa.us wrote:
The larger point is that such a shutdown process has never in the history
of Postgres been successful at removing shared-memory (or semaphore)
resources. I do
(2014/01/22 1:37), Robert Haas wrote:
On Mon, Jan 20, 2014 at 11:23 PM, KaiGai Kohei kai...@ak.jp.nec.com wrote:
I briefly checked the patch. Most of the lines are a mechanical replacement
of LWLockId with LWLock *, and the compiler didn't complain about anything with
the -Wall -Werror options.
My concern is around
(2014/01/11 3:11), Robert Haas wrote:
On Mon, Jan 6, 2014 at 5:50 PM, Robert Haas robertmh...@gmail.com wrote:
This is only part of the solution, of course: a complete solution will
involve making the hash table key something other than the lock ID.
What I'm thinking we can do is make the
On 2014-01-06 21:35:22 -0300, Alvaro Herrera wrote:
Jim Nasby wrote:
On 1/6/14, 2:59 PM, Robert Haas wrote:
On Mon, Jan 6, 2014 at 3:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
The point I'm making is that no such code should get past review,
whether it's got an obvious performance
On Tue, Jan 7, 2014 at 6:54 AM, Andres Freund and...@2ndquadrant.com wrote:
Maybe it makes sense to have such a check #ifdef'ed out on most builds
to avoid extra overhead, but not having any check at all just because we
trust the review process too much doesn't strike me as the best of
ideas.
On 01/05/2014 07:56 PM, Robert Haas wrote:
Right now, storing spinlocks in dynamic shared memory *almost* works,
but there are problems with --disable-spinlocks. In that
configuration, we use semaphores to simulate spinlocks. Every time
someone calls SpinLockInit(), it's going to allocate a
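For concreteness, the usage pattern at issue, as a hedged sketch with
an illustrative struct rather than code from any patch; under
--disable-spinlocks, the SpinLockInit() call consumes one semaphore
from a fixed pool:

#include "postgres.h"
#include "storage/spin.h"

typedef struct ExampleDsmState
{
    slock_t     mutex;      /* a spinlock placed in dynamic shared memory */
    int         refcount;
} ExampleDsmState;

static void
example_setup(ExampleDsmState *state)
{
    SpinLockInit(&state->mutex);    /* with --disable-spinlocks: grabs a semaphore */
    state->refcount = 0;

    SpinLockAcquire(&state->mutex);
    state->refcount++;
    SpinLockRelease(&state->mutex);
}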
On 2014-01-06 10:35:59 +0200, Heikki Linnakangas wrote:
On 01/05/2014 07:56 PM, Robert Haas wrote:
Right now, storing spinlocks in dynamic shared memory *almost* works,
but there are problems with --disable-spinlocks. In that
configuration, we use semaphores to simulate spinlocks. Every time
* Robert Haas (robertmh...@gmail.com) wrote:
Another idea is to include some identifying information in the lwlock.
That was my immediate reaction to this issue...
For example, each lwlock could have a char *name in it, and we could
print the name. In theory, this could be a big step
Andres Freund and...@2ndquadrant.com writes:
On 2014-01-06 10:35:59 +0200, Heikki Linnakangas wrote:
That assumes that you never hold more than one spinlock at a time, otherwise
you can get deadlocks. I think that assumption holds currently, because
acquiring two spinlocks at a time would be
Robert Haas robertmh...@gmail.com writes:
I guess the question boils down to: why are we keeping
--disable-spinlocks around? If we're expecting that people might
really use it for serious work, then it needs to remain and it needs
to work with dynamic shared memory. If we're expecting that
On 2014-01-06 09:59:49 -0500, Tom Lane wrote:
Andres Freund and...@2ndquadrant.com writes:
On 2014-01-06 10:35:59 +0200, Heikki Linnakangas wrote:
That assumes that you never hold more than one spinlock at a time,
otherwise
you can get deadlocks. I think that assumption holds currently,
On Mon, Jan 6, 2014 at 9:59 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Andres Freund and...@2ndquadrant.com writes:
On 2014-01-06 10:35:59 +0200, Heikki Linnakangas wrote:
That assumes that you never hold more than one spinlock at a time, otherwise
you can get deadlocks. I think that assumption
On Mon, Jan 6, 2014 at 11:22 AM, Tom Lane t...@sss.pgh.pa.us wrote:
I think we can eliminate the first of those. Semaphores for spinlocks
were a performance disaster fifteen years ago, and the situation has
surely only gotten worse since then. I do, however, believe that
--disable-spinlocks
Andres Freund and...@2ndquadrant.com writes:
On 2014-01-05 14:06:52 -0500, Tom Lane wrote:
I seem to recall that there was some good reason for keeping all the
LWLocks in an array, back when the facility was first designed.
I'm too lazy to research the point right now, but you might want to
Robert Haas robertmh...@gmail.com writes:
Well, I took a look at this and it turns out not to be very hard, so
here's a patch. Currently, we allocate 3 semaphores per shared buffer
and a bunch of others, but the 3 per shared buffer dominates, so you
end up with ~49k spinlocks for the default
On Mon, Jan 6, 2014 at 2:48 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Well, I took a look at this and it turns out not to be very hard, so
here's a patch. Currently, we allocate 3 semaphores per shared buffer
and a bunch of others, but the 3 per shared
On Mon, Jan 6, 2014 at 1:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
OTOH, the LWLock mechanism has been stable for long enough now that
we can probably suppose this struct is no more subject to churn than
any other widely-known one, so maybe that consideration is no longer
significant.
On the
Robert Haas robertmh...@gmail.com writes:
On Mon, Jan 6, 2014 at 2:48 PM, Tom Lane t...@sss.pgh.pa.us wrote:
-1 for the any_spinlock_held business (useless overhead IMO, as it doesn't
have anything whatsoever to do with enforcing the actual coding rule).
Hmm. I thought that was a pretty
Robert Haas robertmh...@gmail.com writes:
On Mon, Jan 6, 2014 at 1:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
OTOH, the LWLock mechanism has been stable for long enough now that
we can probably suppose this struct is no more subject to churn than
any other widely-known one, so maybe that
On Mon, Jan 6, 2014 at 3:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Mon, Jan 6, 2014 at 2:48 PM, Tom Lane t...@sss.pgh.pa.us wrote:
-1 for the any_spinlock_held business (useless overhead IMO, as it doesn't
have anything whatsoever to do with
On Mon, Jan 6, 2014 at 3:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Mon, Jan 6, 2014 at 1:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
OTOH, the LWLock mechanism has been stable for long enough now that
we can probably suppose this struct is no more
Robert Haas robertmh...@gmail.com writes:
On Mon, Jan 6, 2014 at 3:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I agree it'd be nicer if we had some better way than mere manual
inspection to enforce proper use of spinlocks; but this change doesn't
seem to me to move the ball downfield by any
On Mon, Jan 6, 2014 at 3:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Mon, Jan 6, 2014 at 3:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I agree it'd be nicer if we had some better way than mere manual
inspection to enforce proper use of spinlocks; but
On Mon, Jan 6, 2014 at 9:48 AM, Stephen Frost sfr...@snowman.net wrote:
None of these ideas are a complete solution for LWLOCK_STATS. In the
other three cases noted above, we only need an identifier for the lock
instantaneously, so that we can pass it off to the logger or dtrace
or whatever.
On 1/6/14, 2:59 PM, Robert Haas wrote:
On Mon, Jan 6, 2014 at 3:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Mon, Jan 6, 2014 at 3:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I agree it'd be nicer if we had some better way than mere manual
inspection to
Jim Nasby wrote:
On 1/6/14, 2:59 PM, Robert Haas wrote:
On Mon, Jan 6, 2014 at 3:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
The point I'm making is that no such code should get past review,
whether it's got an obvious performance problem or not.
Sure, I agree, but we all make mistakes.
One of the things that you might want to do with dynamic shared memory
is store a lock in it. In fact, my bet is that almost everything that
uses dynamic shared memory will want to do precisely that, because, of
course, it's dynamic *shared* memory, which means that it is
concurrently accessed by
On 2014-01-05 12:56:05 -0500, Robert Haas wrote:
Right now, storing spinlocks in dynamic shared memory *almost* works,
but there are problems with --disable-spinlocks. In that
configuration, we use semaphores to simulate spinlocks. Every time
someone calls SpinLockInit(), it's going to
Robert Haas robertmh...@gmail.com writes:
For what it's worth, my vote is currently for #2. I can't think of
many interesting things to do with dynamic shared memory without having at
least spinlocks, so I don't think we'd be losing much. #1 seems
needlessly unfriendly, #3 seems like a lot of work
On 2014-01-05 14:06:52 -0500, Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
For what it's worth, my vote is currently for #2. I can't think of
many interesting things to do with dynamic shared memory without having at
least spinlocks, so I don't think we'd be losing much. #1 seems
On Sun, Jan 5, 2014 at 2:06 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I seem to recall that there was some good reason for keeping all the
LWLocks in an array, back when the facility was first designed.
I'm too lazy to research the point right now, but you might want to
go back and look at the
On Tue, Dec 10, 2013 at 6:26 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Noah Misch n...@leadboat.com writes:
On Tue, Dec 10, 2013 at 07:50:20PM +0200, Heikki Linnakangas wrote:
Let's not add more cases like that, if we can avoid it.
Only if we can avoid it for a modicum of effort and feature
On Thu, Dec 05, 2013 at 06:12:48PM +0200, Heikki Linnakangas wrote:
On 11/20/2013 09:58 PM, Robert Haas wrote:
On Wed, Nov 20, 2013 at 8:32 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
* As discussed in the Something fishy happening on frogmouth thread, I
don't like the fact that
On 12/10/2013 07:27 PM, Noah Misch wrote:
On Thu, Dec 05, 2013 at 06:12:48PM +0200, Heikki Linnakangas wrote:
On 11/20/2013 09:58 PM, Robert Haas wrote:
On Wed, Nov 20, 2013 at 8:32 AM, Heikki Linnakangas hlinnakan...@vmware.com
wrote:
* As discussed in the Something fishy happening on
On Tue, Dec 10, 2013 at 07:50:20PM +0200, Heikki Linnakangas wrote:
On 12/10/2013 07:27 PM, Noah Misch wrote:
On Thu, Dec 05, 2013 at 06:12:48PM +0200, Heikki Linnakangas wrote:
On Wed, Nov 20, 2013 at 8:32 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
* As discussed in the Something
On 2013-12-10 18:12:53 -0500, Noah Misch wrote:
On Tue, Dec 10, 2013 at 07:50:20PM +0200, Heikki Linnakangas wrote:
On 12/10/2013 07:27 PM, Noah Misch wrote:
On Thu, Dec 05, 2013 at 06:12:48PM +0200, Heikki Linnakangas wrote:
Let's not add more cases like that, if we can avoid it.
Only
Noah Misch n...@leadboat.com writes:
On Tue, Dec 10, 2013 at 07:50:20PM +0200, Heikki Linnakangas wrote:
Let's not add more cases like that, if we can avoid it.
Only if we can avoid it for a modicum of effort and feature compromise.
You're asking for PostgreSQL to reshape its use of
On Thu, Dec 5, 2013 at 4:06 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
That's a very interesting idea. I've been thinking that we needed to
preserve the property that new workers could attach to the shared
memory segment at any time, but that might not be necessary in all
cases. We
On 11/20/2013 09:58 PM, Robert Haas wrote:
On Wed, Nov 20, 2013 at 8:32 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
How many allocations? What size will they typically have, minimum and
maximum?
The facility is intended to be general, so the answer could vary
widely by
On Thu, Dec 5, 2013 at 11:12 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Hmm. Those two use cases are quite different. For message-passing, you want
a lot of small queues, but for parallel sort, you want one huge allocation.
I wonder if we shouldn't even try a one-size-fits-all
On 12/05/2013 09:34 PM, Robert Haas wrote:
On Thu, Dec 5, 2013 at 11:12 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
One idea is to create the shared memory object with shm_open, and wait until
all the worker processes that need it have attached to it. Then,
shm_unlink() it, before
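In plain POSIX terms, the create-then-unlink idea looks roughly like
this; the segment name is made up and error handling is omitted. The
name vanishes at shm_unlink(), but the memory itself persists until the
last attached process unmaps it:

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static void *
create_then_unlink(size_t size)
{
    int     fd = shm_open("/pg_example", O_CREAT | O_EXCL | O_RDWR, 0600);
    void   *addr;

    ftruncate(fd, (off_t) size);
    addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    /* ... wait until every expected worker has opened and mmap()ed it ... */
    shm_unlink("/pg_example");      /* nothing left to leak after a crash */
    return addr;
}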
On 20/11/13 19:58, Robert Haas wrote:
Parallel sort, and then parallel other stuff. Eventually general
parallel query.
I have recently updated https://wiki.postgresql.org/wiki/Parallel_Sort
and you may find that interesting/helpful as a statement of intent.
I've been playing with an internal
On Sat, Nov 23, 2013 at 4:21 PM, Jeremy Harris j...@wizmail.org wrote:
Its performance shines on partially- or reverse-sorted input.
Search the archives for the work I did on timsort support a while
back. A patch was posted, that had some impressive results provided
you just considered the
I'm trying to catch up on all of this dynamic shared memory stuff. A
bunch of random questions and complaints:
What kind of usage are we trying to cater for with the dynamic shared
memory? How many allocations? What size will they typically have,
minimum and maximum? I looked at the message
On Wed, Nov 20, 2013 at 8:32 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
I'm trying to catch up on all of this dynamic shared memory stuff. A bunch
of random questions and complaints:
What kind of usage are we trying to cater for with the dynamic shared memory?
Parallel sort, and then
On Sun, Oct 13, 2013 at 3:07 AM, Amit Kapila amit.kapil...@gmail.com wrote:
1. Do you think we should add information about the pg_dynshmem file at this link:
http://www.postgresql.org/docs/devel/static/storage-file-layout.html
It contains information about all files/folders in the data directory.
2.
+/*
On Mon, Oct 14, 2013 at 5:11 PM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Oct 13, 2013 at 3:07 AM, Amit Kapila amit.kapil...@gmail.com wrote:
1. Do you think we should add information about the pg_dynshmem file at this link:
http://www.postgresql.org/docs/devel/static/storage-file-layout.html
On Mon, Oct 14, 2013 at 11:11 AM, Amit Kapila amit.kapil...@gmail.com wrote:
During testing, I found one issue in the Windows implementation.
During startup, when it tries to create a new control segment for
dynamic shared memory, it loops until an unused identifier is found,
but for Windows
On Wed, Oct 9, 2013 at 1:10 AM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Sep 26, 2013 at 9:27 AM, Noah Misch n...@leadboat.com wrote:
There's no data corruption problem if we proceed - but there likely
has been one leading to the current state.
+1 for making this one a PANIC,
Since, as has been previously discussed in this forum on multiple
occasions [citation needed], the default System V shared memory limits
are absurdly low on many systems, the dynamic shared memory patch
defaults to POSIX shared memory, which has often been touted as a
superior alternative
On Thu, Oct 10, 2013 at 1:13 PM, Robert Haas robertmh...@gmail.com wrote:
(1) Define the issue as not our problem. IOW, as of now, if you
want to use PostgreSQL, you've got to either make POSIX shared memory
work on your machine, or change the GUC that selects the type of
dynamic shared
On 10/10/2013 12:13 PM, Robert Haas wrote:
Since, as has been previously discussed in this forum on multiple
occasions [citation needed], the default System V shared memory limits
are absurdly low on many systems, the dynamic shared memory patch
defaults to POSIX shared memory, which has often
On Thu, Oct 10, 2013 at 2:21 PM, Andrew Dunstan and...@dunslane.net wrote:
Other votes? Other ideas?
5) test and set it in initdb.
Are you advocating for that option, or just calling out that it's
possible? I'd say that's closely related to option #3, except at
initdb time rather than
On Thu, Oct 10, 2013 at 9:13 AM, Robert Haas robertmh...@gmail.com wrote:
(2) Default to using System V shared memory. If people want POSIX
shared memory, let them change the default.
After some consideration, I think my vote is for option #2.
Wouldn't that become the call of packagers?
On Thu, Oct 10, 2013 at 2:36 PM, Peter Geoghegan p...@heroku.com wrote:
On Thu, Oct 10, 2013 at 9:13 AM, Robert Haas robertmh...@gmail.com wrote:
(2) Default to using System V shared memory. If people want POSIX
shared memory, let them change the default.
After some consideration, I think my
On 10/10/2013 02:35 PM, Robert Haas wrote:
On Thu, Oct 10, 2013 at 2:21 PM, Andrew Dunstan and...@dunslane.net wrote:
Other votes? Other ideas?
5) test and set it in initdb.
Are you advocating for that option, or just calling out that it's
possible? I'd say that's closely related to option
* Robert Haas (robertmh...@gmail.com) wrote:
On Thu, Oct 10, 2013 at 2:36 PM, Peter Geoghegan p...@heroku.com wrote:
On Thu, Oct 10, 2013 at 9:13 AM, Robert Haas robertmh...@gmail.com wrote:
(2) Default to using System V shared memory. If people want POSIX
shared memory, let them change
On Thu, Oct 10, 2013 at 11:13 AM, Robert Haas robertmh...@gmail.com wrote:
Since, as has been previously discussed in this forum on multiple
occasions [citation needed], the default System V shared memory limits
are absurdly low on many systems, the dynamic shared memory patch
defaults to
On 10/10/2013 02:45 PM, Robert Haas wrote:
On Thu, Oct 10, 2013 at 2:36 PM, Peter Geoghegan p...@heroku.com wrote:
On Thu, Oct 10, 2013 at 9:13 AM, Robert Haas robertmh...@gmail.com wrote:
(2) Default to using System V shared memory. If people want POSIX
shared memory, let them change the
On Thu, Oct 10, 2013 at 12:13:20PM -0400, Robert Haas wrote:
Since, as has been previously discussed in this forum on multiple
occasions [citation needed], the default System V shared memory limits
are absurdly low on many systems, the dynamic shared memory patch
defaults to POSIX shared
On Thu, Oct 10, 2013 at 4:00 PM, Merlin Moncure mmonc...@gmail.com wrote:
(2) Default to using System V shared memory. If people want POSIX
shared memory, let them change the default.
Doesn't #2 negate all advantages of this effort? Bringing sysv
management back on the table seems like a
Robert,
Doesn't #2 negate all advantages of this effort? Bringing sysv
management back on the table seems like a giant step backwards -- or
am I missing something?
Not unless there's no difference between the default and the only option.
Well, per our earlier discussion about the first 15
On 2013-10-10 12:13:20 -0400, Robert Haas wrote:
and on smew (Debian GNU/Linux 6.0), it
fails with Function not implemented, which according to a forum
post[1] I found probably indicates that /dev/shm doesn't mount a tmpfs
on that box.
It would be nice to get confirmation of what the reason for