On Tue, Nov 15, 2016 at 5:31 PM, Robert Haas wrote:
> I think we should develop versions of this that (1) allocate from the
> main shared memory segment and (2) allocate from backend-private
> memory. Per my previous benchmarking results, allocating from
> backend-private memory would be a substa
On Fri, Dec 2, 2016 at 3:46 PM, Thomas Munro
wrote:
> Here's a patch to provide the right format string for dsa_pointer to
> printf-like functions, which clears a warning coming from dsa_dump (a
> debugging function) on 32 bit systems.
Committed.
--
Robert Haas
EnterpriseDB: http://www.enterpri
On Sat, Dec 3, 2016 at 9:02 AM, Robert Haas wrote:
> On Fri, Dec 2, 2016 at 2:56 PM, Robert Haas wrote:
>> On Fri, Dec 2, 2016 at 1:21 PM, Robert Haas wrote:
>>> On Thu, Dec 1, 2016 at 6:33 AM, Thomas Munro
>>> wrote:
Please find attached dsa-v8.patch, and also a small test module for
On Fri, Dec 2, 2016 at 2:56 PM, Robert Haas wrote:
> On Fri, Dec 2, 2016 at 1:21 PM, Robert Haas wrote:
>> On Thu, Dec 1, 2016 at 6:33 AM, Thomas Munro
>> wrote:
>>> Please find attached dsa-v8.patch, and also a small test module for
>>> running random allocate/free exercises and dumping the int
On Fri, Dec 2, 2016 at 1:21 PM, Robert Haas wrote:
> On Thu, Dec 1, 2016 at 6:33 AM, Thomas Munro
> wrote:
>> Please find attached dsa-v8.patch, and also a small test module for
>> running random allocate/free exercises and dumping the internal
>> allocator state.
>
> OK, I've committed the main
On Thu, Dec 1, 2016 at 6:33 AM, Thomas Munro
wrote:
> Please find attached dsa-v8.patch, and also a small test module for
> running random allocate/free exercises and dumping the internal
> allocator state.
OK, I've committed the main patch. As far as test-dsa.patch, can we
tie that into make ch
On Thu, Dec 1, 2016 at 10:33 PM, Thomas Munro wrote:
>
> Please find attached dsa-v8.patch, and also a small test module for
> running random allocate/free exercises and dumping the internal
> allocator state.
Moved to next CF with "needs review" status.
Regards,
Hari Babu
Fujitsu Australia
More review:
+ * For large objects, we just stick all of the allocations in fullness class
+ * 0. Since we can just return the space directly to the free page manager,
+ * we don't really need them on a list at all, except that if someone wants
+ * to bulk release everything allocated using this B
On Wed, Nov 23, 2016 at 7:07 AM, Thomas Munro
wrote:
> Those let you create an area in existing memory (in a DSM segment,
> traditional inherited shmem). The in-place versions will still create
> DSM segments on demand as required, though I suppose if you wanted to
> prevent that you could with d
On Thu, Nov 10, 2016 at 12:37 AM, Thomas Munro
wrote:
> On Tue, Nov 1, 2016 at 5:06 PM, Thomas Munro
> wrote:
>> On Wed, Oct 5, 2016 at 11:28 PM, Thomas Munro
>> wrote:
>>> [dsa-v3.patch]
>>
>> Here is a new version which just adds CLOBBER_FREED_MEMORY support to
>> dsa_free.
>
> Here is a new
On Wed, Oct 5, 2016 at 3:00 AM, Thomas Munro
wrote:
> Here's a new version that does that.
While testing this patch I found an issue:
+ total_size = DSA_INITIAL_SEGMENT_SIZE;
+ total_pages = total_size / FPM_PAGE_SIZE;
+ metadata_bytes =
+ MAXALIGN(sizeof(dsa_area_control)) +
+ MAXALIGN(sizeof
On Tue, Apr 15, 2014 at 10:46 PM, Amit Kapila wrote:
> On Wed, Apr 16, 2014 at 3:01 AM, Robert Haas wrote:
>> On Tue, Apr 15, 2014 at 12:33 AM, Amit Kapila
>> wrote:
>>> On Mon, Apr 14, 2014 at 10:03 PM, Robert Haas wrote:
For the create case, I'm wondering if we should put the block that
On Wed, Apr 16, 2014 at 3:01 AM, Robert Haas wrote:
> On Tue, Apr 15, 2014 at 12:33 AM, Amit Kapila wrote:
>> On Mon, Apr 14, 2014 at 10:03 PM, Robert Haas wrote:
>>> For the create case, I'm wondering if we should put the block that
>>> tests for !hmap *before* the _dosmaperr() and check for EE
On Tue, Apr 15, 2014 at 12:33 AM, Amit Kapila wrote:
> On Mon, Apr 14, 2014 at 10:03 PM, Robert Haas wrote:
>> On Sat, Apr 12, 2014 at 1:32 AM, Amit Kapila wrote:
>>> I have checked that other places in the code also check the handle to
>>> decide if the API has failed. Refer to function PGSharedMemoryIsInUse(
On Mon, Apr 14, 2014 at 10:03 PM, Robert Haas wrote:
> On Sat, Apr 12, 2014 at 1:32 AM, Amit Kapila wrote:
>> I have checked that other places in the code also check the handle to
>> decide if the API has failed. Refer to function PGSharedMemoryIsInUse().
>> So I think the fix to call GetLastError() after checking
On Sat, Apr 12, 2014 at 1:32 AM, Amit Kapila wrote:
> On Wed, Apr 9, 2014 at 9:20 PM, Robert Haas wrote:
>> On Wed, Apr 9, 2014 at 7:41 AM, Amit Kapila wrote:
>>> I am just not sure whether it is okay to rearrange the code and call
>>> GetLastError() only if returned handle is Invalid (NULL) or
On Wed, Apr 9, 2014 at 9:20 PM, Robert Haas wrote:
> On Wed, Apr 9, 2014 at 7:41 AM, Amit Kapila wrote:
>> I am just not sure whether it is okay to rearrange the code and call
>> GetLastError() only if returned handle is Invalid (NULL) or try to look
>> for more info.
>
> Well, I don't know eithe
On 2014-04-09 11:50:33 -0400, Robert Haas wrote:
> > One question:
> > 1. I have seen that initdb still creates pg_dynshmem, is it required
> > after your latest changes?
>
> It's only used now if dynamic_shared_memory_type = mmap. I know
> Andres was never a huge fan of the mmap implementation,
On Wed, Apr 9, 2014 at 7:41 AM, Amit Kapila wrote:
> Few Observations:
>
> 1. One new warning has been introduced in code.
> 1>src\backend\port\win32_shmem.c(295): warning C4013:
> 'dsm_set_control_handle' undefined; assuming extern returning int
> Attached patch fixes this warning.
OK, committed
On Tue, Apr 8, 2014 at 9:15 PM, Robert Haas wrote:
> Apparently not. However, I'm fairly sure this is a step toward
> addressing the complaints previously raised, even if there may be some
> details people still want changed, so I've gone ahead and committed
> it.
Few Observations:
1. One new w
On Fri, Apr 4, 2014 at 10:01 AM, Robert Haas wrote:
> On Wed, Jan 22, 2014 at 10:17 AM, Noah Misch wrote:
>> Yeah, abandoning the state file is looking attractive.
>
> Here's a draft patch getting rid of the state file. This should
> address concerns raised by Heikki and Fujii Masao and echoed b
On Wed, Jan 22, 2014 at 10:17 AM, Noah Misch wrote:
> Yeah, abandoning the state file is looking attractive.
Here's a draft patch getting rid of the state file. This should
address concerns raised by Heikki and Fujii Masao and echoed by Tom
that dynamic shared memory behaves differently than the
On Mon, Feb 10, 2014 at 7:17 PM, Kohei KaiGai wrote:
> Does it make another problem if dsm_detach() also releases lwlocks
> being allocated on the dsm segment to be released?
> Lwlocks being held is tracked in the held_lwlocks[] array; its length is
> usually 100. In case when dsm_detach() is call
2014-02-08 4:52 GMT+09:00 Robert Haas :
> On Tue, Jan 21, 2014 at 11:37 AM, Robert Haas wrote:
>> One idea I just had is to improve the dsm_toc module so that it can
>> optionally set up a tranche of lwlocks for you, and provide some
>> analogues of RequestAddinLWLocks and LWLockAssign for that ca
On Tue, Jan 21, 2014 at 11:37 AM, Robert Haas wrote:
> One idea I just had is to improve the dsm_toc module so that it can
> optionally set up a tranche of lwlocks for you, and provide some
> analogues of RequestAddinLWLocks and LWLockAssign for that case. That
> would probably make this quite a
On Thu, Jan 23, 2014 at 11:10 AM, Robert Haas wrote:
> On Wed, Jan 22, 2014 at 12:42 PM, Andres Freund
> wrote:
>> On 2014-01-22 12:40:34 -0500, Robert Haas wrote:
>>> On Wed, Jan 22, 2014 at 12:11 PM, Tom Lane wrote:
>>> > Andres Freund writes:
>>> >> Shouldn't we introduce a typedef LWLock*
On Wed, Jan 22, 2014 at 12:42 PM, Andres Freund wrote:
> On 2014-01-22 12:40:34 -0500, Robert Haas wrote:
>> On Wed, Jan 22, 2014 at 12:11 PM, Tom Lane wrote:
>> > Andres Freund writes:
>> >> Shouldn't we introduce a typedef LWLock* LWLockid; or something to avoid
>> >> breaking external code us
2014/1/23 Andres Freund :
> On 2014-01-23 23:03:40 +0900, Kohei KaiGai wrote:
>> Isn't it necessary to have an interface to initialize an LWLock structure
>> allocated on a dynamic shared memory segment?
>> Even though the LWLock structure is exposed in lwlock.h, we have no common
>> way to initiali
On 2014-01-23 23:03:40 +0900, Kohei KaiGai wrote:
> Isn't it necessary to have an interface to initialize an LWLock structure
> allocated on a dynamic shared memory segment?
> Even though the LWLock structure is exposed in lwlock.h, we have no common
> way to initialize it.
There's LWLockInitialize
Isn't it necessary to have an interface to initialize an LWLock structure
allocated on a dynamic shared memory segment?
Even though the LWLock structure is exposed in lwlock.h, we have no common
way to initialize it.
How about having the following function?
void
InitLWLock(LWLock *lock)
{
SpinL
On 2014-01-22 12:40:34 -0500, Robert Haas wrote:
> On Wed, Jan 22, 2014 at 12:11 PM, Tom Lane wrote:
> > Andres Freund writes:
> >> Shouldn't we introduce a typedef LWLock* LWLockid; or something to avoid
> >> breaking external code using lwlocks?
> >
> > +1, in fact there's probably no reason to
On Wed, Jan 22, 2014 at 12:11 PM, Tom Lane wrote:
> Andres Freund writes:
>> Shouldn't we introduce a typedef LWLock* LWLockid; or something to avoid
>> breaking external code using lwlocks?
>
> +1, in fact there's probably no reason to touch most *internal* code using
> that type name either.
I
Andres Freund writes:
> Shouldn't we introduce a typedef LWLock* LWLockid; or something to avoid
> breaking external code using lwlocks?
+1, in fact there's probably no reason to touch most *internal* code using
that type name either.
regards, tom lane
On 2014-01-10 13:11:32 -0500, Robert Haas wrote:
> OK, I've implemented this: here's what I believe to be a complete
> patch, based on the previous lwlock-pointers.patch but now handling
> LOCK_DEBUG and TRACE_LWLOCKS and dtrace and a bunch of other loose
> ends. I think this should be adequate fo
On Wed, Jan 22, 2014 at 09:32:09AM -0500, Robert Haas wrote:
> On Tue, Jan 21, 2014 at 2:58 PM, Noah Misch wrote:
> >> What do people prefer?
> >
> > I recommend performing cleanup on the control segment named in PGShmemHeader
> > just before shmdt() in PGSharedMemoryCreate(). No new ERROR or WAR
On Tue, Jan 21, 2014 at 2:58 PM, Noah Misch wrote:
>> What do people prefer?
>
> I recommend performing cleanup on the control segment named in PGShmemHeader
> just before shmdt() in PGSharedMemoryCreate(). No new ERROR or WARNING sites
> are necessary. Have dsm_postmaster_startup() continue to
(2014/01/22 1:37), Robert Haas wrote:
On Mon, Jan 20, 2014 at 11:23 PM, KaiGai Kohei wrote:
I briefly checked the patch. Most of the lines are a mechanical replacement
from LWLockId to LWLock *, and the compiler didn't complain about anything
with the -Wall -Werror option.
My concern is around LWLockTranche mechanis
On Wed, Dec 18, 2013 at 12:21:08PM -0500, Robert Haas wrote:
> On Tue, Dec 10, 2013 at 6:26 PM, Tom Lane wrote:
> > The larger point is that such a shutdown process has never in the history
> > of Postgres been successful at removing shared-memory (or semaphore)
> > resources. I do not feel a nee
On Mon, Jan 20, 2014 at 11:23 PM, KaiGai Kohei wrote:
> I briefly checked the patch. Most of the lines are a mechanical replacement
> from LWLockId to LWLock *, and the compiler didn't complain about anything
> with the -Wall -Werror option.
>
> My concern is around LWLockTranche mechanism. Isn't it too complicated
> to
(2014/01/11 3:11), Robert Haas wrote:
On Mon, Jan 6, 2014 at 5:50 PM, Robert Haas wrote:
This is only part of the solution, of course: a complete solution will
involve making the hash table key something other than the lock ID.
What I'm thinking we can do is making the lock ID consist of two
un
On Tue, Jan 7, 2014 at 6:54 AM, Andres Freund wrote:
>> Maybe it makes sense to have such a check #ifdef'ed out on most builds
>> to avoid extra overhead, but not having any check at all just because we
>> trust the review process too much doesn't strike me as the best of
>> ideas.
>
> I don't thi
On 2014-01-06 21:35:22 -0300, Alvaro Herrera wrote:
> Jim Nasby escribió:
> > On 1/6/14, 2:59 PM, Robert Haas wrote:
> > >On Mon, Jan 6, 2014 at 3:57 PM, Tom Lane wrote:
>
> > >>The point I'm making is that no such code should get past review,
> > >>whether it's got an obvious performance problem
Jim Nasby escribió:
> On 1/6/14, 2:59 PM, Robert Haas wrote:
> >On Mon, Jan 6, 2014 at 3:57 PM, Tom Lane wrote:
> >>The point I'm making is that no such code should get past review,
> >>whether it's got an obvious performance problem or not.
> >
> >Sure, I agree, but we all make mistakes. It's j
On 1/6/14, 2:59 PM, Robert Haas wrote:
On Mon, Jan 6, 2014 at 3:57 PM, Tom Lane wrote:
Robert Haas writes:
On Mon, Jan 6, 2014 at 3:32 PM, Tom Lane wrote:
I agree it'd be nicer if we had some better way than mere manual
inspection to enforce proper use of spinlocks; but this change doesn't
On Mon, Jan 6, 2014 at 9:48 AM, Stephen Frost wrote:
>> None of these ideas are a complete solution for LWLOCK_STATS. In the
>> other three cases noted above, we only need an identifier for the lock
>> "instantaneously", so that we can pass it off to the logger or dtrace
>> or whatever. But LWLO
On Mon, Jan 6, 2014 at 3:57 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Mon, Jan 6, 2014 at 3:32 PM, Tom Lane wrote:
>>> I agree it'd be nicer if we had some better way than mere manual
>>> inspection to enforce proper use of spinlocks; but this change doesn't
>>> seem to me to move the ball
Robert Haas writes:
> On Mon, Jan 6, 2014 at 3:32 PM, Tom Lane wrote:
>> I agree it'd be nicer if we had some better way than mere manual
>> inspection to enforce proper use of spinlocks; but this change doesn't
>> seem to me to move the ball downfield by any meaningful distance.
> Well, my thou
On Mon, Jan 6, 2014 at 3:40 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Mon, Jan 6, 2014 at 1:55 PM, Tom Lane wrote:
>>> OTOH, the LWLock mechanism has been stable for long enough now that
>>> we can probably suppose this struct is no more subject to churn than
>>> any other widely-known one
On Mon, Jan 6, 2014 at 3:32 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Mon, Jan 6, 2014 at 2:48 PM, Tom Lane wrote:
>>> -1 for the any_spinlock_held business (useless overhead IMO, as it doesn't
>>> have anything whatsoever to do with enforcing the actual coding rule).
>
>> Hmm. I thought
Robert Haas writes:
> On Mon, Jan 6, 2014 at 1:55 PM, Tom Lane wrote:
>> OTOH, the LWLock mechanism has been stable for long enough now that
>> we can probably suppose this struct is no more subject to churn than
>> any other widely-known one, so maybe that consideration is no longer
>> significa
Robert Haas writes:
> On Mon, Jan 6, 2014 at 2:48 PM, Tom Lane wrote:
>> -1 for the any_spinlock_held business (useless overhead IMO, as it doesn't
>> have anything whatsoever to do with enforcing the actual coding rule).
> Hmm. I thought that was a pretty well-aimed bullet myself; why do you
>
On Mon, Jan 6, 2014 at 1:55 PM, Tom Lane wrote:
> OTOH, the LWLock mechanism has been stable for long enough now that
> we can probably suppose this struct is no more subject to churn than
> any other widely-known one, so maybe that consideration is no longer
> significant.
On the whole, I'd say
On Mon, Jan 6, 2014 at 2:48 PM, Tom Lane wrote:
> Robert Haas writes:
>> Well, I took a look at this and it turns out not to be very hard, so
>> here's a patch. Currently, we allocate 3 semaphores per shared buffer
>> and a bunch of others, but the 3 per shared buffer dominates, so you
>> end up
Robert Haas writes:
> Well, I took a look at this and it turns out not to be very hard, so
> here's a patch. Currently, we allocate 3 semaphores per shared buffer
> and a bunch of others, but the 3 per shared buffer dominates, so you
> end up with ~49k spinlocks for the default of 128MB shared_buf
Andres Freund writes:
> On 2014-01-05 14:06:52 -0500, Tom Lane wrote:
>> I seem to recall that there was some good reason for keeping all the
>> LWLocks in an array, back when the facility was first designed.
>> I'm too lazy to research the point right now, but you might want to
>> go back and loo
On Mon, Jan 6, 2014 at 11:22 AM, Tom Lane wrote:
> I think we can eliminate the first of those. Semaphores for spinlocks
> were a performance disaster fifteen years ago, and the situation has
> surely only gotten worse since then. I do, however, believe that
> --disable-spinlocks has some use wh
On Mon, Jan 6, 2014 at 9:59 AM, Tom Lane wrote:
> Andres Freund writes:
>> On 2014-01-06 10:35:59 +0200, Heikki Linnakangas wrote:
>>> That assumes that you never hold more than one spinlock at a time, otherwise
>>> you can get deadlocks. I think that assumption holds currently, because
>>> acqu
On 2014-01-06 09:59:49 -0500, Tom Lane wrote:
> Andres Freund writes:
> > On 2014-01-06 10:35:59 +0200, Heikki Linnakangas wrote:
> >> That assumes that you never hold more than one spinlock at a time,
> >> otherwise
> >> you can get deadlocks. I think that assumption holds currently, because
>
Robert Haas writes:
> I guess the question boils down to: why are we keeping
> --disable-spinlocks around? If we're expecting that people might
> really use it for serious work, then it needs to remain and it needs
> to work with dynamic shared memory. If we're expecting that people
> might use
Andres Freund writes:
> On 2014-01-06 10:35:59 +0200, Heikki Linnakangas wrote:
>> That assumes that you never hold more than one spinlock at a time, otherwise
>> you can get deadlocks. I think that assumption holds currently, because
>> acquiring two spinlocks at a time would be bad on performan
* Robert Haas (robertmh...@gmail.com) wrote:
> Another idea is to include some identifying information in the lwlock.
That was my immediate reaction to this issue...
> For example, each lwlock could have a char *name in it, and we could
> print the name. In theory, this could be a big step forw
On 2014-01-06 10:35:59 +0200, Heikki Linnakangas wrote:
> On 01/05/2014 07:56 PM, Robert Haas wrote:
> >Right now, storing spinlocks in dynamic shared memory *almost* works,
> >but there are problems with --disable-spinlocks. In that
> >configuration, we use semaphores to simulate spinlocks. Ever
On 01/05/2014 07:56 PM, Robert Haas wrote:
Right now, storing spinlocks in dynamic shared memory *almost* works,
but there are problems with --disable-spinlocks. In that
configuration, we use semaphores to simulate spinlocks. Every time
someone calls SpinLockInit(), it's going to allocate a new
On Sun, Jan 5, 2014 at 2:06 PM, Tom Lane wrote:
> I seem to recall that there was some good reason for keeping all the
> LWLocks in an array, back when the facility was first designed.
> I'm too lazy to research the point right now, but you might want to
> go back and look at the archives around w
On 2014-01-05 14:06:52 -0500, Tom Lane wrote:
> Robert Haas writes:
> > For what it's worth, my vote is currently for #2. I can't think of
> > many interesting things to do with dynamic shared memory without having at
> > least spinlocks, so I don't think we'd be losing much. #1 seems
> > needlessly un
Robert Haas writes:
> For what it's worth, my vote is currently for #2. I can't think of
> many interesting things to do with dynamic shared memory without having at
> least spinlocks, so I don't think we'd be losing much. #1 seems
> needlessly unfriendly, #3 seems like a lot of work for not much, and
On 2014-01-05 12:56:05 -0500, Robert Haas wrote:
> Right now, storing spinlocks in dynamic shared memory *almost* works,
> but there are problems with --disable-spinlocks. In that
> configuration, we use semaphores to simulate spinlocks. Every time
> someone calls SpinLockInit(), it's going to al
One of the things that you might want to do with dynamic shared memory
is store a lock in it. In fact, my bet is that almost everything that
uses dynamic shared memory will want to do precisely that, because, of
course, it's dynamic *shared* memory, which means that it is
concurrently accessed by
On Tue, Dec 10, 2013 at 6:26 PM, Tom Lane wrote:
> Noah Misch writes:
>> On Tue, Dec 10, 2013 at 07:50:20PM +0200, Heikki Linnakangas wrote:
>>> Let's not add more cases like that, if we can avoid it.
>
>> Only if we can avoid it for a modicum of effort and feature compromise.
>> You're asking fo
Noah Misch writes:
> On Tue, Dec 10, 2013 at 07:50:20PM +0200, Heikki Linnakangas wrote:
>> Let's not add more cases like that, if we can avoid it.
> Only if we can avoid it for a modicum of effort and feature compromise.
> You're asking for PostgreSQL to reshape its use of persistent resources s
On 2013-12-10 18:12:53 -0500, Noah Misch wrote:
> On Tue, Dec 10, 2013 at 07:50:20PM +0200, Heikki Linnakangas wrote:
> > On 12/10/2013 07:27 PM, Noah Misch wrote:
> > >On Thu, Dec 05, 2013 at 06:12:48PM +0200, Heikki Linnakangas wrote:
> > Let's not add more cases like that, if we can avoid it.
>
On Tue, Dec 10, 2013 at 07:50:20PM +0200, Heikki Linnakangas wrote:
> On 12/10/2013 07:27 PM, Noah Misch wrote:
> >On Thu, Dec 05, 2013 at 06:12:48PM +0200, Heikki Linnakangas wrote:
> >>>On Wed, Nov 20, 2013 at 8:32 AM, Heikki Linnakangas
> >>> wrote:
> * As discussed in the "Something fishy
On 12/10/2013 07:27 PM, Noah Misch wrote:
On Thu, Dec 05, 2013 at 06:12:48PM +0200, Heikki Linnakangas wrote:
On 11/20/2013 09:58 PM, Robert Haas wrote:
On Wed, Nov 20, 2013 at 8:32 AM, Heikki Linnakangas
wrote:
* As discussed in the "Something fishy happening on frogmouth" thread, I
don't l
On Thu, Dec 05, 2013 at 06:12:48PM +0200, Heikki Linnakangas wrote:
> On 11/20/2013 09:58 PM, Robert Haas wrote:
>> On Wed, Nov 20, 2013 at 8:32 AM, Heikki Linnakangas
>> wrote:
>>> * As discussed in the "Something fishy happening on frogmouth" thread, I
>>> don't like the fact that the dynamic s
On Thu, Dec 5, 2013 at 4:06 PM, Heikki Linnakangas
wrote:
>> That's a very interesting idea. I've been thinking that we needed to
>> preserve the property that new workers could attach to the shared
>> memory segment at any time, but that might not be necessary in all
>> case. We could introduce
On 12/05/2013 09:34 PM, Robert Haas wrote:
On Thu, Dec 5, 2013 at 11:12 AM, Heikki Linnakangas
wrote:
One idea is to create the shared memory object with shm_open, and wait until
all the worker processes that need it have attached to it. Then,
shm_unlink() it, before using it for anything. That
On Thu, Dec 5, 2013 at 11:12 AM, Heikki Linnakangas
wrote:
> Hmm. Those two use cases are quite different. For message-passing, you want
> a lot of small queues, but for parallel sort, you want one huge allocation.
> I wonder if we shouldn't even try a one-size-fits-all solution.
>
> For message-p
On 11/20/2013 09:58 PM, Robert Haas wrote:
On Wed, Nov 20, 2013 at 8:32 AM, Heikki Linnakangas
wrote:
How many allocations? What size will they typically have, minimum and
maximum?
The facility is intended to be general, so the answer could vary
widely by application. The testing that I
On Sat, Nov 23, 2013 at 4:21 PM, Jeremy Harris wrote:
> Its performance shines on partially- or reverse-sorted input.
Search the archives for the work I did on timsort support a while
back. A patch was posted, that had some impressive results provided
you just considered the number of comparisons
On 20/11/13 19:58, Robert Haas wrote:
Parallel sort, and then parallel other stuff. Eventually general
parallel query.
I have recently updated https://wiki.postgresql.org/wiki/Parallel_Sort
and you may find that interesting/helpful as a statement of intent.
I've been playing with an internal
On Wed, Nov 20, 2013 at 8:32 AM, Heikki Linnakangas
wrote:
> I'm trying to catch up on all of this dynamic shared memory stuff. A bunch
> of random questions and complaints:
>
> What kind of usage are we trying to cater with the dynamic shared memory?
Parallel sort, and then parallel other stuff.
I'm trying to catch up on all of this dynamic shared memory stuff. A
bunch of random questions and complaints:
What kind of usage are we trying to cater with the dynamic shared
memory? How many allocations? What size will they typically have,
minimum and maximum? I looked at the message q
On Mon, Oct 14, 2013 at 11:11 AM, Amit Kapila wrote:
> During testing, I found one issue in the Windows implementation.
>
> During startup, when it tries to create new control segment for
> dynamic shared memory, it loops until an unused identifier is found,
> but for Windows implementation (dsm_impl_win
On Mon, Oct 14, 2013 at 5:11 PM, Robert Haas wrote:
> On Sun, Oct 13, 2013 at 3:07 AM, Amit Kapila wrote:
>> 1. Do you think we should add information about pg_dynshmem file at link:
>> http://www.postgresql.org/docs/devel/static/storage-file-layout.html
>> It contains information about all files
On Sun, Oct 13, 2013 at 3:07 AM, Amit Kapila wrote:
> 1. Do you think we should add information about pg_dynshmem file at link:
> http://www.postgresql.org/docs/devel/static/storage-file-layout.html
> It contains information about all files/folders in data directory
>
> 2.
> +/*
> + * Forget that
On Wed, Oct 9, 2013 at 1:10 AM, Robert Haas wrote:
> On Thu, Sep 26, 2013 at 9:27 AM, Noah Misch wrote:
>>> > "There's no data corruption problem if we proceed" - but there likely
>>> > has been one leading to the current state.
>>
>> +1 for making this one a PANIC, though. With startup behind u
On Thu, Oct 10, 2013 at 7:59 PM, Josh Berkus wrote:
>>> (5) Default to POSIX, and allow for SysV as a compile-time option for
>>> platforms with poor POSIX memory support.
>>
>> OK, I did #5. Let's see how that works.
>
> Andrew pointed out upthread that, since platforms are unlikely to change
>
>> (5) Default to POSIX, and allow for SysV as a compile-time option for
>> platforms with poor POSIX memory support.
>
> OK, I did #5. Let's see how that works.
Andrew pointed out upthread that, since platforms are unlikely to change
what they support dynamically, we could do this at initdb ti
On Thu, Oct 10, 2013 at 5:14 PM, Josh Berkus wrote:
>>> Doesn't #2 negate all advantages of this effort? Bringing sysv
>>> management back on the table seems like a giant step backwards -- or
>>> am I missing something?
>>
>> Not unless there's no difference between "the default" and "the only op
On 2013-10-10 12:13:20 -0400, Robert Haas wrote:
> and on smew (Debian GNU/Linux 6.0), it
> fails with "Function not implemented", which according to a forum
> post[1] I found probably indicates that /dev/shm doesn't mount a tmpfs
> on that box.
It would be nice to get confirmed what the reason fo
Robert,
>> Doesn't #2 negate all advantages of this effort? Bringing sysv
>> management back on the table seems like a giant step backwards -- or
>> am I missing something?
>
> Not unless there's no difference between "the default" and "the only option".
Well, per our earlier discussion about "
On Thu, Oct 10, 2013 at 4:00 PM, Merlin Moncure wrote:
>> (2) Default to using System V shared memory. If people want POSIX
>> shared memory, let them change the default.
>
> Doesn't #2 negate all advantages of this effort? Bringing sysv
> management back on the table seems like a giant step bac
On Thu, Oct 10, 2013 at 12:13:20PM -0400, Robert Haas wrote:
> Since, as has been previously discussed in this forum on multiple
> occasions [citation needed], the default System V shared memory limits
> are absurdly low on many systems, the dynamic shared memory patch
> defaults to POSIX shared me
On 10/10/2013 02:45 PM, Robert Haas wrote:
On Thu, Oct 10, 2013 at 2:36 PM, Peter Geoghegan wrote:
On Thu, Oct 10, 2013 at 9:13 AM, Robert Haas wrote:
(2) Default to using System V shared memory. If people want POSIX
shared memory, let them change the default.
After some consideration, I th
On Thu, Oct 10, 2013 at 11:13 AM, Robert Haas wrote:
> Since, as has been previously discussed in this forum on multiple
> occasions [citation needed], the default System V shared memory limits
> are absurdly low on many systems, the dynamic shared memory patch
> defaults to POSIX shared memory, w
* Robert Haas (robertmh...@gmail.com) wrote:
> On Thu, Oct 10, 2013 at 2:36 PM, Peter Geoghegan wrote:
> > On Thu, Oct 10, 2013 at 9:13 AM, Robert Haas wrote:
> >> (2) Default to using System V shared memory. If people want POSIX
> >> shared memory, let them change the default.
> >
> >> After so
On 10/10/2013 02:35 PM, Robert Haas wrote:
On Thu, Oct 10, 2013 at 2:21 PM, Andrew Dunstan wrote:
Other votes? Other ideas?
5) test and set it in initdb.
Are you advocating for that option, or just calling out that it's
possible? I'd say that's closely related to option #3, except at
initd
On Thu, Oct 10, 2013 at 2:36 PM, Peter Geoghegan wrote:
> On Thu, Oct 10, 2013 at 9:13 AM, Robert Haas wrote:
>> (2) Default to using System V shared memory. If people want POSIX
>> shared memory, let them change the default.
>
>> After some consideration, I think my vote is for option #2.
>
> W
On Thu, Oct 10, 2013 at 9:13 AM, Robert Haas wrote:
> (2) Default to using System V shared memory. If people want POSIX
> shared memory, let them change the default.
> After some consideration, I think my vote is for option #2.
Wouldn't that become the call of packagers? Wasn't there already so
On Thu, Oct 10, 2013 at 2:21 PM, Andrew Dunstan wrote:
>> Other votes? Other ideas?
>
> 5) test and set it in initdb.
Are you advocating for that option, or just calling out that it's
possible? I'd say that's closely related to option #3, except at
initdb time rather than run-time - and it migh