On Mon, Feb 10, 2014 at 7:17 PM, Kohei KaiGai kai...@kaigai.gr.jp wrote:
Does it cause another problem if dsm_detach() also releases LWLocks
allocated on the DSM segment being released?
LWLocks being held are tracked in the held_lwlocks[] array; its length is
usually 100. In case when
2014-02-08 4:52 GMT+09:00 Robert Haas robertmh...@gmail.com:
On Tue, Jan 21, 2014 at 11:37 AM, Robert Haas robertmh...@gmail.com wrote:
One idea I just had is to improve the dsm_toc module so that it can
optionally set up a tranche of lwlocks for you, and provide some
analogues of
On Tue, Jan 21, 2014 at 11:37 AM, Robert Haas robertmh...@gmail.com wrote:
One idea I just had is to improve the dsm_toc module so that it can
optionally set up a tranche of lwlocks for you, and provide some
analogues of RequestAddinLWLocks and LWLockAssign for that case. That
would probably
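The dsm_toc idea above — set up a tranche of LWLocks inside a shared segment and hand them out one at a time — can be sketched roughly as follows. All names here (LWLockTrancheSpace, TrancheLWLockAssign) and the struct body are invented for illustration; they are not PostgreSQL's actual API.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the real LWLock struct from lwlock.h. */
typedef struct LWLock { int state; } LWLock;

/* Hypothetical bookkeeping for a tranche carved out of a DSM segment. */
typedef struct LWLockTrancheSpace {
    LWLock *locks;      /* array of locks inside the shared segment */
    int     nlocks;     /* total locks in the tranche */
    int     next;       /* index of the next unassigned lock */
} LWLockTrancheSpace;

/* Analogue of LWLockAssign() for a dynamically created tranche:
 * return the next unassigned lock, or NULL when the pool is spent. */
static LWLock *
TrancheLWLockAssign(LWLockTrancheSpace *sp)
{
    if (sp->next >= sp->nlocks)
        return NULL;
    return &sp->locks[sp->next++];
}
```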
On Thu, Jan 23, 2014 at 11:10 AM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Jan 22, 2014 at 12:42 PM, Andres Freund and...@2ndquadrant.com
wrote:
On 2014-01-22 12:40:34 -0500, Robert Haas wrote:
On Wed, Jan 22, 2014 at 12:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Andres Freund
Isn't it necessary to have an interface to initialize an LWLock structure
allocated on a dynamic shared memory segment?
Even though the LWLock structure is exposed in lwlock.h, we have no common
way to initialize it.
How about having the following function?
void
InitLWLock(LWLock *lock)
{
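A minimal self-contained sketch of such an initializer, using a simplified stand-in struct rather than PostgreSQL's real LWLock (which has more fields — the function the tree actually provides for this purpose is LWLockInitialize()):

```c
#include <assert.h>

/* Simplified stand-in for the LWLock struct exposed in lwlock.h. */
typedef struct LWLock {
    int exclusive;      /* 1 while held exclusively, else 0 */
    int shared;         /* number of shared holders */
} LWLock;

/* Reset a lock, wherever it was allocated, to its unheld state. */
static void
InitLWLock(LWLock *lock)
{
    lock->exclusive = 0;
    lock->shared = 0;
}
```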
On 2014-01-23 23:03:40 +0900, Kohei KaiGai wrote:
Isn't it necessary to have an interface to initialize an LWLock structure
allocated on a dynamic shared memory segment?
Even though the LWLock structure is exposed in lwlock.h, we have no common
way to initialize it.
There's LWLockInitialize()
2014/1/23 Andres Freund and...@2ndquadrant.com:
On 2014-01-23 23:03:40 +0900, Kohei KaiGai wrote:
Isn't it necessary to have an interface to initialize an LWLock structure
allocated on a dynamic shared memory segment?
Even though the LWLock structure is exposed in lwlock.h, we have no common
On Wed, Jan 22, 2014 at 12:42 PM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-01-22 12:40:34 -0500, Robert Haas wrote:
On Wed, Jan 22, 2014 at 12:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Andres Freund and...@2ndquadrant.com writes:
Shouldn't we introduce a typedef LWLock* LWLockid;
On 2014-01-10 13:11:32 -0500, Robert Haas wrote:
OK, I've implemented this: here's what I believe to be a complete
patch, based on the previous lwlock-pointers.patch but now handling
LOCK_DEBUG and TRACE_LWLOCKS and dtrace and a bunch of other loose
ends. I think this should be adequate for
Andres Freund and...@2ndquadrant.com writes:
Shouldn't we introduce a typedef LWLock* LWLockid; or something to avoid
breaking external code using lwlocks?
+1, in fact there's probably no reason to touch most *internal* code using
that type name either.
regards, tom
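The compatibility typedef being proposed can be sketched like this: once LWLockId (formerly an array index) is replaced by pointers, keeping the old name as an alias for LWLock * lets external code that still says LWLockId compile unchanged. The struct body and function below are stand-ins for illustration.

```c
#include <assert.h>

/* Stand-in for the real LWLock struct. */
typedef struct LWLock { int state; } LWLock;

/* Old spelling, new meaning: the legacy index type becomes a pointer. */
typedef LWLock *LWLockId;

static LWLock example_lock;

/* External code written against the old type name keeps working. */
static LWLockId
get_example_lock(void)
{
    return &example_lock;
}
```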
On Wed, Jan 22, 2014 at 12:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Andres Freund and...@2ndquadrant.com writes:
Shouldn't we introduce a typedef LWLock* LWLockid; or something to avoid
breaking external code using lwlocks?
+1, in fact there's probably no reason to touch most *internal* code
On 2014-01-22 12:40:34 -0500, Robert Haas wrote:
On Wed, Jan 22, 2014 at 12:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Andres Freund and...@2ndquadrant.com writes:
Shouldn't we introduce a typedef LWLock* LWLockid; or something to avoid
breaking external code using lwlocks?
+1, in fact
On Mon, Jan 20, 2014 at 11:23 PM, KaiGai Kohei kai...@ak.jp.nec.com wrote:
I briefly checked the patch. Most of the lines are mechanical replacements
from LWLockId to LWLock *, and the compiler didn't complain about anything
with the -Wall -Werror options.
My concern is around the LWLockTranche mechanism. Isn't it too
(2014/01/22 1:37), Robert Haas wrote:
On Mon, Jan 20, 2014 at 11:23 PM, KaiGai Kohei kai...@ak.jp.nec.com wrote:
I briefly checked the patch. Most of the lines are mechanical replacements
from LWLockId to LWLock *, and the compiler didn't complain about anything
with the -Wall -Werror options.
My concern is around
(2014/01/11 3:11), Robert Haas wrote:
On Mon, Jan 6, 2014 at 5:50 PM, Robert Haas robertmh...@gmail.com wrote:
This is only part of the solution, of course: a complete solution will
involve making the hash table key something other than the lock ID.
What I'm thinking we can do is making the
On 2014-01-06 21:35:22 -0300, Alvaro Herrera wrote:
Jim Nasby wrote:
On 1/6/14, 2:59 PM, Robert Haas wrote:
On Mon, Jan 6, 2014 at 3:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
The point I'm making is that no such code should get past review,
whether it's got an obvious performance
On Tue, Jan 7, 2014 at 6:54 AM, Andres Freund and...@2ndquadrant.com wrote:
Maybe it makes sense to have such a check #ifdef'ed out on most builds
to avoid extra overhead, but not having any check at all just because we
trust the review process too much doesn't strike me as the best of
ideas.
On 01/05/2014 07:56 PM, Robert Haas wrote:
Right now, storing spinlocks in dynamic shared memory *almost* works,
but there are problems with --disable-spinlocks. In that
configuration, we use semaphores to simulate spinlocks. Every time
someone calls SpinLockInit(), it's going to allocate a
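A toy model of the --disable-spinlocks fallback described above: each "spinlock" is backed by a semaphore initialized to 1, so acquire and release map onto sem_wait and sem_post. This sketch uses plain POSIX semaphores; the real code draws from a fixed pool of PGSemaphores reserved at startup, which is exactly what breaks for spinlocks created later inside dynamic shared memory.

```c
#include <assert.h>
#include <semaphore.h>

/* Under this fallback the spinlock type is just a semaphore. */
typedef sem_t slock_t;

/* Initial count 1 means "unlocked". */
static void SpinLockInit(slock_t *lock)    { sem_init(lock, 0, 1); }
/* Acquire: decrement, blocking if another holder has it. */
static void SpinLockAcquire(slock_t *lock) { sem_wait(lock); }
/* Release: increment, waking any waiter. */
static void SpinLockRelease(slock_t *lock) { sem_post(lock); }
```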
On 2014-01-06 10:35:59 +0200, Heikki Linnakangas wrote:
On 01/05/2014 07:56 PM, Robert Haas wrote:
Right now, storing spinlocks in dynamic shared memory *almost* works,
but there are problems with --disable-spinlocks. In that
configuration, we use semaphores to simulate spinlocks. Every time
* Robert Haas (robertmh...@gmail.com) wrote:
Another idea is to include some identifying information in the lwlock.
That was my immediate reaction to this issue...
For example, each lwlock could have a char *name in it, and we could
print the name. In theory, this could be a big step
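The "identifying information in the lwlock" idea might look like the sketch below: carry a name in each lock so error messages and LWLOCK_STATS output can print something meaningful instead of a bare address or index. The struct layout and the initializer are illustrative only, not PostgreSQL's actual code.

```c
#include <assert.h>
#include <string.h>

/* Illustrative lock struct carrying a printable identifier. */
typedef struct LWLock {
    const char *name;   /* printed by debugging and stats code */
    int         state;
} LWLock;

/* Hypothetical initializer that records the lock's name. */
static void
NamedLWLockInit(LWLock *lock, const char *name)
{
    lock->name = name;
    lock->state = 0;
}
```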
Andres Freund and...@2ndquadrant.com writes:
On 2014-01-06 10:35:59 +0200, Heikki Linnakangas wrote:
That assumes that you never hold more than one spinlock at a time, otherwise
you can get deadlocks. I think that assumption holds currently, because
acquiring two spinlocks at a time would be
Robert Haas robertmh...@gmail.com writes:
I guess the question boils down to: why are we keeping
--disable-spinlocks around? If we're expecting that people might
really use it for serious work, then it needs to remain and it needs
to work with dynamic shared memory. If we're expecting that
On 2014-01-06 09:59:49 -0500, Tom Lane wrote:
Andres Freund and...@2ndquadrant.com writes:
On 2014-01-06 10:35:59 +0200, Heikki Linnakangas wrote:
That assumes that you never hold more than one spinlock at a time,
otherwise
you can get deadlocks. I think that assumption holds currently,
On Mon, Jan 6, 2014 at 9:59 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Andres Freund and...@2ndquadrant.com writes:
On 2014-01-06 10:35:59 +0200, Heikki Linnakangas wrote:
That assumes that you never hold more than one spinlock at a time, otherwise
you can get deadlocks. I think that assumption
On Mon, Jan 6, 2014 at 11:22 AM, Tom Lane t...@sss.pgh.pa.us wrote:
I think we can eliminate the first of those. Semaphores for spinlocks
were a performance disaster fifteen years ago, and the situation has
surely only gotten worse since then. I do, however, believe that
--disable-spinlocks
Andres Freund and...@2ndquadrant.com writes:
On 2014-01-05 14:06:52 -0500, Tom Lane wrote:
I seem to recall that there was some good reason for keeping all the
LWLocks in an array, back when the facility was first designed.
I'm too lazy to research the point right now, but you might want to
Robert Haas robertmh...@gmail.com writes:
Well, I took a look at this and it turns out not to be very hard, so
here's a patch. Currently, we allocate 3 semaphores per shared buffer
and a bunch of others, but the 3 per shared buffer dominates, so you
end up with ~49k spinlocks for the default
On Mon, Jan 6, 2014 at 2:48 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Well, I took a look at this and it turns out not to be very hard, so
here's a patch. Currently, we allocate 3 semaphores per shared buffer
and a bunch of others, but the 3 per shared
On Mon, Jan 6, 2014 at 1:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
OTOH, the LWLock mechanism has been stable for long enough now that
we can probably suppose this struct is no more subject to churn than
any other widely-known one, so maybe that consideration is no longer
significant.
On the
Robert Haas robertmh...@gmail.com writes:
On Mon, Jan 6, 2014 at 2:48 PM, Tom Lane t...@sss.pgh.pa.us wrote:
-1 for the any_spinlock_held business (useless overhead IMO, as it doesn't
have anything whatsoever to do with enforcing the actual coding rule).
Hmm. I thought that was a pretty
Robert Haas robertmh...@gmail.com writes:
On Mon, Jan 6, 2014 at 1:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
OTOH, the LWLock mechanism has been stable for long enough now that
we can probably suppose this struct is no more subject to churn than
any other widely-known one, so maybe that
On Mon, Jan 6, 2014 at 3:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Mon, Jan 6, 2014 at 2:48 PM, Tom Lane t...@sss.pgh.pa.us wrote:
-1 for the any_spinlock_held business (useless overhead IMO, as it doesn't
have anything whatsoever to do with
On Mon, Jan 6, 2014 at 3:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Mon, Jan 6, 2014 at 1:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
OTOH, the LWLock mechanism has been stable for long enough now that
we can probably suppose this struct is no more
Robert Haas robertmh...@gmail.com writes:
On Mon, Jan 6, 2014 at 3:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I agree it'd be nicer if we had some better way than mere manual
inspection to enforce proper use of spinlocks; but this change doesn't
seem to me to move the ball downfield by any
On Mon, Jan 6, 2014 at 3:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Mon, Jan 6, 2014 at 3:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I agree it'd be nicer if we had some better way than mere manual
inspection to enforce proper use of spinlocks; but
On Mon, Jan 6, 2014 at 9:48 AM, Stephen Frost sfr...@snowman.net wrote:
None of these ideas are a complete solution for LWLOCK_STATS. In the
other three cases noted above, we only need an identifier for the lock
instantaneously, so that we can pass it off to the logger or dtrace
or whatever.
On 1/6/14, 2:59 PM, Robert Haas wrote:
On Mon, Jan 6, 2014 at 3:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Mon, Jan 6, 2014 at 3:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I agree it'd be nicer if we had some better way than mere manual
inspection to
Jim Nasby wrote:
On 1/6/14, 2:59 PM, Robert Haas wrote:
On Mon, Jan 6, 2014 at 3:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
The point I'm making is that no such code should get past review,
whether it's got an obvious performance problem or not.
Sure, I agree, but we all make mistakes.
On 2014-01-05 12:56:05 -0500, Robert Haas wrote:
Right now, storing spinlocks in dynamic shared memory *almost* works,
but there are problems with --disable-spinlocks. In that
configuration, we use semaphores to simulate spinlocks. Every time
someone calls SpinLockInit(), it's going to
Robert Haas robertmh...@gmail.com writes:
For what it's worth, my vote is currently for #2. I can't think of
many interesting things to do with dynamic shared memory without having at
least spinlocks, so I don't think we'd be losing much. #1 seems
needlessly unfriendly, #3 seems like a lot of work
On 2014-01-05 14:06:52 -0500, Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
For what it's worth, my vote is currently for #2. I can't think of
many interesting things to do with dynamic shared memory without having at
least spinlocks, so I don't think we'd be losing much. #1 seems
On Sun, Jan 5, 2014 at 2:06 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I seem to recall that there was some good reason for keeping all the
LWLocks in an array, back when the facility was first designed.
I'm too lazy to research the point right now, but you might want to
go back and look at the
42 matches