On Mon, Feb 10, 2014 at 7:17 PM, Kohei KaiGai wrote:
> Does it make another problem if dsm_detach() also releases lwlocks
> allocated on the dsm segment being released?
> Held lwlocks are tracked in the held_lwlocks[] array; its length is
> usually 100. When dsm_detach() is called
On 2014-02-08 4:52 GMT+09:00, Robert Haas wrote:
> On Tue, Jan 21, 2014 at 11:37 AM, Robert Haas wrote:
>> One idea I just had is to improve the dsm_toc module so that it can
>> optionally set up a tranche of lwlocks for you, and provide some
>> analogues of RequestAddinLWLocks and LWLockAssign for that case.
On Tue, Jan 21, 2014 at 11:37 AM, Robert Haas wrote:
> One idea I just had is to improve the dsm_toc module so that it can
> optionally set up a tranche of lwlocks for you, and provide some
> analogues of RequestAddinLWLocks and LWLockAssign for that case. That
> would probably make this quite a
On Thu, Jan 23, 2014 at 11:10 AM, Robert Haas wrote:
> On Wed, Jan 22, 2014 at 12:42 PM, Andres Freund
> wrote:
>> On 2014-01-22 12:40:34 -0500, Robert Haas wrote:
>>> On Wed, Jan 22, 2014 at 12:11 PM, Tom Lane wrote:
>>> > Andres Freund writes:
>>> >> Shouldn't we introduce a typedef LWLock* LWLockid; or something to avoid
>>> >> breaking external code using lwlocks?
On Wed, Jan 22, 2014 at 12:42 PM, Andres Freund wrote:
> On 2014-01-22 12:40:34 -0500, Robert Haas wrote:
>> On Wed, Jan 22, 2014 at 12:11 PM, Tom Lane wrote:
>> > Andres Freund writes:
>> >> Shouldn't we introduce a typedef LWLock* LWLockid; or something to avoid
>> >> breaking external code using lwlocks?
On 2014/1/23, Andres Freund wrote:
> On 2014-01-23 23:03:40 +0900, Kohei KaiGai wrote:
>> Isn't it necessary to have an interface to initialize an LWLock structure
>> allocated on a dynamic shared memory segment?
>> Even though the LWLock structure is exposed in lwlock.h, we have no common
>> way to initialize it.
On 2014-01-23 23:03:40 +0900, Kohei KaiGai wrote:
> Isn't it necessary to have an interface to initialize an LWLock structure
> allocated on a dynamic shared memory segment?
> Even though the LWLock structure is exposed in lwlock.h, we have no common
> way to initialize it.
There's LWLockInitialize
Isn't it necessary to have an interface to initialize an LWLock structure
allocated on a dynamic shared memory segment?
Even though the LWLock structure is exposed in lwlock.h, we have no common
way to initialize it.
How about having the following function?
void
InitLWLock(LWLock *lock)
{
	SpinLockInit(&lock->mutex);
	/* ... */
}
On 2014-01-22 12:40:34 -0500, Robert Haas wrote:
> On Wed, Jan 22, 2014 at 12:11 PM, Tom Lane wrote:
> > Andres Freund writes:
> >> Shouldn't we introduce a typedef LWLock* LWLockid; or something to avoid
> >> breaking external code using lwlocks?
> >
> > +1, in fact there's probably no reason to touch most *internal* code using
> > that type name either.
On Wed, Jan 22, 2014 at 12:11 PM, Tom Lane wrote:
> Andres Freund writes:
>> Shouldn't we introduce a typedef LWLock* LWLockid; or something to avoid
>> breaking external code using lwlocks?
>
> +1, in fact there's probably no reason to touch most *internal* code using
> that type name either.
I
Andres Freund writes:
> Shouldn't we introduce a typedef LWLock* LWLockid; or something to avoid
> breaking external code using lwlocks?
+1, in fact there's probably no reason to touch most *internal* code using
that type name either.
regards, tom lane
On 2014-01-10 13:11:32 -0500, Robert Haas wrote:
> OK, I've implemented this: here's what I believe to be a complete
> patch, based on the previous lwlock-pointers.patch but now handling
> LOCK_DEBUG and TRACE_LWLOCKS and dtrace and a bunch of other loose
> ends. I think this should be adequate fo
(2014/01/22 1:37), Robert Haas wrote:
On Mon, Jan 20, 2014 at 11:23 PM, KaiGai Kohei wrote:
I briefly checked the patch. Most of the lines are mechanical replacements
of LWLockId with LWLock *, and the compiler didn't complain about anything
with the -Wall -Werror options.
My concern is around the LWLockTranche mechanism.
On Mon, Jan 20, 2014 at 11:23 PM, KaiGai Kohei wrote:
> I briefly checked the patch. Most of the lines are mechanical replacements
> of LWLockId with LWLock *, and the compiler didn't complain about anything
> with the -Wall -Werror options.
>
> My concern is around the LWLockTranche mechanism. Isn't it too complicated
> to
(2014/01/11 3:11), Robert Haas wrote:
On Mon, Jan 6, 2014 at 5:50 PM, Robert Haas wrote:
This is only part of the solution, of course: a complete solution will
involve making the hash table key something other than the lock ID.
What I'm thinking we can do is making the lock ID consist of two
un
On Tue, Jan 7, 2014 at 6:54 AM, Andres Freund wrote:
>> Maybe it makes sense to have such a check #ifdef'ed out on most builds
>> to avoid extra overhead, but not having any check at all just because we
>> trust the review process too much doesn't strike me as the best of
>> ideas.
>
> I don't thi
On 2014-01-06 21:35:22 -0300, Alvaro Herrera wrote:
> Jim Nasby wrote:
> > On 1/6/14, 2:59 PM, Robert Haas wrote:
> > >On Mon, Jan 6, 2014 at 3:57 PM, Tom Lane wrote:
>
> > >>The point I'm making is that no such code should get past review,
> > >>whether it's got an obvious performance problem or not.
Jim Nasby wrote:
> On 1/6/14, 2:59 PM, Robert Haas wrote:
> >On Mon, Jan 6, 2014 at 3:57 PM, Tom Lane wrote:
> >>The point I'm making is that no such code should get past review,
> >>whether it's got an obvious performance problem or not.
> >
> >Sure, I agree, but we all make mistakes. It's j
On 1/6/14, 2:59 PM, Robert Haas wrote:
On Mon, Jan 6, 2014 at 3:57 PM, Tom Lane wrote:
Robert Haas writes:
On Mon, Jan 6, 2014 at 3:32 PM, Tom Lane wrote:
I agree it'd be nicer if we had some better way than mere manual
inspection to enforce proper use of spinlocks; but this change doesn't
seem to me to move the ball downfield by any meaningful distance.
On Mon, Jan 6, 2014 at 9:48 AM, Stephen Frost wrote:
>> None of these ideas are a complete solution for LWLOCK_STATS. In the
>> other three cases noted above, we only need an identifier for the lock
>> "instantaneously", so that we can pass it off to the logger or dtrace
>> or whatever. But LWLOCK_STATS
On Mon, Jan 6, 2014 at 3:57 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Mon, Jan 6, 2014 at 3:32 PM, Tom Lane wrote:
>>> I agree it'd be nicer if we had some better way than mere manual
>>> inspection to enforce proper use of spinlocks; but this change doesn't
>>> seem to me to move the ball downfield by any meaningful distance.
Robert Haas writes:
> On Mon, Jan 6, 2014 at 3:32 PM, Tom Lane wrote:
>> I agree it'd be nicer if we had some better way than mere manual
>> inspection to enforce proper use of spinlocks; but this change doesn't
>> seem to me to move the ball downfield by any meaningful distance.
> Well, my thou
On Mon, Jan 6, 2014 at 3:40 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Mon, Jan 6, 2014 at 1:55 PM, Tom Lane wrote:
>>> OTOH, the LWLock mechanism has been stable for long enough now that
>>> we can probably suppose this struct is no more subject to churn than
>>> any other widely-known one, so maybe that consideration is no longer
>>> significant.
On Mon, Jan 6, 2014 at 3:32 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Mon, Jan 6, 2014 at 2:48 PM, Tom Lane wrote:
>>> -1 for the any_spinlock_held business (useless overhead IMO, as it doesn't
>>> have anything whatsoever to do with enforcing the actual coding rule).
>
>> Hmm. I thought that was a pretty well-aimed bullet myself; why do you
Robert Haas writes:
> On Mon, Jan 6, 2014 at 1:55 PM, Tom Lane wrote:
>> OTOH, the LWLock mechanism has been stable for long enough now that
>> we can probably suppose this struct is no more subject to churn than
>> any other widely-known one, so maybe that consideration is no longer
>> significant.
Robert Haas writes:
> On Mon, Jan 6, 2014 at 2:48 PM, Tom Lane wrote:
>> -1 for the any_spinlock_held business (useless overhead IMO, as it doesn't
>> have anything whatsoever to do with enforcing the actual coding rule).
> Hmm. I thought that was a pretty well-aimed bullet myself; why do you
>
On Mon, Jan 6, 2014 at 1:55 PM, Tom Lane wrote:
> OTOH, the LWLock mechanism has been stable for long enough now that
> we can probably suppose this struct is no more subject to churn than
> any other widely-known one, so maybe that consideration is no longer
> significant.
On the whole, I'd say
On Mon, Jan 6, 2014 at 2:48 PM, Tom Lane wrote:
> Robert Haas writes:
>> Well, I took a look at this and it turns out not to be very hard, so
>> here's a patch. Currently, we allocate 3 semaphores per shared buffer
>> and a bunch of others, but the 3 per shared buffer dominates, so you
>> end up with ~49k spinlocks for the default of 128MB shared_buffers.
Robert Haas writes:
> Well, I took a look at this and it turns out not to be very hard, so
> here's a patch. Currently, we allocate 3 semaphores per shared buffer
> and a bunch of others, but the 3 per shared buffer dominates, so you
> end up with ~49k spinlocks for the default of 128MB shared_buffers.
Andres Freund writes:
> On 2014-01-05 14:06:52 -0500, Tom Lane wrote:
>> I seem to recall that there was some good reason for keeping all the
>> LWLocks in an array, back when the facility was first designed.
>> I'm too lazy to research the point right now, but you might want to
>> go back and look
On Mon, Jan 6, 2014 at 11:22 AM, Tom Lane wrote:
> I think we can eliminate the first of those. Semaphores for spinlocks
> were a performance disaster fifteen years ago, and the situation has
> surely only gotten worse since then. I do, however, believe that
> --disable-spinlocks has some use wh
On Mon, Jan 6, 2014 at 9:59 AM, Tom Lane wrote:
> Andres Freund writes:
>> On 2014-01-06 10:35:59 +0200, Heikki Linnakangas wrote:
>>> That assumes that you never hold more than one spinlock at a time, otherwise
>>> you can get deadlocks. I think that assumption holds currently, because
>>> acquiring two spinlocks at a time would be bad on performance
On 2014-01-06 09:59:49 -0500, Tom Lane wrote:
> Andres Freund writes:
> > On 2014-01-06 10:35:59 +0200, Heikki Linnakangas wrote:
> >> That assumes that you never hold more than one spinlock at a time, otherwise
> >> you can get deadlocks. I think that assumption holds currently, because
>
Robert Haas writes:
> I guess the question boils down to: why are we keeping
> --disable-spinlocks around? If we're expecting that people might
> really use it for serious work, then it needs to remain and it needs
> to work with dynamic shared memory. If we're expecting that people
> might use
Andres Freund writes:
> On 2014-01-06 10:35:59 +0200, Heikki Linnakangas wrote:
>> That assumes that you never hold more than one spinlock at a time, otherwise
>> you can get deadlocks. I think that assumption holds currently, because
>> acquiring two spinlocks at a time would be bad on performance
* Robert Haas (robertmh...@gmail.com) wrote:
> Another idea is to include some identifying information in the lwlock.
That was my immediate reaction to this issue...
> For example, each lwlock could have a char *name in it, and we could
> print the name. In theory, this could be a big step forward
On 2014-01-06 10:35:59 +0200, Heikki Linnakangas wrote:
> On 01/05/2014 07:56 PM, Robert Haas wrote:
> >Right now, storing spinlocks in dynamic shared memory *almost* works,
> >but there are problems with --disable-spinlocks. In that
> >configuration, we use semaphores to simulate spinlocks. Every time
> >someone calls SpinLockInit(), it's going to allocate a new
On 01/05/2014 07:56 PM, Robert Haas wrote:
Right now, storing spinlocks in dynamic shared memory *almost* works,
but there are problems with --disable-spinlocks. In that
configuration, we use semaphores to simulate spinlocks. Every time
someone calls SpinLockInit(), it's going to allocate a new
On Sun, Jan 5, 2014 at 2:06 PM, Tom Lane wrote:
> I seem to recall that there was some good reason for keeping all the
> LWLocks in an array, back when the facility was first designed.
> I'm too lazy to research the point right now, but you might want to
> go back and look at the archives around w
On 2014-01-05 14:06:52 -0500, Tom Lane wrote:
> Robert Haas writes:
> > For what it's worth, my vote is currently for #2. I can't think of
> > many interesting things to do with dynamic shared memory without having at
> > least spinlocks, so I don't think we'd be losing much. #1 seems
> > needlessly unfriendly,
Robert Haas writes:
> For what it's worth, my vote is currently for #2. I can't think of
> many interesting things to do with dynamic shared memory without having at
> least spinlocks, so I don't think we'd be losing much. #1 seems
> needlessly unfriendly, #3 seems like a lot of work for not much, and
On 2014-01-05 12:56:05 -0500, Robert Haas wrote:
> Right now, storing spinlocks in dynamic shared memory *almost* works,
> but there are problems with --disable-spinlocks. In that
> configuration, we use semaphores to simulate spinlocks. Every time
> someone calls SpinLockInit(), it's going to allocate a new
One of the things that you might want to do with dynamic shared memory
is store a lock in it. In fact, my bet is that almost everything that
uses dynamic shared memory will want to do precisely that, because, of
course, it's dynamic *shared* memory, which means that it is
concurrently accessed by