On Mon, Dec 21, 2015 at 4:50 PM, Jesper Pedersen wrote:
> On 12/18/2015 01:16 PM, Robert Haas wrote:
>>
>> Is this just for informational purposes, or is this something you are
>> looking to have committed? I originally thought the former, but now
>> I'm wondering if
On 12/18/2015 01:16 PM, Robert Haas wrote:
Is this just for informational purposes, or is this something you are
looking to have committed? I originally thought the former, but now
I'm wondering if I misinterpreted your intent. I have a hard time
getting excited about committing something that
On Wed, Dec 16, 2015 at 5:02 AM, Jesper Pedersen wrote:
> On 09/16/2015 12:44 PM, Jesper Pedersen wrote:
>>
>> So, I think there is some value in keeping this information separate.
>>
>
> Just a rebased patch after the excellent LWLockTranche work.
>
> And a new sample
On 09/15/2015 03:51 PM, Jesper Pedersen wrote:
It would be nice to get a better sense of how *long* we block on various
locks. It's hard to tell whether some other lock might have fewer
blocking events but for a much longer average duration.
I did a run with the attached patch,
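For anyone trying the same thing, here is a minimal sketch of how such a
block-duration counter could be wired into LWLockAcquire under LWLOCK_STATS,
using the instr_time macros from portability/instr_time.h. The block_time
field and the exact placement around the wait loop are assumptions, not the
actual patch:

#ifdef LWLOCK_STATS
	instr_time	wait_start;
	instr_time	wait_end;

	INSTR_TIME_SET_CURRENT(wait_start);		/* stamp taken just before sleeping */
#endif

	/* ... existing semaphore wait loop in LWLockAcquire ... */

#ifdef LWLOCK_STATS
	INSTR_TIME_SET_CURRENT(wait_end);
	INSTR_TIME_SUBTRACT(wait_end, wait_start);	/* wait_end -= wait_start */
	lwstats->block_count++;
	lwstats->block_time += INSTR_TIME_GET_MICROSEC(wait_end);	/* assumed uint64 field */
#endif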
On 09/16/2015 10:13 AM, Jesper Pedersen wrote:
On 09/15/2015 03:51 PM, Jesper Pedersen wrote:
It would be nice to get a better sense of how *long* we block on various
locks. It's hard to tell whether some other lock might have fewer
blocking events but for a much longer average duration.
On 09/16/2015 10:25 AM, Jesper Pedersen wrote:
Likely from LWLOCK_STATS' own lwlock.c::print_lwlock_stats, which would
make sense.
Version 3 attached, which ignores entries from MainLWLockArray[0].
Best regards,
Jesper
*** /tmp/NTwtmh_lwlock.c 2015-09-16 10:34:02.955957192 -0400
---
Hi,
On 09/16/2015 12:26 PM, Andres Freund wrote:
On 2015-09-16 10:37:43 -0400, Jesper Pedersen wrote:
#ifdef LWLOCK_STATS
lwstats->spin_delay_count += SpinLockAcquire(&lock->mutex);
+
+ /*
+  * We scan the list of waiters from the back in order to find
+  * out how many
Hi,
On 2015-09-16 10:37:43 -0400, Jesper Pedersen wrote:
> #ifdef LWLOCK_STATS
> lwstats->spin_delay_count += SpinLockAcquire(&lock->mutex);
> +
> + /*
> + * We scan the list of waiters from the back in order to find
> + * out how many of the same lock type are waiting for a
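A hedged sketch of what such a backward scan over the wait queue could look
like in 9.5-era lwlock.c, assuming it sits in LWLockQueueSelf where lock,
mode, and lwstats are already in scope; the max_queue_depth field is an
assumption, the rest follows the existing dlist-based wait queue:

#ifdef LWLOCK_STATS
	{
		dlist_iter	iter;
		int			queued_same_mode = 0;

		/*
		 * Scan the waiters from the back and count how many already-queued
		 * backends want the same lock mode as this acquire.
		 */
		dlist_reverse_foreach(iter, &lock->waiters)
		{
			PGPROC	   *waiter = dlist_container(PGPROC, lwWaitLink, iter.cur);

			if (waiter->lwWaitMode != mode)
				break;
			queued_same_mode++;
		}

		/* assumed field: remember the deepest same-mode queue seen so far */
		if (queued_same_mode > lwstats->max_queue_depth)
			lwstats->max_queue_depth = queued_same_mode;
	}
#endif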
On Wed, Sep 16, 2015 at 10:13 AM, Jesper Pedersen wrote:
> Doing block_time / block_count basically only shows "main 0" -- it's called
> "unassigned:0"; it also shows up in the max exclusive report. Where it is
> coming from is another question, since it shouldn't be in
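For context, block_time / block_count is just the average sleep per blocking
event. A sketch of how print_lwlock_stats could report it, assuming the
block_time field from the earlier sketch and a name variable holding the
lock's name:

	if (lwstats->block_count > 0)
		fprintf(stderr, "PID %d lwlock %s: avg block %.1f us over %d events\n",
				MyProcPid, name,
				(double) lwstats->block_time / lwstats->block_count,
				lwstats->block_count);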
On Wed, Sep 16, 2015 at 1:21 AM, Jesper Pedersen wrote:
>
> On 09/15/2015 03:42 PM, Robert Haas wrote:
>>
>> I haven't really, just the email. But it seems like a neat concept.
>> So if I understand this correctly:
>>
>> 74.05% of spin delays are attributable to
On Tue, Sep 15, 2015 at 10:27 AM, Jesper Pedersen wrote:
> Hi,
>
> I have been using the attached patch to look at how the LWLocks relate to
> each other in various types of runs.
>
> The patch adds the following fields to a LWLOCK_STATS build:
>
> sh_acquire_max
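For readers without the patch handy, the existing lwlock_stats struct in
lwlock.c looks roughly like the sketch below; the fields after the comment
are assumptions about what the patch adds (only sh_acquire_max is named
above):

typedef struct lwlock_stats
{
	lwlock_stats_key key;
	int			sh_acquire_count;
	int			ex_acquire_count;
	int			block_count;
	int			dequeue_self_count;
	int			spin_delay_count;
	/* assumed additions, following the sh_acquire_max naming above */
	int			sh_acquire_max;		/* deepest shared wait queue observed */
	int			ex_acquire_max;		/* deepest exclusive wait queue observed */
	uint64		block_time;			/* total time spent blocked, in usec */
} lwlock_stats;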
On Tue, Sep 15, 2015 at 3:30 PM, Jesper Pedersen wrote:
> X-axis is sort of "up in the air" with flame graphs -- similar call stacks
> are grouped together, and here the grouping is by queue size.
>
> Y-axis is the lock queue size -- e.g. CLogControlLock is "max'ed" out, since
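One plausible way to feed flamegraph.pl from such stats is to emit one folded
line per lock and observed queue depth, with one frame per queue position so
the flame height tracks the depth. A sketch; the queue_depth_count histogram,
max_queue_depth, and the name variable are assumptions:

	int			depth;
	int			i;

	for (depth = 1; depth <= lwstats->max_queue_depth; depth++)
	{
		if (lwstats->queue_depth_count[depth] == 0)
			continue;

		/* folded format: "frame;frame;... count", one frame per queue slot */
		fprintf(stderr, "%s", name);
		for (i = 1; i <= depth; i++)
			fprintf(stderr, ";%d", i);
		fprintf(stderr, " %d\n", lwstats->queue_depth_count[depth]);
	}

Piping that output through flamegraph.pl would then produce graphs of the
kind attached upthread.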
On 09/15/2015 03:42 PM, Robert Haas wrote:
I haven't really, just the email. But it seems like a neat concept.
So if I understand this correctly:
74.05% of spin delays are attributable to CLogControlLock, 20.01% to
ProcArrayLock, and 3.39% to XidGenLock. Incredibly, the queue length
reaches
On 09/15/2015 03:11 PM, Robert Haas wrote:
If there is interest I'll add the patch to the next CommitFest.
Thanks for considering it, and any feedback is most welcome.
Seems neat, but I can't understand how to read the flame graphs.
X-axis is sort of "up in the air" with flame graphs --