On Sun, Feb 28, 2010 at 1:06 AM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Robert Haas wrote:
This might be my fault, so I apologize for killing your enthusiasm. I
think when I get wrapped up in a CommitFest (and especially during the
second half) I get wound up in determining
Greg Smith wrote:
Bruce Momjian wrote:
What happened to this patch?
Returned with feedback in October after receiving a lot of review, no
updated version submitted since then:
https://commitfest.postgresql.org/action/patch_view?id=98
Hmm - I would say a bit of review rather than a
Mark Kirkwood mark.kirkw...@catalyst.net.nz writes:
I'd also like to take the opportunity to express a little frustration
about the commitfest business - really all I wanted was the patch
*reviewed* as WIP - it seemed that in order to do that I needed to enter
it into the various
Mark Kirkwood wrote:
Greg Smith wrote:
Returned with feedback in October after receiving a lot of review, no
updated version submitted since then:
https://commitfest.postgresql.org/action/patch_view?id=98
Hmm - I would say a bit of review rather than a lot :-)
It looks like you got
Gokulakannan Somasundaram wrote:
Statspack works in the following way: a) it takes a copy of important
catalog tables (pg_ tables) which store information like wait
statistics against wait events, i/o statistics cumulative against each
SQL_Hash (and SQL_Text), whether a particular plan went
Greg Smith wrote:
Mark Kirkwood wrote:
Greg Smith wrote:
Returned with feedback in October after receiving a lot of review,
no updated version submitted since then:
https://commitfest.postgresql.org/action/patch_view?id=98
Hmm - I would say a bit of review rather than a lot :-)
It looks
On 2/27/10 1:04 PM, Mark Kirkwood wrote:
LOL - I said a bit - it was only a little, connected with the commit vs
review confusion. I think I just got caught in the bedding in time for
the new development processes, I was advised to add it to the
commitfests, and then advised that it should
Mark Kirkwood wrote:
I don't mean to be ungrateful about the actual reviews at all - and I
did value the feedback received (which I hope was reasonably clear in
the various replies I sent). I sense a bit of attacking the messenger
in your tone...
I thought there was a moderately big
Greg Smith wrote:
While I was in there I also added some more notes on my personal top
patch submission peeve, patches whose purpose in life is to improve
performance that don't come with associated easy to run test cases,
including a sample of that test running on a system that shows the
Mark Kirkwood wrote:
While I completely agree that the submitter should be required to
supply a test case and their results, so the rest of us can try to
reproduce said improvement - rejecting the patch out of hand is a bit
harsh, I feel - hey, they may just have forgotten to supply these
On Sat, Feb 27, 2010 at 5:40 AM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
I'd also like to take the opportunity to express a little frustration about
the commitfest business - really all I wanted was the patch *reviewed* as
WIP - it seemed that in order to do that I needed to enter it
On Sat, Feb 27, 2010 at 6:22 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Greg Smith wrote:
While I was in there I also added some more notes on my personal top patch
submission peeve, patches whose purpose in life is to improve performance
that don't come with associated easy to run
Robert Haas wrote:
This might be my fault, so I apologize for killing your enthusiasm. I
think when I get wrapped up in a CommitFest (and especially during the
second half) I get wound up in determining whether or not things are
going to get applied and tend to give short shrift to things
What happened to this patch?
---
Mark Kirkwood wrote:
Where I work they make extensive use of Postgresql. One of the things
they typically want to know about are lock waits. Out of the box in
there is not much in the
Bruce Momjian wrote:
What happened to this patch?
Returned with feedback in October after receiving a lot of review, no
updated version submitted since then:
https://commitfest.postgresql.org/action/patch_view?id=98
--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training,
I am just adding my two cents; please ignore it if it's totally irrelevant.
When we do performance testing/tuning of applications, the standard
monitoring requirements from a database are:
a) Different types of wait events and the time spent in each of them
b) Top ten queries
On Sun, Oct 4, 2009 at 4:14 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Mon, Sep 28, 2009 at 12:14 AM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:
On Sat, Aug 8, 2009 at 7:47 PM, Mark Kirkwood mar...@paradise.net.nz wrote:
Patch with max(wait time).
Still TODO
- amalgamate
On Mon, Sep 28, 2009 at 12:14 AM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:
On Sat, Aug 8, 2009 at 7:47 PM, Mark Kirkwood mar...@paradise.net.nz wrote:
Patch with max(wait time).
Still TODO
- amalgamate individual transaction lock waits
- redo (rather ugly) temporary
Jaime Casanova wrote:
it applies with some hunks, compiles fine and seems to work...
I'm still not sure this is what we need; some more comments could be helpful.
Yeah, that's the big question. Are the current capabilities (logging 'em
for waits > deadlock timeout + dtrace hooks) enough?
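For reference, the stock facility being discussed here is driven by two settings in postgresql.conf; log_lock_waits only reports a wait once it exceeds deadlock_timeout, so shorter waits go unrecorded - the gap the patch aims to fill. (Example values, not defaults.)

```
# postgresql.conf -- built-in lock-wait logging
log_lock_waits = on        # log a message when a lock wait exceeds deadlock_timeout
deadlock_timeout = 1s      # waits shorter than this are never reported
```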
Jeff Janes wrote:
The total wait time is equal to the max wait time (both of which are
equal to l_end)?
One or both of those has to end up being wrong. At this stage, is
l_end supposed to be the last wait time, or the cumulative wait time?
Hmm - I may well have fat fingered the arithmetic,
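The suspected arithmetic bug above - a cumulative counter ending up identical to the maximum - can be avoided by keeping the two counters strictly separate. A minimal sketch follows; lock_wait_stats and record_wait are hypothetical names for illustration, not code from the patch:

```c
/* Hypothetical per-lock wait accounting; not the patch's actual structs. */
typedef struct lock_wait_stats
{
    long    total_wait_us;  /* cumulative time spent waiting */
    long    max_wait_us;    /* longest single wait observed  */
    long    n_waits;        /* number of waits recorded      */
} lock_wait_stats;

/* Record one completed wait: accumulate the total and track the
 * maximum separately, so the two can never silently coincide. */
static void
record_wait(lock_wait_stats *s, long wait_us)
{
    s->total_wait_us += wait_us;
    if (wait_us > s->max_wait_us)
        s->max_wait_us = wait_us;
    s->n_waits++;
}
```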
On Sat, Aug 8, 2009 at 7:47 PM, Mark Kirkwood mar...@paradise.net.nz wrote:
Patch with max(wait time).
Still TODO
- amalgamate individual transaction lock waits
- redo (rather ugly) temporary pg_stat_lock_waits in a form more like
pg_locks
This version has the individual transaction
Stephen Frost wrote:
Mark,
Your last email on this patch, from August 9th, indicates that you've
still got "TODO: redo pg_stat_lock_waits". Have you updated this
patch since then?
Stephen,
No - that is still a TODO for me - real life getting in the way :-)
Cheers
Mark
--
I have this patch, if you're interested.
LWLock Instrumentation Patch
- counts locks and waits in shared and exclusive mode
- for selected locks, measures wait and hold times
- for selected locks, displays a histogram of wait and hold times
- information is printed at backend exit
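As an illustration of the histogram idea above, a log2 bucketing of wait/hold times is a common scheme for this kind of instrumentation. This sketch is an assumption about the approach, not the patch's actual code; hist_bucket is a hypothetical name:

```c
/* Map a wait time to a log2 histogram bucket: bucket b collects times t
 * with 2^b <= t < 2^(b+1) (bucket 0 also catches t <= 1). */
static int
hist_bucket(long wait_us)
{
    int b = 0;

    while (wait_us > 1 && b < 31)
    {
        wait_us >>= 1;          /* halve until we reach 1 */
        b++;
    }
    return b;                   /* floor(log2(wait_us)), capped at 31 */
}
```

Fixed log2 buckets keep the per-wait cost to a few shifts and one array increment, which matters inside something as hot as LWLock acquisition.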
Pierre,
Configurable by #define's in lwlock.c
Given that we already have dtrace/systemtap probes around the lwlocks,
is there some way you could use those instead of extra #defines?
--
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com
--
Sent via pgsql-hackers mailing list
Mark,
Your last email on this patch, from August 9th, indicates that you've
still got "TODO: redo pg_stat_lock_waits". Have you updated this
patch since then?
Thanks!
Stephen
Mark Kirkwood wrote:
Mark Kirkwood wrote:
Jaime Casanova wrote:
On Fri, Jul 17, 2009 at 3:38 AM, Mark Kirkwood mar...@paradise.net.nz wrote:
With respect to the sum of wait times being not very granular, yes - quite
true. I was thinking it is useful to be able to answer the question 'where
Mark Kirkwood wrote:
Jaime Casanova wrote:
On Fri, Jul 17, 2009 at 3:38 AM, Mark Kirkwood mar...@paradise.net.nz wrote:
With respect to the sum of wait times being not very granular, yes - quite
true. I was thinking it is useful to be able to answer the question 'where
is my wait time being
Tom Lane wrote:
Mark Kirkwood mar...@paradise.net.nz writes:
Yeah, enabling log_lock_waits is certainly another approach, however you
currently miss out on those that are < deadlock_timeout - and
potentially they could be the source of your problem (i.e. millions of
waits all
Jaime Casanova wrote:
On Fri, Jul 17, 2009 at 3:38 AM, Mark Kirkwood mar...@paradise.net.nz wrote:
With respect to the sum of wait times being not very granular, yes - quite
true. I was thinking it is useful to be able to answer the question 'where
is my wait time being spent' - but it hides
Mark Kirkwood mar...@paradise.net.nz writes:
Yeah, enabling log_lock_waits is certainly another approach, however you
currently miss out on those that are < deadlock_timeout - and
potentially they could be the source of your problem (i.e. millions of
waits all < deadlock_timeout but taken
On Fri, Jul 17, 2009 at 3:38 AM, Mark Kirkwood mar...@paradise.net.nz wrote:
With respect to the sum of wait times being not very granular, yes - quite
true. I was thinking it is useful to be able to answer the question 'where
is my wait time being spent' - but it hides cases like the one you
Jaime Casanova wrote:
I did it myself; I think this is something we need...
This compiles and seems to work... Something I was wondering is that
having the total time of lock waits is not very accurate, because we
can have 9 lock waits of 1 sec each and one waiting for 1
minute... simply
On Sun, Jan 25, 2009 at 6:57 PM, Mark Kirkwoodmar...@paradise.net.nz wrote:
So here is my initial attempt at this, at this point merely to spark
discussion (attached patch)
this patch doesn't apply cleanly to head... can you update it, please?
--
Regards,
Jaime Casanova
Soporte y
Where I work they make extensive use of PostgreSQL. One of the things
they typically want to know about are lock waits. Out of the box in
there is not much in the way of tracking for such, particularly in older
versions. The view pg_stats is fine for stuff happening *now*, but
typically I find