On 07/27/2015 01:20 PM, Ildus Kurbangaliev wrote:
Hello.
In the attached patch I've refactored tranches.
Their prefix was extended, and I've split LWLockAssign into two
functions (one taking a tranche and a second for user-defined LWLocks).
This needs some work in order to
On Mon, Jul 27, 2015 at 2:43 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
Robert Haas wrote:
On Mon, Jul 27, 2015 at 2:32 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
I think this is already possible, is it not? You just have to look for
an identically-identified pg_locks
On Mon, Jul 27, 2015 at 2:32 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
David Rowley wrote:
I've not looked into the feasibility of it, but if it were also possible to
have a waiting_for column which would store the process ID of the process
that's holding a lock that this process is
Robert Haas wrote:
On Mon, Jul 27, 2015 at 2:32 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
I think this is already possible, is it not? You just have to look for
an identically-identified pg_locks entry with granted=true. That gives
you a PID and vxid/xid. You can self-join
David Rowley wrote:
I've not looked into the feasibility of it, but if it were also possible to
have a waiting_for column which would store the process ID of the process
that's holding a lock that this process is waiting on, then it would be
possible for some smart guy to write some code
On 7/27/15 1:46 PM, Robert Haas wrote:
On Mon, Jul 27, 2015 at 2:43 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
Robert Haas wrote:
On Mon, Jul 27, 2015 at 2:32 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
I think this is already possible, is it not? You just have to look for
Hello.
In the attached patch I've refactored tranches.
Their prefix was extended, and I've split LWLockAssign into two
functions (one taking a tranche and a second for user-defined LWLocks).
--
Ildus Kurbangaliev
Postgres Professional: http://www.postgrespro.com
The
On Sat, Jul 25, 2015 at 10:30 PM, Ildus Kurbangaliev
i.kurbangal...@postgrespro.ru wrote:
On Jul 24, 2015, at 10:02 PM, Robert Haas robertmh...@gmail.com wrote:
Also, the patch should not invent a new array similar but not quite
identical to LockTagTypeNames[].
This is goofy:
+
On Jul 24, 2015, at 10:02 PM, Robert Haas robertmh...@gmail.com wrote:
Also, the patch should not invent a new array similar but not quite
identical to LockTagTypeNames[].
This is goofy:
+ if (tranche_id > 0)
+ result->tranche = tranche_id;
+ else
+
On Fri, Jul 24, 2015 at 1:12 PM, Ildus Kurbangaliev
i.kurbangal...@postgrespro.ru wrote:
On Jul 24, 2015, at 7:26 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
2.
+const char *
+pgstat_get_wait_event_name(uint8 classId, uint8 eventId)
{
..
}
I don't understand why a
On Jul 24, 2015, at 7:26 AM, Amit Kapila amit.kapil...@gmail.com wrote:
On Fri, Jul 24, 2015 at 12:31 AM, Ildus Kurbangaliev
i.kurbangal...@postgrespro.ru wrote:
Hello.
I’ve changed the previous patch. The `group` field in LWLock has been removed, so
the size of LWLock will not increase.
Instead
On Thu, Jul 23, 2015 at 3:01 PM, Ildus Kurbangaliev
i.kurbangal...@postgrespro.ru wrote:
Hello.
I’ve changed the previous patch. The `group` field in LWLock has been removed, so
the size of LWLock will not increase.
Instead of the `group` field I've created new tranches for LWLocks from
On Fri, Jul 24, 2015 at 12:31 AM, Ildus Kurbangaliev
i.kurbangal...@postgrespro.ru wrote:
Hello.
I’ve changed the previous patch. The `group` field in LWLock has been removed, so
the size of LWLock will not increase.
Instead of the `group` field I've created new tranches for LWLocks from
Hello.
I’ve changed the previous patch. The `group` field in LWLock has been removed, so
the size of LWLock will not increase.
Instead of the `group` field I've created new tranches for LWLocks from
MainLWLocksArray. This allowed removing a loop
from the previous version of the patch.
Now the names for
On 07/23/2015 05:57 AM, Kyotaro HORIGUCHI wrote:
At Wed, 22 Jul 2015 17:50:35 +0300, Ildus Kurbangaliev i.kurbangal...@postgrespro.ru
wrote in 55afadbb.9090...@postgrespro.ru
On 07/22/2015 09:10 AM, Kyotaro HORIGUCHI wrote:
Hello,
At Tue, 21 Jul 2015 14:28:25 +0300, Ildus Kurbangaliev
Hello,
At Tue, 21 Jul 2015 14:28:25 +0300, Ildus Kurbangaliev
i.kurbangal...@postgrespro.ru wrote in 55ae2cd9.4050...@postgrespro.ru
On 07/21/2015 01:18 PM, Andres Freund wrote:
On 2015-07-21 13:11:36 +0300, Ildus Kurbangaliev wrote:
/*
* Top-level transactions are identified by
Hello,
At Wed, 22 Jul 2015 17:50:35 +0300, Ildus Kurbangaliev
i.kurbangal...@postgrespro.ru wrote in 55afadbb.9090...@postgrespro.ru
On 07/22/2015 09:10 AM, Kyotaro HORIGUCHI wrote:
Hello,
At Tue, 21 Jul 2015 14:28:25 +0300, Ildus Kurbangaliev
i.kurbangal...@postgrespro.ru wrote in
Hi, I forgot to mention a significant point.
At Wed, 22 Jul 2015 17:50:35 +0300, Ildus Kurbangaliev
i.kurbangal...@postgrespro.ru wrote in 55afadbb.9090...@postgrespro.ru
On 07/22/2015 09:10 AM, Kyotaro HORIGUCHI wrote:
Hello,
At Tue, 21 Jul 2015 14:28:25 +0300, Ildus Kurbangaliev
On 23 June 2015 at 05:37, Robert Haas robertmh...@gmail.com wrote:
When a PostgreSQL system wedges, or when it becomes dreadfully slow
for some reason, I often find myself relying on tools like strace,
gdb, or perf to figure out what is happening. This doesn't tend to
instill customers with
On 07/22/2015 09:10 AM, Kyotaro HORIGUCHI wrote:
Hello,
At Tue, 21 Jul 2015 14:28:25 +0300, Ildus Kurbangaliev
i.kurbangal...@postgrespro.ru wrote in 55ae2cd9.4050...@postgrespro.ru
On 07/21/2015 01:18 PM, Andres Freund wrote:
On 2015-07-21 13:11:36 +0300, Ildus Kurbangaliev wrote:
/*
Hello.
I did some refactoring of the previous patch. Improvements:
1) A wait is identified by class and event without affecting its atomic
usage. They are still stored in one variable. This improvement makes it
possible to build more detailed views later (waits can be grouped by class).
2)
On 2015-07-21 13:11:36 +0300, Ildus Kurbangaliev wrote:
/*
* Top-level transactions are identified by VirtualTransactionIDs comprising
diff --git a/src/include/storage/lwlock.h b/src/include/storage/lwlock.h
index cff3b99..55b0687 100644
--- a/src/include/storage/lwlock.h
+++
On 07/21/2015 01:18 PM, Andres Freund wrote:
On 2015-07-21 13:11:36 +0300, Ildus Kurbangaliev wrote:
/*
* Top-level transactions are identified by VirtualTransactionIDs comprising
diff --git a/src/include/storage/lwlock.h b/src/include/storage/lwlock.h
index cff3b99..55b0687 100644
---
On Fri, Jul 17, 2015 at 6:05 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Peter Geoghegan p...@heroku.com writes:
I've heard that clock_gettime() with CLOCK_REALTIME_COARSE, or with
CLOCK_MONOTONIC_COARSE can have significantly lower overhead than
gettimeofday().
It can, but it also has *much*
On Jul 14, 2015, at 5:25 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
I really think we should do the simple thing first. If we make this
complicated and add lots of bells and whistles, it is going to be much
harder to get anything committed, because
Ildus Kurbangaliev i.kurbangal...@postgrespro.ru writes:
I made a benchmark of gettimeofday(). I believe it is certainly usable for
monitoring.
Testing configuration:
24 cores, Intel Xeon CPU X5675@3.07Ghz
RAM 24 GB
54179703 - microseconds total
2147483647 - (INT_MAX), the number of
On Thu, Jul 16, 2015 at 10:54 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Ildus Kurbangaliev i.kurbangal...@postgrespro.ru writes:
I made a benchmark of gettimeofday(). I believe it is certainly usable for
monitoring.
Testing configuration:
24 cores, Intel Xeon CPU X5675@3.07Ghz
RAM 24 GB
On Tue, Jul 14, 2015 at 7:25 AM, Tom Lane t...@sss.pgh.pa.us wrote:
FWIW, I entirely share Robert's opinion that adding gettimeofday()
overhead in routinely-taken paths is likely not to be acceptable.
I think that it can depend on many factors. For example, the
availability of vDSO support on
Peter Geoghegan p...@heroku.com writes:
I've heard that clock_gettime() with CLOCK_REALTIME_COARSE, or with
CLOCK_MONOTONIC_COARSE can have significantly lower overhead than
gettimeofday().
It can, but it also has *much* lower precision, typically 1ms or so.
regards,
On Fri, Jul 10, 2015 at 12:33 PM, Alexander Korotkov
a.korot...@postgrespro.ru wrote:
I can propose the following:
1) Expose more information about the current lock to the user. For instance, given
the duration of the current wait event, a user can determine whether a backend is
getting stuck on a particular event without
Robert Haas robertmh...@gmail.com writes:
I really think we should do the simple thing first. If we make this
complicated and add lots of bells and whistles, it is going to be much
harder to get anything committed, because there will be more things
for somebody to object to. If we start with
On Mon, Jul 13, 2015 at 3:26 PM, Ildus Kurbangaliev
i.kurbangal...@postgrespro.ru wrote:
On 07/12/2015 06:53 AM, Amit Kapila wrote:
For having duration, I think you need to use gettimeofday or some
similar call to calculate the wait time, now it will be okay for the
cases where wait time
On 07/13/2015 01:36 PM, Amit Kapila wrote:
I have already proposed something very similar in this thread [1]
(where instead of class, I have used wait_event_type) to which
Robert doesn't agree, so here I think before writing code, it seems
prudent to get an agreement about what kind of
On Mon, Jul 13, 2015 at 9:19 PM, Ildus Kurbangaliev
i.kurbangal...@postgrespro.ru wrote:
On 07/13/2015 01:36 PM, Amit Kapila wrote:
Another problem with pg_stat_activity is that we cannot see all processes there
(the checkpointer, for example). So we need a separate view for monitoring
purposes anyway.
On Tue, Jul 7, 2015 at 6:28 AM, Kyotaro HORIGUCHI
horiguchi.kyot...@lab.ntt.co.jp wrote:
Please forgive me for resending this message to fix some sad
misspellings.
# Waiting for heavy weight locks is somewhat confusing to spell..
===
Hello,
At Tue, 7 Jul 2015 16:27:38 +0900, Fujii Masao
On Mon, Jul 6, 2015 at 10:48 AM, Fujii Masao masao.fu...@gmail.com wrote:
According to his patch, the wait events that he was thinking to add were:
+ typedef enum PgCondition
+ {
+ PGCOND_UNUSED = 0, /* unused */
+
+ /* 1 - CPU */
+ PGCOND_CPU
On Fri, Jul 10, 2015 at 10:03 PM, Alexander Korotkov
a.korot...@postgrespro.ru wrote:
On Fri, Jun 26, 2015 at 6:39 AM, Robert Haas robertmh...@gmail.com
wrote:
On Thu, Jun 25, 2015 at 9:23 AM, Peter Eisentraut pete...@gmx.net
wrote:
On 6/22/15 1:37 PM, Robert Haas wrote:
Currently, the
On Fri, Jun 26, 2015 at 6:39 AM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jun 25, 2015 at 9:23 AM, Peter Eisentraut pete...@gmx.net wrote:
On 6/22/15 1:37 PM, Robert Haas wrote:
Currently, the only time we report a process as waiting is when it is
waiting for a heavyweight lock.
On Tue, Jul 7, 2015 at 12:57 PM, Fujii Masao masao.fu...@gmail.com wrote:
On Tue, Jun 30, 2015 at 10:30 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
I have still not added documentation and have not changed anything for
waiting column in pg_stat_activity as I think before that we need to
On Tue, Jun 30, 2015 at 10:30 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Fri, Jun 26, 2015 at 6:26 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jun 25, 2015 at 11:57 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
3. Add new view 'pg_stat_wait_event' with following info:
Hello,
At Tue, 7 Jul 2015 16:27:38 +0900, Fujii Masao masao.fu...@gmail.com wrote in
CAHGQGwEJwov8YwvmbbWps3Rba6kF1yf7qL3S==Oy4D=gq9y...@mail.gmail.com
Each backend reports its event when trying to take a lock. But
the reported event is never reset until the next event is reported.
Is this OK?
Please forgive me for resending this message to fix some sad
misspellings.
# Waiting for heavy weight locks is somewhat confusing to spell..
===
Hello,
At Tue, 7 Jul 2015 16:27:38 +0900, Fujii Masao masao.fu...@gmail.com wrote in
CAHGQGwEJwov8YwvmbbWps3Rba6kF1yf7qL3S==Oy4D=gq9y...@mail.gmail.com
On Fri, Jun 26, 2015 at 12:39 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jun 25, 2015 at 9:23 AM, Peter Eisentraut pete...@gmx.net wrote:
On 6/22/15 1:37 PM, Robert Haas wrote:
Currently, the only time we report a process as waiting is when it is
waiting for a heavyweight lock. I'd
On Fri, Jun 26, 2015 at 6:26 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jun 25, 2015 at 11:57 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
3. Add new view 'pg_stat_wait_event' with following info:
pid - process id of this backend
waiting - true for any form of wait, false
On 6/25/15 11:39 PM, Robert Haas wrote:
Could that also cover waiting on network?
Possibly. My approach requires that the number of wait states be kept
relatively small, ideally fitting in a single byte. And it also
requires that we insert pgstat_report_waiting() calls around the thing
On Thu, Jun 25, 2015 at 11:57 PM, Amit Kapila amit.kapil...@gmail.com wrote:
3. Add new view 'pg_stat_wait_event' with following info:
pid - process id of this backend
waiting - true for any form of wait, false otherwise
wait_event_type - Heavy Weight Lock, Light Weight Lock, I/O wait,
On Thu, Jun 25, 2015 at 4:28 PM, Ilya Kosmodemiansky
ilya.kosmodemian...@postgresql-consulting.com wrote:
On Thu, Jun 25, 2015 at 12:37 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
2. Add 2 new columns to pg_stat_activity
waiting_resource - true for waits other than heavy weight locks, false
On 2015-06-25 16:07:45 +0530, Amit Kapila wrote:
On Tue, Jun 23, 2015 at 2:33 AM, Robert Haas robertmh...@gmail.com wrote:
If people feel strongly about backward compatibility, yes, we can do
that. However, if waiting continues to mean on a heavyweight lock
for backward compatibility, then
On Thu, Jun 25, 2015 at 1:49 PM, Amit Kapila amit.kapil...@gmail.com wrote:
Personally I think that tracking waits is not a good idea for
pg_stat_activity (at least in that straightforward manner).
As mentioned in the initial mail by Robert, that sometimes system becomes
slow (either due
On Tue, Jun 23, 2015 at 2:33 AM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Jun 22, 2015 at 4:40 PM, Merlin Moncure mmonc...@gmail.com
wrote:
Instead of changing the column, can't we add a new one? Adjusting
columns in PSA requires the innumerable queries written against it to
be
On Thu, Jun 25, 2015 at 4:16 PM, Andres Freund and...@anarazel.de wrote:
On 2015-06-25 16:07:45 +0530, Amit Kapila wrote:
On Tue, Jun 23, 2015 at 2:33 AM, Robert Haas robertmh...@gmail.com
wrote:
If people feel strongly about backward compatibility, yes, we can do
that. However, if
On 2015-06-25 16:26:39 +0530, Amit Kapila wrote:
Won't leaving the former contents as they are (until the next thing is
blocked) give misleading information? Currently we mark 'waiting'
as false as soon as the Heavy Weight Lock is over, so following that way
sounds more appropriate, is there
Hi all
On Thu, Jun 25, 2015 at 12:37 PM, Amit Kapila amit.kapil...@gmail.com wrote:
2. Add 2 new columns to pg_stat_activity
waiting_resource - true for waits other than heavy weight locks, false
otherwise
wait_event - description code for the wait event
3.
On Thu, Jun 25, 2015 at 6:10 PM, Ilya Kosmodemiansky
ilya.kosmodemian...@postgresql-consulting.com wrote:
On Thu, Jun 25, 2015 at 1:49 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
Personally I think that tracking waits is not a good idea for
pg_stat_activity (at least in that
On Thu, Jun 25, 2015 at 4:28 PM, Andres Freund and...@anarazel.de wrote:
On 2015-06-25 16:26:39 +0530, Amit Kapila wrote:
Won't leaving the former contents as they are (until the next thing is
blocked) give misleading information? Currently we mark 'waiting'
as false as soon as the Heavy
On 6/22/15 1:37 PM, Robert Haas wrote:
Currently, the only time we report a process as waiting is when it is
waiting for a heavyweight lock. I'd like to make that somewhat more
fine-grained, by reporting the type of heavyweight lock it's awaiting
(relation, relation extension, transaction,
Andres Freund and...@anarazel.de writes:
On 2015-06-25 16:26:39 +0530, Amit Kapila wrote:
Won't leaving the former contents as they are (until the next thing is
blocked) give misleading information? Currently we mark 'waiting'
as false as soon as the Heavy Weight Lock is over, so following
On 2015-06-25 10:01:39 -0400, Tom Lane wrote:
Andres Freund and...@anarazel.de writes:
On 2015-06-25 16:26:39 +0530, Amit Kapila wrote:
Won't leaving the former contents as they are (until the next thing is
blocked) give misleading information? Currently we mark 'waiting'
as false
Andres Freund and...@anarazel.de writes:
On 2015-06-25 10:01:39 -0400, Tom Lane wrote:
The problem with the query analogy is that it's possible to tell whether
the query is active or not, by looking at the status column. We need to
avoid a situation where you can't tell if the wait status is
On Thu, Jun 25, 2015 at 6:58 AM, Ilya Kosmodemiansky
ilya.kosmodemian...@postgresql-consulting.com wrote:
1. Some sort of histogram of top waits within the entire database by pid.
That will be approximate, because I hardly believe there is a
way to make a precise one without
On Thu, Jun 25, 2015 at 9:23 AM, Peter Eisentraut pete...@gmx.net wrote:
On 6/22/15 1:37 PM, Robert Haas wrote:
Currently, the only time we report a process as waiting is when it is
waiting for a heavyweight lock. I'd like to make that somewhat more
fine-grained, by reporting the type of
On Fri, Jun 26, 2015 at 9:01 AM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jun 25, 2015 at 6:46 AM, Andres Freund and...@anarazel.de wrote:
1. Remove/Change 'waiting' in pg_stat_activity and break the backward
compatibility. I think we should try to avoid going via this route.
2.
On Thu, Jun 25, 2015 at 6:46 AM, Andres Freund and...@anarazel.de wrote:
1. Remove/Change 'waiting' in pg_stat_activity and break the backward
compatibility. I think we should try to avoid going via this route.
2. Add 2 new columns to pg_stat_activity
waiting_resource - true for waits
On Thu, Jun 25, 2015 at 8:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Andres Freund and...@anarazel.de writes:
On 2015-06-25 10:01:39 -0400, Tom Lane wrote:
The problem with the query analogy is that it's possible to tell
whether
the query is active or not, by looking at the status column.
When a PostgreSQL system wedges, or when it becomes dreadfully slow
for some reason, I often find myself relying on tools like strace,
gdb, or perf to figure out what is happening. This doesn't tend to
instill customers with confidence; they would like (quite
understandably) a process that
On 06/22/2015 10:37 AM, Robert Haas wrote:
I'm less sure about this next part, but I think we
might also want to report ourselves as waiting when we are doing an OS
read or an OS write, because it's pretty common for people to think
that a PostgreSQL bug is to blame when in fact it's the
On Mon, Jun 22, 2015 at 12:37 PM, Robert Haas robertmh...@gmail.com wrote:
When a PostgreSQL system wedges, or when it becomes dreadfully slow
for some reason, I often find myself relying on tools like strace,
gdb, or perf to figure out what is happening. This doesn't tend to
instill
On Mon, Jun 22, 2015 at 4:40 PM, Merlin Moncure mmonc...@gmail.com wrote:
Instead of changing the column, can't we add a new one? Adjusting
columns in PSA requires the innumerable queries written against it to
be adjusted along with all the wiki instructions to dev ops for
emergency stuck
On Mon, Jun 22, 2015 at 4:09 PM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Jun 22, 2015 at 1:59 PM, David G. Johnston
david.g.johns...@gmail.com wrote:
In addition to the codes themselves, I think it would aid less-experienced
operators if we provided a meta-data categorization
On 6/22/15 12:37 PM, Robert Haas wrote:
It's
not my goal here to create some kind of a performance counter system,
even though that would be valuable and could possibly be based on the
same infrastructure, but rather just to create a very simple system
that lets people know, without any
Merlin Moncure mmonc...@gmail.com writes:
On Mon, Jun 22, 2015 at 12:37 PM, Robert Haas robertmh...@gmail.com wrote:
... The basic idea is that pg_stat_activity.waiting would be
replaced by a new column pg_stat_activity.wait_event, which would
display the reason why that backend is waiting.
On Mon, Jun 22, 2015 at 1:37 PM, Robert Haas robertmh...@gmail.com wrote:
and doesn't require a developer to
interpret the results,
[...]
We could
also invent codes for things like I'm doing a pg_usleep because I've
exceeded max_spins_per_delay and I'm waiting for a cleanup lock on a