On Mon, Mar 16, 2015 at 7:24 PM, Jim Nasby jim.na...@bluetreble.com wrote:
> The other thing you should consider is using TRUNCATE instead of an
> un-filtered DELETE. It will both be much faster to perform and won't leave
> any dead rows behind.

Yep, but it does take an ACCESS EXCLUSIVE lock. We ...
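For reference, a sketch of the trade-off being discussed (foo stands in for the actual table; the SELECT is elided as in the original post):

```sql
-- Un-filtered DELETE: every old row is left behind as a dead tuple for
-- VACUUM to clean up later, but concurrent readers are not blocked.
BEGIN;
DELETE FROM foo;              -- takes only a ROW EXCLUSIVE table lock
INSERT INTO foo SELECT ...;
COMMIT;

-- TRUNCATE: leaves no dead rows and reclaims space immediately, but
-- takes ACCESS EXCLUSIVE, blocking ALL concurrent access to foo until
-- the transaction commits.
BEGIN;
TRUNCATE foo;                 -- takes an ACCESS EXCLUSIVE table lock
INSERT INTO foo SELECT ...;
COMMIT;
```

TRUNCATE is transactional in PostgreSQL, so it can safely sit in the same transaction as the reload, as shown.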
On 3/11/15 11:15 AM, Tom Lane wrote:
> Gunnlaugur Thor Briem gunnlau...@gmail.com writes:
>> Yes, I think that's it: I've just realized that immediately prior to the
>> INSERT, in the same transaction, an unfiltered DELETE has been issued; i.e.
>> the whole table is being rewritten. Then the INSERT is ...
On 3/11/15 10:54 AM, Gunnlaugur Thor Briem wrote:
> (Even better, just make the new table not temporary, and have it replace
> the former table altogether. But that's for later; requires some broader
> changes in our application.)

The other thing you should consider is using TRUNCATE instead of an ...
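The "replace the former table altogether" idea above is commonly done as a build-and-swap; a sketch, assuming the table is named foo (foo_new and the SELECT are hypothetical):

```sql
-- Build the replacement off to the side, then swap it in. Only the
-- swap itself needs a (brief) ACCESS EXCLUSIVE lock.
BEGIN;
CREATE TABLE foo_new (LIKE foo INCLUDING ALL);
INSERT INTO foo_new SELECT ...;     -- the expensive part, done out of the way
DROP TABLE foo;
ALTER TABLE foo_new RENAME TO foo;
COMMIT;
```

Caveat: DROP TABLE invalidates views, foreign keys, and grants that reference foo, and the new table starts with no planner statistics until it is analyzed, which is presumably part of the "broader changes" mentioned above.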
Gunnlaugur Thor Briem gunnlau...@gmail.com writes:
> Yes, I think that's it: I've just realized that immediately prior to the
> INSERT, in the same transaction, an unfiltered DELETE has been issued; i.e.
> the whole table is being rewritten. Then the INSERT is issued ... with a
> WHERE clause on ...
On Sat, Mar 7, 2015 at 3:44 PM, Tomas Vondra tomas.von...@2ndquadrant.com
wrote:
> Another possibility is that this is part of some large batch, and
> autovacuum simply did not have a chance to do the work.

Yes, I think that's it: I've just realized that immediately prior to the
INSERT, in the ...
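Whether autovacuum has in fact kept up can be read straight from the statistics views; a sketch (foo again standing in for the real table name, columns as they exist on 9.1):

```sql
-- Dead-tuple backlog and the last (auto)vacuum/analyze times for foo.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_vacuum,
       last_autovacuum,
       last_autoanalyze
FROM   pg_stat_user_tables
WHERE  relname = 'foo';
```

A large n_dead_tup with a stale last_autovacuum would confirm the "autovacuum did not have a chance" theory.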
On 11.3.2015 18:30, Jeff Janes wrote:
> On Sat, Mar 7, 2015 at 7:44 AM, Tomas Vondra
> tomas.von...@2ndquadrant.com wrote:
>> On 7.3.2015 03:26, Jeff Janes wrote:
>>> On Fri, Mar 6, 2015 at 5:38 PM, Tom Lane t...@sss.pgh.pa.us ...
On Sat, Mar 7, 2015 at 7:44 AM, Tomas Vondra tomas.von...@2ndquadrant.com
wrote:
> On 7.3.2015 03:26, Jeff Janes wrote:
>> On Fri, Mar 6, 2015 at 5:38 PM, Tom Lane t...@sss.pgh.pa.us wrote:
>> But the actual query is using a seq scan, and so it would hint the
>> table in ...
On 7.3.2015 03:26, Jeff Janes wrote:
> On Fri, Mar 6, 2015 at 5:38 PM, Tom Lane t...@sss.pgh.pa.us wrote:
> But the actual query is using a seq scan, and so it would hint the
> table in efficient sequential order, rather than hinting the table
> haphazardly in index order ...
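The "hinting" above refers to hint bits: the first process to read a tuple after its inserting transaction commits must check the commit status and record it in the tuple's hint bits, dirtying the page. A sequential pass right after a bulk load pays that cost once, in page order, instead of as random writes later; a sketch, again using foo as the table name:

```sql
-- Touch every tuple once in sequential (heap) order, so the hint-bit
-- page writes happen as ordered I/O rather than as random writes when
-- some later query reaches the rows through an index.
SELECT count(*) FROM foo;

-- Or let VACUUM do it; it also removes dead rows and updates the free
-- space map while it is at it.
VACUUM foo;
```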
On Fri, Mar 6, 2015 at 5:38 PM, Tom Lane t...@sss.pgh.pa.us wrote:
> Kevin Grittner kgri...@ymail.com writes:
>> Tomas Vondra tomas.von...@2ndquadrant.com wrote:
>>> How would fccebe421 explain the large amount of random writes (~4MB/s
>>> for more than an hour), reported in the initial post? And why ...
Tomas Vondra tomas.von...@2ndquadrant.com wrote:
> On 6.3.2015 01:44, Tom Lane wrote:
>> Tomas Vondra tomas.von...@2ndquadrant.com writes:
>>> On 5.3.2015 16:01, Gunnlaugur Thor Briem wrote:
>>>> - postgres version is 9.1.13
>>> The only thing I can think of is some sort of memory exhaustion,
>>> resulting in ...
Kevin Grittner kgri...@ymail.com writes:
> Tomas Vondra tomas.von...@2ndquadrant.com wrote:
>> How would fccebe421 explain the large amount of random writes (~4MB/s
>> for more than an hour), reported in the initial post? And why would that
>> only affect the EXPLAIN and not the bare query?

But the ...
On 6.3.2015 01:44, Tom Lane wrote:
> Tomas Vondra tomas.von...@2ndquadrant.com writes:
>> On 5.3.2015 16:01, Gunnlaugur Thor Briem wrote:
>>> - postgres version is 9.1.13
>> The only thing I can think of is some sort of memory exhaustion,
>> resulting in swapping out large amounts of memory.
> I'm ...
On Thu, Mar 5, 2015 at 4:44 PM, Tom Lane t...@sss.pgh.pa.us wrote:
> Tomas Vondra tomas.von...@2ndquadrant.com writes:
>> On 5.3.2015 16:01, Gunnlaugur Thor Briem wrote:
>>> - postgres version is 9.1.13
>> The only thing I can think of is some sort of memory exhaustion,
>> resulting in swapping out ...
Tomas Vondra tomas.von...@2ndquadrant.com writes:
> On 6.3.2015 01:44, Tom Lane wrote:
>> I'm wondering about the issue addressed by commit fccebe421 (Use
>> SnapshotDirty rather than an active snapshot to probe index
>> endpoints).
> How would fccebe421 explain the large amount of random writes (~4MB/s ...
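Some context on why a mere EXPLAIN can be affected at all: to estimate the selectivity of a range condition, the planner probes the extreme entries of a matching btree index, and before fccebe421 those probes could wade through huge numbers of just-deleted index entries. A sketch of the pathological pattern discussed in this thread (staging and ts are hypothetical names, invented for illustration):

```sql
-- EXPLAIN only *plans* the statement, but planning itself probes btree
-- index endpoints for range-selectivity estimates. Run in the same
-- transaction as an un-filtered DELETE, each endpoint probe must step
-- over millions of dead (deleted, not-yet-vacuumed) index entries.
BEGIN;
DELETE FROM foo;                    -- marks all ~28M rows dead
EXPLAIN INSERT INTO foo
    SELECT * FROM staging
    WHERE  ts > '2015-03-01';       -- planner probes the ts index endpoints
COMMIT;
```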
Hi, thanks for your follow-up questions.

- postgres version is 9.1.13
- the number of rows (in this latest instance) is 28,474,842
- I've clustered and vacuum-full-ed and analyzed this table frequently,
  attempting to troubleshoot this. (Running vacuum full on the whole catalog
  seems a little ...
Hi,

On 5.3.2015 16:01, Gunnlaugur Thor Briem wrote:
> Hi, thanks for your follow-up questions.
> - postgres version is 9.1.13
> - the number of rows (in this latest instance) is 28,474,842
> - I've clustered and vacuum-full-ed and analyzed this table frequently,
> attempting to troubleshoot this. ...
Tomas Vondra tomas.von...@2ndquadrant.com writes:
> On 5.3.2015 16:01, Gunnlaugur Thor Briem wrote:
>> - postgres version is 9.1.13
> The only thing I can think of is some sort of memory exhaustion,
> resulting in swapping out large amounts of memory.

I'm wondering about the issue addressed by commit ...
> What could cause this? Note that there is no ANALYZE.

Can you capture pstack and/or perf report while explain hangs?
I think it should shed light on the activity of PostgreSQL.

Vladimir
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
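To capture a pstack or perf report you first need the PID of the backend running the stuck EXPLAIN; on 9.1 that can be found like this (note the pre-9.2 column names procpid and current_query):

```sql
-- Find the backend executing the EXPLAIN (9.1 columns: procpid,
-- current_query; renamed to pid, query in 9.2+). Then, at the shell,
-- attach to it with e.g. "pstack <PID>" or "perf top -p <PID>".
SELECT procpid, current_query, query_start
FROM   pg_stat_activity
WHERE  current_query LIKE 'EXPLAIN%';
```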
Hi,

we are seeing cases of EXPLAIN INSERT INTO foo SELECT ... taking over an
hour, with disk I/O utilization (percent of time device is busy) at 100%
the whole time, although I/O bandwidth is not saturated. This is on
PostgreSQL 9.1.13.

What could cause this? Note that there is no ANALYZE. Is it ...
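One point worth spelling out, since it is what makes the report surprising: plain EXPLAIN only plans the statement, it never executes it, so any heavy I/O must come from planning. A sketch of the distinction (source_table is a hypothetical name):

```sql
-- Plans only: no rows are inserted. Heavy I/O here implicates the
-- planner itself (e.g. index endpoint probes), not execution.
EXPLAIN INSERT INTO foo SELECT * FROM source_table;

-- Plans AND executes: this really performs the INSERT. Wrap it in a
-- transaction and ROLLBACK if you only want the timing.
EXPLAIN ANALYZE INSERT INTO foo SELECT * FROM source_table;
```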