The partial index is highly leveraged. If every tuple in the
table is updated once, that amounts to every tuple in the index
being updated 25,000 times.
How so? That sounds like O(n^2) behaviour.
If the table has 5 million rows while the index has 200 (active)
entries, then 5,000,000 row updates funnelled through a 200-entry index
works out to roughly 5,000,000 / 200 = 25,000 updates per index entry.
Hello
please do not consider this email as yet another question about how to speed up
writing.
The situation is different:
After the computation, my algorithm stores the result as tuples in a DB.
In addition to normal values (e.g. a, b), the tuples contain SQL statements
that fetch values (for
Markus,
Have you looked over here:
http://www.postgresql.org/docs/9.2/static/populate.html
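The headline advice from that page, sketched as SQL (table, column, and file names here are invented for illustration):

```sql
-- Bulk-load with COPY in one transaction instead of row-by-row INSERTs.
BEGIN;
-- When loading into an indexed table, it is often faster to drop the
-- indexes first and rebuild them afterwards.
DROP INDEX IF EXISTS results_a_idx;
COPY results (a, b) FROM '/path/to/results.csv' WITH (FORMAT csv);
CREATE INDEX results_a_idx ON results (a);
COMMIT;
ANALYZE results;  -- refresh planner statistics after the load
```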
From: markus.innereb...@inf.unibz.it
Subject: [PERFORM] Improve performance for writing
Date: Thu, 27 Dec 2012 14:10:40 +0100
To:
Date: Wed, 26 Dec 2012 23:03:33 -0500
Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table
From: jeff.ja...@gmail.com
To: charle...@outlook.com
CC: ondrej.iva...@gmail.com; pgsql-performance@postgresql.org
On Monday, December 24,
On Thursday, December 20, 2012, Jeff Janes wrote:
On Thursday, December 20, 2012, Richard Neill wrote:
- Bitmap Index Scan on tbl_tracker_exit_state_idx
(cost=0.00..8.36 rows=151 width=0) (actual time=7.946..7.946 rows=20277
loops=1)
This is finding 100 times more rows than it
On 27/12/12 16:17, Jeff Janes wrote:
I still think your best bet is to get rid of the partial index and trade
the full one on (parcel_id_code) for one on (parcel_id_code,exit_state).
I think that will be much less fragile than reindexing in a cron job.
So, at the moment, I have 3
Richard Neill rn...@richardneill.org writes:
So, at the moment, I have 3 indexes:
full: parcel_id_code
full: exit_state
partial: parcel_id_code where exit_state is null
Am I right that when you suggest just a single, joint index
(parcel_id_code,exit_state)
instead of
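Spelled out as DDL, Jeff's suggestion would look roughly like this (the index names are guesses for illustration; only the table and column names come from the thread):

```sql
-- Replace the single-column index and the partial index with one
-- composite index.
DROP INDEX tbl_tracker_parcel_id_code_idx;  -- full: parcel_id_code
DROP INDEX tbl_tracker_parcel_null_idx;     -- partial: WHERE exit_state IS NULL
CREATE INDEX tbl_tracker_parcel_exit_idx
    ON tbl_tracker (parcel_id_code, exit_state);
```

Queries filtering on parcel_id_code alone, or on parcel_id_code together with exit_state IS NULL, can both use the composite index.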
On 2012-12-27, at 12:10, Nikolas Everett wrote:
We just upgraded from 8.3 to 9.1 and we're seeing some performance problems.
When we EXPLAIN ANALYZE our queries, the EXPLAIN results claim that the queries
are reasonably fast, but the wall clock time is way, way longer. Does anyone
know why
On 27/12/12 17:21, François Beausoleil wrote:
On 2012-12-27, at 12:10, Nikolas Everett wrote:
We just upgraded from 8.3 to 9.1 and we're seeing some performance problems.
When we EXPLAIN ANALYZE our queries, the EXPLAIN results claim that the queries
are reasonably fast, but the wall clock
On 12/26/2012 11:03 PM, Jeff Janes wrote:
On Fri, Dec 14, 2012 at 10:40 AM, Andrew Dunstan
andrew.duns...@pgexperts.com wrote:
One of my clients has an odd problem. Every so often a backend will
suddenly become very slow. The odd thing is that once this has happened
it remains slowed down,
Thanks!
http://explain.depesz.com/s/yfs
Looks like we're running a load of about 6. The machines have two physical
CPUs hyperthreaded to 32 cores.
Interesting - the data is stored on nfs on a netapp. We don't seem to have
a ton of nfs traffic.
Also we've got shared memory set to 48 gigs
New news - the hot slave seems to be performing as expected with no long
pauses.
It looks like we're using an archive_timeout of 60 seconds and the default
checkpoint_timeout and checkpoint_completion_target. I didn't do any of the
research on this. It seems like we're asking postgres to clear all of
Actually that last paragraph doesn't make much sense. Please ignore it.
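For reference, the settings mentioned live in postgresql.conf; a sketch with the 9.1-era defaults noted (values illustrative):

```
# postgresql.conf
archive_timeout = 60                # force a WAL segment switch every 60 s
checkpoint_timeout = 5min           # default
checkpoint_completion_target = 0.5  # default
```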
On Thu, Dec 27, 2012 at 12:58 PM, Nikolas Everett nik9...@gmail.com wrote:
New news - the hot slave seems to be performing as expected with no long
pauses.
It looks like we're using an archive_timeout of 60 seconds and
On Thursday, December 27, 2012, Richard Neill wrote:
On 27/12/12 16:17, Jeff Janes wrote:
I still think your best bet is to get rid of the partial index and trade
the full one on (parcel_id_code) for one on (parcel_id_code,exit_state).
I think that will be much less fragile than
On Wednesday, December 26, 2012, Pavel Stehule wrote:
2012/12/27 Jeff Janes jeff.ja...@gmail.com:
More automated would be nice (i.e. one operation to make both the check
constraints and the trigger, so they can't get out of sync), but would not
necessarily mean faster.
[snip]
On Monday, December 24, 2012, Charles Gomes wrote:
By the way, I've just rewritten the code to target the partitions
individually and I've got almost a 4x improvement.
Shouldn't it be faster to process the trigger? I would understand if there
were no CPU left, but there is lots of CPU to
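Targeting the partitions individually means inserting straight into the child tables instead of going through the parent's routing trigger; roughly (all table and column names invented for illustration):

```sql
-- Through the parent: each row fires the routing trigger.
INSERT INTO measurements VALUES ('2012-12-24', 42);

-- Direct to the known child partition: no per-row trigger overhead.
INSERT INTO measurements_2012_12 VALUES ('2012-12-24', 42);
```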
Pavel,
I've been trying to port the work of Emmanuel
http://archives.postgresql.org/pgsql-hackers/2008-12/msg01221.php
His implementation is pretty straightforward: a simple trigger doing
constraint checks, with caching for bulk inserts.
So far that's what I got
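A minimal routing trigger of the shape being discussed, following the stock PostgreSQL 9.x partitioning docs (table and column names are assumptions; per the description above, Emmanuel's version adds caching for bulk inserts on top of this):

```sql
CREATE OR REPLACE FUNCTION measurements_insert_trigger()
RETURNS trigger AS $$
BEGIN
    IF NEW.logdate >= DATE '2012-12-01'
       AND NEW.logdate < DATE '2013-01-01' THEN
        INSERT INTO measurements_2012_12 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for logdate %', NEW.logdate;
    END IF;
    RETURN NULL;  -- row has been redirected into the child table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_measurements
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();
```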
* Jeff Janes (jeff.ja...@gmail.com) wrote:
If the main goal is to make it faster, I'd rather see all of plpgsql get
faster, rather than just a special case of partitioning triggers. For
example, right now a CASE expression statement with 100 branches is about
the same speed as an equivalent
Nikolas Everett nik9...@gmail.com writes:
We just upgraded from 8.3 to 9.1 and we're seeing some performance
problems. When we EXPLAIN ANALYZE our queries, the EXPLAIN results claim
that the queries are reasonably fast, but the wall clock time is way, way
longer. Does anyone know why this might
2012/12/27 Stephen Frost sfr...@snowman.net:
* Jeff Janes (jeff.ja...@gmail.com) wrote:
If the main goal is to make it faster, I'd rather see all of plpgsql get
faster, rather than just a special case of partitioning triggers. For
example, right now a CASE expression statement with 100
Sorry for the confusion around the queries. Both queries are causing
trouble. We've noticed that just EXPLAINING the very simple queries takes
forever.
After more digging it looks like this table has an inordinate number
of indices (10-ish). There are a whole bunch of conditional indices for other
One other thing - the query seems to get faster after the first time we
plan it. I'm not sure that this is the case, but I think it might be.
On Thu, Dec 27, 2012 at 2:28 PM, Nikolas Everett nik9...@gmail.com wrote:
Sorry for the confusion around the queries. Both queries are causing
Nikolas Everett nik9...@gmail.com writes:
After more digging it looks like this table has an inordinate number
of indices (10 ish).
10 doesn't sound like a lot.
There are a whole bunch of conditional indices for other
columns that we're not checking. The particular column that is causing us
The partial index is highly leveraged. If every tuple in the
table is updated once, that amounts to every tuple in the index
being updated 25,000 times.
How so? That sounds like O(n^2) behaviour.
If the table has 5 million rows while the index has 200 (active)
Hey guys,
I recently stumbled over a Linux scheduler setting that has outright
shocked me. So, after reading through this:
http://blog.tsunanet.net/2010/11/how-long-does-it-take-to-make-context.html
it became readily apparent we were hitting the same wall. I could do a
pgbench and increase
Nikolas Everett nik9...@gmail.com writes:
We straced the backend during the explain and it looked like the open
commands were taking several seconds each.
Kind of makes me wonder if you have a whole lot of tables (whole lot
in this context probably means tens of thousands) and are storing the
On Thu, Dec 27, 2012 at 4:33 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Nikolas Everett nik9...@gmail.com writes:
We straced the backend during the explain and it looked like the open
commands were taking several seconds each.
Kind of makes me wonder if you have a whole lot of tables (whole
Nikolas Everett nik9...@gmail.com writes:
On Thu, Dec 27, 2012 at 4:33 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Nikolas Everett nik9...@gmail.com writes:
We straced the backend during the explain and it looked like the open
commands were taking several seconds each.
Kind of makes me wonder if
It looks like it was a problem with NFS. We're not really sure what was
wrong with it but once we failed over to an iSCSI mount for the data
everything is running just fine.
On Thu, Dec 27, 2012 at 6:12 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Nikolas Everett nik9...@gmail.com writes:
On Thu,
On Thursday, December 27, 2012, Andrew Dunstan wrote:
On 12/26/2012 11:03 PM, Jeff Janes wrote:
Do you have a huge number of tables? Maybe over the course of a
long-lived connection, it touches enough tables to bloat the relcache /
syscache. I don't know how the autovac would be involved
On Thursday, December 20, 2012, Jeff Janes wrote:
On Thursday, December 20, 2012, Tom Lane wrote:
What I did to try to duplicate Richard's situation was to create a test
table in which all the exit_state values were NULL, then build the
index, then UPDATE all but a small random fraction of
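That test setup can be reproduced roughly like so (row counts from the thread; table and index names invented):

```sql
CREATE TABLE tbl_tracker (parcel_id_code int, exit_state int);
INSERT INTO tbl_tracker (parcel_id_code, exit_state)
    SELECT i, NULL FROM generate_series(1, 5000000) AS i;
CREATE INDEX ON tbl_tracker (parcel_id_code) WHERE exit_state IS NULL;
-- Move all but a small random fraction of rows out of the indexed
-- (exit_state IS NULL) set.
UPDATE tbl_tracker SET exit_state = 1 WHERE random() > 0.0001;
```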