Andy Colson wrote:
Oh, I knew I'd seen index usage stats someplace.
Give this a run:
select * from pg_stat_user_indexes where relname = 'SuperBigTable';
http://www.postgresql.org/docs/current/static/monitoring-stats.html
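A fuller version of that check might look like the following sketch; the table name is just the one from the thread, and note that unquoted identifiers fold to lowercase in PostgreSQL, so a table created without quotes would be stored as 'superbigtable':

```sql
-- idx_scan counts how many times each index on the table was scanned;
-- an index with idx_scan = 0 since the last stats reset is a candidate
-- for dropping. Table name is the one mentioned in the thread.
SELECT indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE relname = 'superbigtable';
```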
-Andy
Kevin Traster ktras...@freshgrillfoods.com writes:
The query plan and estimates are exactly the same, except desc has index
scan backwards instead of index scan for changes_shareschange.
Yet the actual runtime is 357x slower for the
ascending version instead of
Hi all,
We have a 4-columned table that is also split up into a TOAST table, where the
TOASTed entries are ~35KB each.
The table contains ~10K records.
The table is updated at a rate of ~100 updates a minute.
During our testing we see that the table size increases substantially. When
looking at
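To quantify that growth, the heap, TOAST, and dead-tuple figures can be sampled between test runs; a minimal sketch, where 'mytable' is a placeholder name:

```sql
-- pg_table_size includes the TOAST table; pg_total_relation_size also
-- adds indexes. Comparing these before and after a test run shows the
-- bloat, and n_dead_tup shows how far behind autovacuum is.
SELECT pg_size_pretty(pg_table_size('mytable'))          AS heap_plus_toast,
       pg_size_pretty(pg_total_relation_size('mytable')) AS total,
       n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'mytable';
```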
Ofer Israeli of...@checkpoint.com writes:
During our testing we see that the table size increases substantially. When
looking at the autovacuum log, set with default configuration, it seems that
it ran for around 60 seconds (see below and note that this was a 1-minute
test, i.e. only 100
Typo: Work_mem = 32 MB
The definition for both column and index:
shareschange | numeric |
changes_shareschange btree (shareschange)
Index created using: CREATE INDEX changes_shareschange ON changes(shareschange);
The entire table is created nightly (and analyzed afterwards),
Tom Lane wrote:
Ofer Israeli of...@checkpoint.com writes:
During our testing we see that the table size increases
substantially. When looking at the autovacuum log, set with default
configuration, it seems that it ran for around 60 seconds (see below
and note that this was a 1-minute test,
On Sat, Jan 28, 2012 at 11:20 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
Update: The main stored function in question and all of its
sub-functions were recoded as new pure SQL functions.
I then stub-tested the sub-functions, SQL vs. plpgsql.
Here were the results for new SQL
On Wed, Feb 8, 2012 at 1:58 PM, Kevin Traster
ktras...@freshgrillfoods.com wrote:
Typo: Work_mem = 32 MB
The definition for both column and index:
shareschange | numeric |
changes_shareschange btree (shareschange)
Index created using: CREATE INDEX changes_shareschange ON
This is not a problem with dead rows; the index is not really
satisfying your query, and the database has to look through an
indeterminate number of rows until the 'limit 15' is satisfied. Yeah,
backwards scans are slower, especially for disk bound scans but you
also have to consider how
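Since the backward scan was the slow case here, one workaround sometimes suggested for a descending ORDER BY ... LIMIT query is an explicit descending index, so the planner can use a forward scan instead; a sketch using the names from the thread:

```sql
-- With this index, "ORDER BY shareschange DESC LIMIT 15" can be
-- satisfied by a forward index scan rather than a backward one.
CREATE INDEX changes_shareschange_desc
    ON changes (shareschange DESC);
```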
Kevin Traster ktras...@freshgrillfoods.com wrote:
I have indexes also on activity and mfiled (both btree) - wouldn't
the database use them? - Kevin
It will use them if they are part of the plan which had the lowest
cost when it compared the costs of all possible plans.
You haven't really
On 2/7/12 4:59 PM, Peter van Hardenberg wrote:
Per the thread from last month, I've updated the default
random_page_cost on Heroku Postgres to reduce the expected cost of a
random_page on all new databases.
This is because Heroku uses AWS storage, which has fast seeks but poor
throughput
On Wed, Feb 8, 2012 at 5:50 PM, Josh Berkus j...@agliodbs.com wrote:
On 2/7/12 4:59 PM, Peter van Hardenberg wrote:
Per the thread from last month, I've updated the default
random_page_cost on Heroku Postgres to reduce the expected cost of a
random_page on all new databases.
This is because
Having read the thread, I don't really see how I could study what a
more principled value would be.
That said, I have access to a very large fleet in which I can collect
data so I'm all ears for suggestions about how to measure and would
gladly share the results with the list.
Peter
On Wed,
On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg p...@pvh.ca wrote:
Having read the thread, I don't really see how I could study what a
more principled value would be.
Agreed. Just pointing out more research needs to be done.
That said, I have access to a very large fleet in which I can
On 08/02/12 21:15, Peter van Hardenberg wrote:
Having read the thread, I don't really see how I could study what a
more principled value would be.
That said, I have access to a very large fleet in which I can collect
data so I'm all ears for suggestions about how to measure and would
gladly
On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg p...@pvh.ca wrote:
That said, I have access to a very large fleet in which I can collect
data so I'm all ears for suggestions about how to measure and would
gladly
On Wed, Feb 8, 2012 at 6:47 PM, Peter van Hardenberg p...@pvh.ca wrote:
On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg p...@pvh.ca wrote:
That said, I have access to a very large fleet in which I can collect
On Wed, Feb 8, 2012 at 7:54 PM, Peter van Hardenberg p...@pvh.ca wrote:
On Wed, Feb 8, 2012 at 6:47 PM, Peter van Hardenberg p...@pvh.ca wrote:
On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe scott.marl...@gmail.com
wrote:
On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg p...@pvh.ca wrote:
You can execute several queries with the three different values provided
by Scott and Josh.
- SET random_page_cost = 2.0
First execution of the queries with EXPLAIN ANALYZE
- SET random_page_cost = 1.4
Second execution of the queries
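The procedure described above might look like the following psql session sketch; the query itself is a placeholder for whatever workload is being measured, and SET only affects the current session:

```sql
-- Compare plans and timings under two cost assumptions.
SET random_page_cost = 2.0;
EXPLAIN ANALYZE SELECT ...;   -- first run, placeholder query

SET random_page_cost = 1.4;
EXPLAIN ANALYZE SELECT ...;   -- second run, same query
```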
On 09/02/12 00:09, Greg Sabino Mullane wrote:
You can execute several queries with the three different values provided
by Scott and Josh.
- SET random_page_cost = 2.0
First execution of the queries with EXPLAIN ANALYZE
- SET random_page_cost