> Thanks for your response. I tried doing what you suggested, so that table now
> has a primary key of 'CONSTRAINT data_area_pkey PRIMARY KEY (area_id, data_id);'
> and I've added the index 'CREATE INDEX data_area_data_id_index ON data_area
> USING btree (data_id);' unfortunately it h
Joe --
>
> From: Joe Van Dyk
>To: Greg Williamson
>Cc: "pgsql-performance@postgresql.org"
>Sent: Friday, April 5, 2013 7:56 PM
>Subject: Re: [PERFORM] slow joins?
>
>
>On Fri, Apr 5, 2013 at 6:54
-> Index Scan using index_line_items_on_product_id on
>>line_items li (cost=0.00..835.70 rows=279 width=8) (actual time=0.002..0.004
>>rows=2 loops=70)
>> Index Cond: (product_id = products.id)
>> -> Index Only Scan using purchased_items_li
mix of queries but
most are simple. Typically a few thousand queries a second to the readonly
boxes, about the same to a beefier read / write master.
This is a slightly old pgbouncer at that ... used in a fairly basic mode.
Greg Williamson
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
hope this isn't redundant.
Partitioning might work if you can create clusters that are bigger than 1 hour
-- having too many partitions doesn't help.
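As a sketch of the idea (declarative partitioning requires PostgreSQL 10+; on the versions current in this thread you would use table inheritance plus triggers instead; table and column names here are hypothetical, not from the thread):

```sql
-- Hypothetical time-partitioned table using declarative range partitioning.
CREATE TABLE events (
    event_time  timestamptz NOT NULL,
    payload     text
) PARTITION BY RANGE (event_time);

-- One partition per day, rather than per hour, keeps the partition count manageable.
CREATE TABLE events_2013_04_05 PARTITION OF events
    FOR VALUES FROM ('2013-04-05') TO ('2013-04-06');
```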
Greg Williamson
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
Thanks for this description -- we have index bloat problems on a massively active
(but small) database. This may help shed light on our problems.
Sorry for top-posting -- challenged email reader.
Greg W.
>
> From: Jeff Janes
>To: Strahinja Kustudić
>Cc: pgsql-per
Midge --
Sorry for top-quoting -- challenged mail reader.
Perhaps a difference in the stats estimates -- default_statistics_target ?
Can you show us a diff between the postgres config files for each instance ?
Maybe something there ...
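Something like the following would do it (the real conf paths differ per install; two tiny sample files are created here just so the command is runnable as-is):

```shell
# Substitute the actual postgresql.conf paths of the two instances.
printf 'default_statistics_target = 100\n' > instance1.conf
printf 'default_statistics_target = 1000\n' > instance2.conf
diff -u instance1.conf instance2.conf || true  # exit status 1 only means "files differ"
```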
Greg Williamson
>
>
> Total runtime: 7651803.073 ms
>
> But query performance don't change.
> Please help me.
Did you run an ANALYZE on the table after building the new indexes? The row
estimates seem to be off wildly; although that may be a symptom of something
else and not related, it is worth
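For reference, a minimal sketch (the table name is hypothetical):

```sql
-- Refresh the planner's statistics for the table after creating new indexes,
-- so row estimates are based on current data.
ANALYZE your_table;
```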
Joe wrote:
> I have a very busy system that takes about 9 million inserts per day and each
> record gets
> updated at least once after the insert (all for the one same table), there
> are other tables that
> get hit but not as severely. As suspected I am having a problem with table
> bloat.
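One common mitigation for this insert-then-update pattern (my sketch, not something suggested in the thread) is to leave free space in each heap page so the follow-up update can be a HOT update, which avoids index churn and limits bloat:

```sql
-- Hypothetical table name; fillfactor 70 leaves ~30% of each page free
-- so the post-insert UPDATE can usually stay on the same page (HOT update).
ALTER TABLE busy_table SET (fillfactor = 70);
```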
?
Are the timing results consistent over several runs ? It is possible that
caching effects are entering into the time results.
Greg Williamson
- Original Message
From: Jesper Krogh
To: pgsql-performance@postgresql.org
Sent: Fri, January 1, 2010 3:48:43 AM
Subject: [PERFORM] Message q
There may be better ways of
doing that depending on what version you are using and what your maintenance
window looks like.
HTH,
Greg W.
From: Thom Brown
To: Richard Neill
Cc: Greg Williamson ; pgsql-performance@postgresql.org
Sent: Fri, November 20, 2009 4:13:
to help.
This may be the result of caching of the desired rows, either by PostgreSQL or
by your operating system. The rollback wouldn't affect this -- the rows are
already in memory and not on disk waiting to be grabbed -- much faster on all
subsequent queries.
HTH,
Greg Williamson
Jared --
Forgive the top-posting -- a challenged reader.
I see this in the 8.4 analyze:
Merge Cond: (cli.clientid = dv118488y0.clientid)
Join Filter: ((dv118488y0.variableid = v118488y0.variableid) AND
(dv118488y0.cycleid = c1.cycleid) AND (dv118488y0.unitid = u.uni
to see what the planner
says. Put this in a transaction and roll it back if you want to leave
the data unchanged, e.g.
BEGIN;
EXPLAIN ANALYZE DELETE FROM foo WHERE pk = 1234; -- or whatever values
you'd be using
ROLLBACK;
HTH,
Greg Williamson
Senior DBA
GlobeXplorer LLC, a Digit