to restart Pg. Once restarted we were able to do a VACUUM FULL and this
took care of the issue.
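For anyone hitting the same kind of bloat, the recovery we ran was along these lines (the table name here is just a placeholder, not the real one):

```sql
-- Rewrite the table and reclaim dead space (takes an exclusive lock).
VACUUM FULL VERBOSE mdc_products;
-- On 7.x, badly bloated indexes are best rebuilt explicitly afterwards.
REINDEX TABLE mdc_products;
-- Refresh planner statistics.
ANALYZE mdc_products;
```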
hth
Patrick Hatcher
Development Manager Analytics/MIO
Macys.com
Matteo Sgalaberni
We have size and color in the product table itself; they are really
attributes of the product. If you update the availability of the product
often, I would split the quantity out into a separate table so that you can
truncate and reload it as needed.
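A minimal sketch of the split I mean, with made-up table and column names:

```sql
-- Static attributes stay on the product table.
CREATE TABLE product (
    product_id  integer PRIMARY KEY,
    size        varchar(10),
    color       varchar(20)
);

-- Fast-changing availability lives in its own table, so it can be
-- truncated and reloaded without touching the product rows.
CREATE TABLE product_qty (
    product_id  integer REFERENCES product,
    quantity    integer
);

-- Daily refresh:
TRUNCATE product_qty;
COPY product_qty FROM '/tmp/qty.dat';
```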
Patrick Hatcher
'::date))
Total runtime: 753467.847 ms
Patrick Hatcher
Tom Lane
[EMAIL PROTECTED]
Thanks. No foreign keys, and I've been bitten by mismatched datatypes
before, so I checked that before sending out the message :)
Patrick Hatcher
Tom Lane
Hey there folks. I'm at a loss as to how to increase the speed of this
query. It's something I need to run each day, but I can't at the rate it
currently runs. Tables are updated once a day and are vacuum analyzed after each load.
select ddw_tran_key, r.price_type_id, t.price_type_id
from
# (same)
#cpu_index_tuple_cost = 0.001 # (same)
#cpu_operator_cost = 0.0025 # (same)
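If the question is whether the planner reacts to those cost settings, they can be tried per-session before touching postgresql.conf; a sketch (the SELECT list is the one from the query above, the rest of the query was not preserved):

```sql
SET cpu_index_tuple_cost = 0.001;
SET cpu_operator_cost = 0.0025;
EXPLAIN ANALYZE
SELECT ddw_tran_key, r.price_type_id, t.price_type_id
FROM ... ;  -- remainder of the query elided in the original
```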
Patrick Hatcher
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
I do mass inserts daily into PG. I drop all indexes except the primary key and then use the COPY FROM command. This usually takes less than 30 seconds; I spend more time waiting for the indexes to recreate.
Patrick Hatcher
Macys.Com
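The daily load pattern described above, sketched with hypothetical table and index names:

```sql
-- Drop the secondary indexes (the primary key stays).
DROP INDEX idx_prod_size;
DROP INDEX idx_prod_color;

-- Bulk load; COPY bypasses per-row INSERT overhead.
COPY mdc_products FROM '/data/products.dat';

-- Recreate the indexes -- usually the slowest step.
CREATE INDEX idx_prod_size  ON mdc_products (size);
CREATE INDEX idx_prod_color ON mdc_products (color);
```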
Thanks for the help.
I found the culprit: the user had created a function call within the
function (pm.pm_price_post_inc(prod.keyp_products)).
Once this was fixed, the time dropped dramatically.
Patrick Hatcher
Macys.Com
Legacy Integration Developer
415-422-1610 office
HatcherPT - AIM
Patrick
= 30                  # 0 is off, in seconds
#commit_delay = 0     # range 0-10, in microseconds
#commit_siblings = 5  # range 1-1000
Patrick Hatcher
Macys.Com
I upgraded to 7.4.3 this morning and did a VACUUM FULL ANALYZE on the
problem table, and now the indexes show the correct number of records.
Patrick Hatcher
Macys.Com
From: Josh Berkus [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
Date: 09/21/04 10:49 AM
To: Patrick Hatcher [EMAIL PROTECTED]
Cc: Treat [EMAIL PROTECTED]
To: Patrick Hatcher [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, September 20, 2004 11:12 PM
Subject: Re: [PERFORM] vacuum full max_fsm_pages question
On Tuesday 21 September 2004 00:01, Patrick Hatcher wrote:
Hello.
Couple of questions:
- Q1: Today I decided to do a VACUUM FULL VERBOSE ANALYZE on a large table
that has been giving me slow performance. And then I did it again. I
noticed that after each run the values in my indexes and estimated row
versions changed. What really got me wondering:
544679 pages are or will become empty, including 0 at the end of the table.
692980 pages containing 4433398408 free bytes are potential move
destinations.
CPU 29.55s/4.13u sec elapsed 107.82 sec.
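A rough way to read those numbers (my arithmetic, not from the original mail): 4433398408 free bytes across 692980 pages is about 6397 free bytes per 8192-byte page, so the table was mostly empty space, and max_fsm_pages needs to be at least the ~544679 reported empty/movable pages for plain VACUUM to keep track of them:

```sql
-- Average free space per page, from the VACUUM VERBOSE output above:
SELECT 4433398408::bigint / 692980 AS avg_free_bytes_per_page;  -- ~6397 of 8192
-- max_fsm_pages should exceed the ~544679 pages reported as empty;
-- on 7.4 it is set in postgresql.conf and requires a restart.
SHOW max_fsm_pages;
```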
TIA
Patrick Hatcher
Patrick Hatcher
shared_buffers = 2000       # min 16, at least max_connections*2, 8KB each
sort_mem = 12288            # min 64, size in KB
# - Free Space Map -
max_fsm_pages = 10          # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 1000   # min 100, ~50 bytes each
TIA
Patrick Hatcher
here's the URL:
http://techdocs.postgresql.org/techdocs/pgsqladventuresep2.php
Patrick Hatcher
Patrick
Do you have an index on ts.bytes? Josh had suggested this, and after I put
it on my summed fields, I saw a speed increase. I can't remember where the
article Josh wrote about index usage is, but maybe he'll chime in and
supply the URL for his article.
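If the index is missing, creating it is a one-liner (assuming ts is the table being summed over):

```sql
CREATE INDEX idx_ts_bytes ON ts (bytes);
ANALYZE ts;  -- refresh statistics so the planner considers the new index
```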
hth
Patrick Hatcher
, but the
larger the recordset, the slower the data is returned to the client. I
played around with the cache size on the driver and found that a value
between 100 and 200 provided good results.
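A driver cache size of 100-200 corresponds roughly to fetching rows in batches through a cursor; in plain SQL the equivalent looks like this (the query and cursor name are hypothetical):

```sql
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM mdc_products;
FETCH 200 FROM big_cur;  -- pull ~200 rows per round trip instead of all at once
-- repeat FETCH 200 until it returns no rows
CLOSE big_cur;
COMMIT;
```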
HTH
Patrick Hatcher