From: sarlav kumar [EMAIL PROTECTED]

Hi Tom,

Thanks for the help.
[Tom:]
The major issue seems to be in the sub-selects:

  ->  Seq Scan on merchant_purchase mp  (cost=0.00..95.39 rows=44 width=4) (actual time=2.37..2.58 rows=6 loops=619)
        Filter: (merchant_id = $0)

where the estimated row count is a factor of 7 too high. If the estimated row [...]

You might get some results from increasing the
statistics target for merchant_purchase.merchant_id.
Do I have to use vacuum analyze to update the statistics? If so, I have
already tried that and it doesn't seem to help.
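Raising the statistics target is a separate step from VACUUM ANALYZE: ANALYZE only re-samples at the column's current target. A minimal sketch of what increasing the target looks like, assuming a target of 100 (the value is a guess to tune by experiment):

    -- raise the number of samples kept for this column
    ALTER TABLE merchant_purchase ALTER COLUMN merchant_id SET STATISTICS 100;

    -- re-collect statistics so the new target takes effect
    ANALYZE merchant_purchase;

After this, re-running EXPLAIN ANALYZE should show whether the rows estimate for the sub-select moves closer to the actual 6.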
Sarlav,

[Sarlav:]
I am sorry, I am not aware of what random_page_cost is, as I am new to Postgres. What does it signify and how do I reduce random_page_cost?

It's a parameter in your postgresql.conf file. After you test it, you will want to change it there and reload the server (pg_ctl reload).
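A minimal way to test it before touching postgresql.conf, assuming a trial value of 2.0 (the number is only an illustration, not a recommendation):

    -- affects the current session only
    SET random_page_cost = 2.0;

    -- re-run the problem query and compare the plan and timings
    EXPLAIN ANALYZE SELECT 1;  -- substitute the real query here

If the new value tests well, set random_page_cost = 2.0 in postgresql.conf and run pg_ctl reload as described above.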
Hi Josh,
Can you tell me in what way it affects performance? And how do I decide what value to set random_page_cost to? Does it depend on any other factors?
Thanks,
Saranya
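For background on the question: random_page_cost is the planner's estimate of how expensive a nonsequential (random) page fetch is relative to a sequential one; the default is 4.0. Lowering it makes index scans look cheaper, so the planner chooses them more readily; reasonable values depend mostly on how much of the data is cached and how fast the disks handle random I/O. A rough illustration, using a hypothetical accounts table with an index on id:

    SET random_page_cost = 4.0;
    EXPLAIN SELECT * FROM accounts WHERE id < 1000;
    -- at the default, a seq scan may be costed cheaper

    SET random_page_cost = 1.5;
    EXPLAIN SELECT * FROM accounts WHERE id < 1000;
    -- the index scan's estimated cost drops, so it is more likely chosen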
Hi All,
I am new to Postgres.
I have a query which does not use an index scan unless I force Postgres to use it. I don't want to force Postgres unless there is no way of optimizing this query.

The query:
select m.company_name,m.approved,cu.account_no,mbt.business_name,cda.country,
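For what it's worth, "forcing" an index scan is usually done by disabling sequential scans for the session; a diagnostic sketch only, not something to leave on in production:

    SET enable_seqscan = off;  -- planner avoids seq scans wherever an index is usable
    EXPLAIN ANALYZE SELECT 1;  -- substitute the query above
    SET enable_seqscan = on;   -- restore the default afterwards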
sarlav kumar [EMAIL PROTECTED] writes:
I have a query which does not use an index scan unless I force Postgres to
use it. I don't want to force Postgres unless there is no way of optimizing
this query.
The major issue seems to be in the sub-selects:

  ->  Seq Scan on merchant_purchase mp  (cost=0.00..95.39 rows=44 width=4) (actual time=2.37..2.58 rows=6 loops=619)
        Filter: (merchant_id = $0)

where the estimated row count is a factor of 7 too high. If the estimated row [...]
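One way to see what the planner currently believes about merchant_id (pg_stats is the standard statistics view), before and after raising the statistics target:

    SELECT n_distinct, null_frac, most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'merchant_purchase'
      AND attname = 'merchant_id';
    -- a misleading n_distinct or most-common-values list is what produces
    -- the rows=44 estimate against the actual rows=6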