Andrew,
> I'm running an INSERT INTO ... SELECT query with this join (one record
> added per record in join), 4 hours down and all I have to show for it is
> 100 recycled transaction logs.
>
> If it ever returns, I can post the Explain output.
How about giving us the query and the regular EXPLAIN output?
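For reference, plain EXPLAIN only plans a statement and returns immediately,
while EXPLAIN ANALYZE actually executes it; that is why the regular EXPLAIN
can be posted even while the 4-hour INSERT is still running. A minimal
sketch, reusing the column names that appear later in the thread but with a
hypothetical join target (the real second table was not shown):

    -- Plans the statement without executing it, so it returns at once.
    EXPLAIN
    SELECT t1.id, t2.md5, t2.url
    FROM referral_temp t2
    LEFT OUTER JOIN referral t1 ON t1.md5 = t2.md5;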
On Saturday 21 February 2004 16:18, John Siracusa wrote:
John,
> Next, thanks to my earlier thread, I clustered the table on the date
> column and then "SET STATISTICS" on the date column to be 100. That
> did the trick, and I stopped explicitly disabling seqscan.
100? Are you sure you don't mean
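A minimal sketch of the two steps John describes, assuming the table and
index names that appear later in the thread (mytable, mytable_date_idx) and
a column simply called date:

    -- Raise the per-column statistics target so the next ANALYZE builds
    -- a finer histogram and estimates date ranges more accurately.
    ALTER TABLE mytable ALTER COLUMN date SET STATISTICS 100;
    ANALYZE mytable;

    -- Physically reorder the table by the date index (PostgreSQL 7.x
    -- syntax), pushing the column's correlation back toward 1.0.
    CLUSTER mytable_date_idx ON mytable;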
On 2/22/04 2:05 PM, Josh Berkus wrote:
> On Saturday 21 February 2004 16:18, John Siracusa wrote:
>> Next, thanks to my earlier thread, I clustered the table on the date
>> column and then "SET STATISTICS" on the date column to be 100. That
>> did the trick, and I stopped explicitly disabling seqscan.
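"Explicitly disabling seqscan" refers to the session-level planner toggle; a
sketch, with an illustrative query:

    -- Makes sequential scans look prohibitively expensive, effectively
    -- forcing the planner onto the index for this session only.
    SET enable_seqscan = off;
    EXPLAIN SELECT * FROM mytable WHERE date >= '2004-02-01';
    SET enable_seqscan = on;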
On 2/22/04 5:06 PM, Tom Lane wrote:
> John Siracusa <[EMAIL PROTECTED]> writes:
>> I want to do something that will convince Postgres that using the date
>> index is, by far, the best plan when running my queries, even when the
>> date column correlation stat drops well below 1.0.
>
> Have you tried
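The correlation statistic under discussion can be inspected directly; a
quick check, with the same assumed table and column names:

    -- correlation runs from -1 to 1; near 1 means the physical row order
    -- matches the column's sort order, so a range index scan touches
    -- few, mostly-sequential pages.
    SELECT attname, correlation
    FROM pg_stats
    WHERE tablename = 'mytable' AND attname = 'date';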
Sean Shanny <[EMAIL PROTECTED]> writes:
> New results with the above changes: (Rather a huge improvement!!!)
> Thanks Scott. I will next attempt to make the cpu_* changes to see if
> it then picks the correct plan.
> explain analyze SELECT t1.id, t2.md5, t2.url from referral_temp t2 LEFT
> OUTER JOIN ...
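The cpu_* settings Sean refers to are the planner's per-row CPU cost knobs,
adjustable per session; the values below are only illustrative, not
recommendations from the thread:

    -- Estimated cost, in planner units, of processing one heap tuple,
    -- one index tuple, and one operator or function call.
    SET cpu_tuple_cost = 0.01;
    SET cpu_index_tuple_cost = 0.001;
    SET cpu_operator_cost = 0.0025;
    -- Then re-run the EXPLAIN ANALYZE above to see if the plan changes.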
John Siracusa <[EMAIL PROTECTED]> writes:
> I think the key is to get the planner to correctly ballpark the number of
> rows in the date range.
I thought it was. What you showed was
-> Index Scan using mytable_date_idx on mytable (cost=0.00..3071.70 rows=52155
width=23) (actual time=2.610..1688
On 2/22/04 6:40 PM, Tom Lane wrote:
> John Siracusa <[EMAIL PROTECTED]> writes:
>> I think the key is to get the planner to correctly ballpark the number of
>> rows in the date range.
>
> I thought it was. What you showed was
>
> -> Index Scan using mytable_date_idx on mytable (cost=0.00..3071.7
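In plan lines like the one quoted, rows=52155 is the planner's estimate and
the "actual ... rows=" figure (truncated here) is what the executor really
returned; comparing the two shows whether the date-range estimate is the
problem. For example, with the same assumed names:

    -- A large gap between estimated rows= and actual rows= points at
    -- the statistics rather than the cost parameters.
    EXPLAIN ANALYZE
    SELECT * FROM mytable
    WHERE date >= '2004-02-01' AND date < '2004-02-08';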
John,
> I think the key is to get the planner to correctly ballpark the number of
> rows in the date range. If it does, I can't imagine it ever deciding to
> read 1,000,000 rows instead of 1,000 with any sane "cost" setting. I'm
> assuming the defaults are sane :)
The default for random_page_cost is 4.
Tom,
We have the following setting for random page cost:
random_page_cost = 1    # units are one sequential page fetch cost
Any suggestions on what to bump it up to?
We are waiting to hear back from Apple on the speed issues; so far we
are not impressed with the hardware in helping in
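random_page_cost models how much more a random page fetch costs than a
sequential one; the shipped default is 4, so the setting of 1 above already
tells the planner random I/O is as cheap as sequential I/O, which strongly
favors index scans. It can be experimented with per session before touching
postgresql.conf:

    -- Try values between 1 and the default 4, re-planning each time.
    SET random_page_cost = 2;
    EXPLAIN ANALYZE SELECT * FROM mytable WHERE date >= '2004-02-01';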
John Siracusa <[EMAIL PROTECTED]> writes:
> Does the planner get estimates from both plans before deciding whether or
> not to use the one that references the date index?
The rowcount estimate is made prior to the plan cost estimate, much less
the plan selection. So you'd see the same number either way.
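This is easy to confirm: since the rows= estimate comes from the statistics
and the WHERE clause alone, toggling the plan choice leaves it unchanged. A
sketch with the same assumed names:

    SET enable_seqscan = off;
    EXPLAIN SELECT * FROM mytable
        WHERE date >= '2004-02-01';  -- index scan, some rows=N
    SET enable_seqscan = on;
    EXPLAIN SELECT * FROM mytable
        WHERE date >= '2004-02-01';  -- seq scan, the same rows=N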