On Fri, 11 Jun 2010, Kenneth Marshall wrote:
> If you check the archives, you will see that this is not easy
> to do because of the effects of caching.
Indeed. If you were to take the value completely at face value, a modern
hard drive is capable of transferring sequential pages somewhere between
If you check the archives, you will see that this is not easy
to do because of the effects of caching. The default values
were actually chosen to be a good compromise between fully
cached in RAM and totally un-cached. The actual best value
depends on the size of your database and the size of its working set.
Is there a way to determine a reasonable value for random_page_cost
via some testing with OS commands? We have several postgres databases,
and determining this value on a case-by-case basis may not be viable
(we may have to go with the defaults).
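Given the caching caveat above, one low-risk way to experiment is to flip the setting per-session and compare plans directly in psql rather than benchmarking at the OS level. This is a generic sketch; the table and column names are placeholders, not from this thread:

```sql
-- Session-local experiment: nothing is written to postgresql.conf.
SET random_page_cost = 2;
EXPLAIN ANALYZE SELECT * FROM my_table WHERE my_indexed_col = 42;  -- placeholder query

SET random_page_cost = 4;
EXPLAIN ANALYZE SELECT * FROM my_table WHERE my_indexed_col = 42;

RESET random_page_cost;  -- back to the server default
```

Run each EXPLAIN ANALYZE a couple of times so both settings are compared against an equally warm cache.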
On Fri, Jun 11, 2010 at 5:44 AM, Kenneth Marshall wrote:
Hi Anj,
That is an indication that your system was modeled less accurately
with random_page_cost=2, which means that the planner
will assume that random I/O is cheaper than it really is and will
choose plans based on that model. If this is not the case,
the plan chosen will almost certainly be slower for
I changed random_page_cost to 4 (it was 2 earlier) and the performance issue is gone.
I am not clear why a random_page_cost of 2 on really fast disks would perform badly.
Thank you for all your help and time.
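Since several databases are involved here, a value that tests out better can be pinned per-database instead of server-wide. A minimal sketch; `mydb` is a placeholder name:

```sql
-- Persist the tested value for one database only:
ALTER DATABASE mydb SET random_page_cost = 4;
-- New sessions connecting to mydb pick it up; other databases
-- keep the server-wide value from postgresql.conf.
```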
On Thu, Jun 10, 2010 at 8:32 AM, Anj Adu wrote:
> Attached
>
> Thank you
>
>
> On Thu, Jun 10, 2010 at 6:28 AM, Robert Haas wrote:
On Thu, Jun 10, 2010 at 12:58 PM, Anj Adu wrote:
> You are right, the word "zone" was replaced by "area" (my bad).
>
> everything else is as is.
>
> Apologies for the confusion.
Well, two different people have asked you for the table and index
definitions now, and you haven't provided them...
You are right, the word "zone" was replaced by "area" (my bad).
everything else is as is.
Apologies for the confusion.
On Thu, Jun 10, 2010 at 9:42 AM, Robert Haas wrote:
> On Thu, Jun 10, 2010 at 11:32 AM, Anj Adu wrote:
>> Attached
>
> Hmm. Well, I'm not quite sure what's going on here, but
On Thu, Jun 10, 2010 at 11:32 AM, Anj Adu wrote:
> Attached
Hmm. Well, I'm not quite sure what's going on here, but I think you
must be using a modified version of PostgreSQL, because, as Tom
pointed out upthread, we don't have a data type called "timestamp with
time area". It would be called "timestamp with time zone".
Attached
Thank you
On Thu, Jun 10, 2010 at 6:28 AM, Robert Haas wrote:
> On Wed, Jun 9, 2010 at 11:17 PM, Anj Adu wrote:
> The plan is unaltered. There is a separate index on theDate as well
>> as one on node_id
>>
>> I have not specifically disabled sequential scans.
>
> Please do "SHOW ALL" and attach the results as a text file.
On Wed, Jun 9, 2010 at 11:17 PM, Anj Adu wrote:
> The plan is unaltered. There is a separate index on theDate as well
> as one on node_id
>
> I have not specifically disabled sequential scans.
Please do "SHOW ALL" and attach the results as a text file.
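One quick way to capture that output as a text file (a common approach; `mydb` is a placeholder):

```sql
-- From within psql, redirect query output to a file:
\o settings.txt
SHOW ALL;
\o
-- Or from a shell prompt:
--   psql -d mydb -c "SHOW ALL" > settings.txt
```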
> This query performs much better on 8.1.9
The plan is unaltered. There is a separate index on theDate as well
as one on node_id
I have not specifically disabled sequential scans.
This query performs much better on 8.1.9 on a similarly sized
table (although random_page_cost=4 on 8.1.9 and 2 on 8.4.0).
On Wed, Jun 9, 2010 at 7:55 PM,
Robert Haas writes:
> On Thu, Jun 3, 2010 at 4:37 PM, Anj Adu wrote:
>> Link to plan
>>
>> http://explain.depesz.com/s/kHa
> Your problem is likely related to the line that's showing up in red:
> Index Scan using dev4_act_dy_fact_2010_05_t3_thedate on
> dev4_act_dy_fact_2010_05_t3 a (cost=0.00..94041.89 rows=204276 width=60)
On Thu, Jun 3, 2010 at 4:37 PM, Anj Adu wrote:
> Link to plan
>
> http://explain.depesz.com/s/kHa
Your problem is likely related to the line that's showing up in red:
Index Scan using dev4_act_dy_fact_2010_05_t3_thedate on
dev4_act_dy_fact_2010_05_t3 a (cost=0.00..94041.89 rows=204276
width=60)
Link to plan
http://explain.depesz.com/s/kHa
On Thu, Jun 3, 2010 at 11:43 AM, Andy Colson wrote:
> On 6/3/2010 12:47 PM, Anj Adu wrote:
>>
>> I can't seem to pinpoint why this query is slow. No full table scans
>> are being done. The hash join is taking maximum time. The table
>> dev4_act_action has only 3 rows.
On 6/3/2010 12:47 PM, Anj Adu wrote:
I can't seem to pinpoint why this query is slow. No full table scans
are being done. The hash join is taking maximum time. The table
dev4_act_action has only 3 rows.
Box is a 2-CPU quad-core Intel 5430 with 32G RAM... Postgres 8.4.0
1G work_mem
20G effective_cache_size
I can't seem to pinpoint why this query is slow. No full table scans
are being done. The hash join is taking maximum time. The table
dev4_act_action has only 3 rows.
Box is a 2-CPU quad-core Intel 5430 with 32G RAM... Postgres 8.4.0
1G work_mem
20G effective_cache_size
random_page_cost=1
default_statistics_target
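The settings quoted above can also be pulled straight from the server, which avoids the transcription gaps in this preview. This is a standard query against the pg_settings view, not something from the thread:

```sql
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('work_mem', 'effective_cache_size',
               'random_page_cost', 'default_statistics_target');
```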