Just a question: what are you doing with the 20,000 result rows?
Those rows represent monthly sales data of one item.
They are used as follows:
1. Detailed monthly sales report. This report can be browsed on screen for
monthly sales and ordering analysis.
2. Total reports. In those reports,
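A minimal sketch of what those two report queries might look like; the table
and column names (sales, item_id, sale_date, qty, amount) are assumptions,
since the actual schema is not shown:

    -- 1. Detailed report: every sales row for one item in one month
    --    (this is the ~20,000-row result set discussed above)
    SELECT sale_date, qty, amount
    FROM sales
    WHERE item_id = 123
      AND sale_date >= DATE '2008-11-01'
      AND sale_date <  DATE '2008-12-01'
    ORDER BY sale_date;

    -- 2. Total report: one aggregated row per month instead of the detail rows
    SELECT date_trunc('month', sale_date) AS month,
           sum(amount) AS total_amount
    FROM sales
    WHERE item_id = 123
    GROUP BY 1
    ORDER BY 1;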
Andrus wrote:
> The query below seems to use indexes everywhere in the most optimal way.
> The dokumnr column is of type int.
>
> The speed of this query varies widely:
>
> In the live db, the fastest response I have got is 8 seconds.
> Re-running the same query after 10 seconds may take 60 seconds.
> Re-running it again af
Richard,
At a quick glance, the plans look the same to me. The overall costs are
certainly identical. That means that whatever is affecting the query times,
it isn't the query plan.
So - what other activity is happening on this machine? Either other
queries are taking up noticeable resources, or some
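One way to confirm that is to run the query under EXPLAIN ANALYZE during
both a fast and a slow occurrence and compare the actual times, since
estimated costs are identical whenever the plan is identical. A sketch, with
a hypothetical stand-in for the original query (dokumnr is the int column
mentioned above; the table name dok is an assumption):

    -- Run once when the query is fast and once when it is slow.
    -- The 'cost=' numbers come from the planner and will not change;
    -- 'actual time=' shows where each execution really spent its time.
    EXPLAIN ANALYZE
    SELECT * FROM dok WHERE dokumnr = 12345;

If the per-node actual times balloon while the plan stays the same, the
slowdown is coming from I/O or concurrent load, not from planning.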
Andrus wrote:
> Richard,
>
>> At a quick glance, the plans look the same to me. The overall costs are
>> certainly identical. That means that whatever is affecting the query times,
>> it isn't the query plan.
>>
>> So - what other activity is happening on this machine? Either other
>> queries are taking
OK, so vmstat says you are IO-bound, which seems logical if the same plan
has widely varying timings...
Let's look at the usual suspects:
- how many dead rows are in your tables? are your tables data, or bloat?
(check VACUUM VERBOSE, etc.; see the sketch after this list)
- what's the size of the dataset re
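A minimal way to check the dead-row question from the list above;
pg_stat_user_tables is the standard statistics view, and dok is an assumed
table name:

    -- Dead rows per table: a large n_dead_tup relative to n_live_tup
    -- means the table is carrying bloat and needs vacuuming.
    SELECT relname, n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC;

    -- Per-table detail, including pages VACUUM could not reclaim:
    VACUUM VERBOSE dok;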
Mark Roberts wrote:
On Tue, 2008-11-18 at 17:38 +0100, [EMAIL PROTECTED] wrote:
I bet there is no 'critical' length - this is just another case of index
scan vs. seqscan. The efficiency depends on the size of the table / row,
amount of data in the table, variability of the column used in the
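A minimal experiment that makes the index-scan vs. seqscan crossover
visible; the table and column names are hypothetical, and enable_seqscan is
a standard planner setting:

    -- First let the planner choose between the two plans on its own:
    EXPLAIN ANALYZE SELECT * FROM sales WHERE item_id = 123;

    -- Then discourage sequential scans (session-local) to time the
    -- index-scan alternative on the same data:
    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT * FROM sales WHERE item_id = 123;
    RESET enable_seqscan;

Which plan wins depends on how many rows match and how wide they are, which
is exactly the dependence on table size and column variability described
above.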