Probably my best solution is to find a better way to produce the
information, or cache it on the application side, as it doesn't
actually change that much across client sessions.
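
One way that "better way to produce the information" could look, sketched
with invented names (big_table, col_a, col_b are hypothetical, not from the
thread): keep the distinct pairs in a small side table and rebuild it on a
schedule, since the underlying data changes little between sessions.

    -- Hypothetical names; rebuild this from cron or similar.
    CREATE TABLE distinct_pairs_cache AS
        SELECT DISTINCT col_a, col_b FROM big_table;

    -- Periodic refresh:
    TRUNCATE distinct_pairs_cache;
    INSERT INTO distinct_pairs_cache
        SELECT DISTINCT col_a, col_b FROM big_table;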
Clustering it occurred to me - it would have to be done on a frequent
basis, as the contents of the table change constantly.
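
For reference, the clustering in question would be something like the
following (index and table names are assumed); CLUSTER has to be re-run
after enough churn, which is exactly the maintenance cost being weighed here:

    -- 7.x-era syntax; newer releases spell it: CLUSTER big_table USING big_table_idx
    CLUSTER big_table_idx ON big_table;
    ANALYZE big_table;  -- refresh planner statistics after the physical reorder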
Andrew Rawnsley <[EMAIL PROTECTED]> writes:
> I have a situation that is giving me small fits, and would like to see
> if anyone can shed any light on it.
In general, pulling 10% of a table *should* be faster as a seqscan than
an indexscan, except under the most extreme assumptions about clustering.
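
A standard way to check that claim on the actual data (table and column
names assumed here) is to time both plans by temporarily disabling
sequential scans for the session:

    EXPLAIN ANALYZE SELECT DISTINCT col_a, col_b FROM big_table;

    SET enable_seqscan = off;   -- force the planner toward the index
    EXPLAIN ANALYZE SELECT DISTINCT col_a, col_b FROM big_table;
    SET enable_seqscan = on;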
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] (Andrew Rawnsley) would
write:
> I would like, of course, for it to use the index, given that it
> takes 20-25% of the time. Fiddling with CPU_TUPLE_COST doesn't do
> anything until I exceed 0.5, which strikes me as a bit high (though
> please correct me if I am assuming too much...).
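
For anyone reproducing this, the knobs being fiddled with can be changed
per session; the values below are only illustrative of the experiment
described above, not recommendations, and the query names are assumed:

    SET cpu_tuple_cost = 0.5;    -- default is 0.01
    SET random_page_cost = 2;    -- default is 4
    EXPLAIN ANALYZE SELECT DISTINCT col_a, col_b FROM big_table;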
Low (1000). I'll fiddle with that. I just noticed that the machine only
has 512MB of RAM in it, and not 1GB. I must have raided it for some
other machine...
On Jan 11, 2004, at 10:50 PM, Dennis Bjorklund wrote:
> On Sun, 11 Jan 2004, Andrew Rawnsley wrote:
>> 20-25% of the time. Fiddling with CPU_TUPLE_COST doesn't do anything
>> until I exceed 0.5...
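
effective_cache_size is measured in 8 kB pages, so the value of 1000 above
tells the planner only about 8 MB of the data is likely to be cached. On a
512 MB machine something in this neighborhood would be more realistic (the
exact figure is a guess, not a recommendation):

    SET effective_cache_size = 32768;   -- 32768 * 8 kB = 256 MB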
On Sun, 11 Jan 2004, Andrew Rawnsley wrote:
> 20-25% of the time. Fiddling with CPU_TUPLE_COST doesn't do anything
> until I exceed 0.5, which strikes me as a bit high (though please
> correct me if I am assuming too much...). RANDOM_PAGE_COST seems to have
> no effect.
What about the effective cache size, is it set properly?
I have a situation that is giving me small fits, and would like to see
if anyone can shed any light on it.

I have a modest table (~1.4 million rows, and growing) that has a
variety of queries run against it. One is a very straightforward one -
pull a set of distinct rows out based on two columns.
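
As an aside on finding "a better way to produce the information": on
PostgreSQL releases of this vintage, SELECT DISTINCT is always planned as a
sort followed by Unique, while the equivalent GROUP BY form can use a
HashAggregate, which is often much faster on a large table. Table and
column names are assumed:

    -- Equivalent to: SELECT DISTINCT col_a, col_b FROM big_table;
    SELECT col_a, col_b FROM big_table GROUP BY col_a, col_b;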