Hello,
I didn't try it, but the following should be slightly faster:
COUNT( CASE WHEN field = x AND field2 = y THEN true END)
instead of
SUM( CASE WHEN field = x AND field2 = y THEN 1 ELSE 0 END)
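The reasoning: COUNT() skips NULLs, and a CASE with no ELSE branch yields NULL for non-matching rows, so the COUNT form avoids adding a 0 for every row that fails the test. A minimal sketch (table and column names are illustrative, not from the thread):

```sql
-- COUNT ignores NULLs: rows failing the CASE produce NULL and are skipped.
SELECT COUNT(CASE WHEN status = 'ok' AND region = 'eu' THEN true END) AS matches
FROM   events;

-- Equivalent SUM form, which must add a 0 for every non-matching row:
SELECT SUM(CASE WHEN status = 'ok' AND region = 'eu' THEN 1 ELSE 0 END) AS matches
FROM   events;
```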
HTH,
Marc Mamin
From: pgsql-performance-ow...@postgresql.org
Hi,
On Fri, 23 Oct 2009 16:56:36 +0100, Grzegorz Jaśkiewicz
gryz...@gmail.com wrote:
On Fri, Oct 23, 2009 at 4:49 PM, Scott Mead
scott.li...@enterprisedb.comwrote:
Do you not have an index on last_snapshot.domain_id?
that, and also try rewriting a query as JOIN. There might be
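The two suggestions above can be sketched as follows; the thread's actual query is not shown in full, so the surrounding query shape here is an assumption:

```sql
-- 1) Index the join/filter column mentioned above:
CREATE INDEX last_snapshot_domain_id_idx ON last_snapshot (domain_id);

-- 2) An IN (...) subquery such as:
--      SELECT d.name FROM domains d
--      WHERE d.id IN (SELECT domain_id FROM last_snapshot);
--    can be rewritten as an explicit join, which sometimes plans better:
SELECT DISTINCT d.name
FROM   domains d
JOIN   last_snapshot s ON s.domain_id = d.id;
```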
On Mon, Oct 26, 2009 at 6:05 AM, Michal J. Kubski michal.kub...@cdt.pl wrote:
On Fri, 23 Oct 2009 16:56:36 +0100, Grzegorz Jaśkiewicz
gryz...@gmail.com wrote:
On Fri, Oct 23, 2009 at 4:49 PM, Scott Mead
scott.li...@enterprisedb.comwrote:
Do you not have an index on
Tom Lane wrote:
Jesper Krogh jes...@krogh.cc writes:
Is it possible to manually set the cost for the @@ operator?
You want to set the cost for the underlying function.
alter function ts_match_vq(tsvector,tsquery) cost 500
seems to change my test-queries in a very positive way (e.g. resolve
Jesper Krogh jes...@krogh.cc writes:
According to the documentation the default cost is 1 for built-in
functions and 100 for others; is this true for the ts-stuff also?
Yeah. There was some recent discussion of pushing up the default cost
for some of these obviously-not-so-cheap functions, but
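The cost knob being discussed lives in pg_proc.procost and is changed with ALTER FUNCTION, as in Tom's example above; a short sketch of inspecting and raising it:

```sql
-- Check the planner's current per-call cost estimate for the function:
SELECT proname, procost
FROM   pg_proc
WHERE  proname = 'ts_match_vq';

-- Raise it so the planner prefers plans that evaluate @@ fewer times:
ALTER FUNCTION ts_match_vq(tsvector, tsquery) COST 500;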
Michal J. Kubski michal.kub...@cdt.pl writes:
[ function that creates a bunch of temporary tables and immediately
joins them ]
It'd probably be a good idea to insert an ANALYZE on the temp tables
after you fill them. The way you've got this set up, there is no chance
of auto-analyze correcting
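Autovacuum never sees session temp tables, so statistics must be gathered by hand. A hypothetical sketch of the pattern Tom describes (function and table names invented for illustration):

```sql
-- Sketch only: fill the temp tables, ANALYZE them, then join.
CREATE OR REPLACE FUNCTION build_report() RETURNS void AS $$
BEGIN
    CREATE TEMP TABLE tmp_a ON COMMIT DROP AS
        SELECT domain_id, max(taken) AS last_seen
        FROM   snapshots GROUP BY domain_id;
    CREATE TEMP TABLE tmp_b ON COMMIT DROP AS
        SELECT id, name FROM domains WHERE active;

    ANALYZE tmp_a;   -- give the planner real row counts and statistics
    ANALYZE tmp_b;

    INSERT INTO report
    SELECT b.name, a.last_seen
    FROM   tmp_a a JOIN tmp_b b ON b.id = a.domain_id;
END;
$$ LANGUAGE plpgsql;
```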
Try to force a specific plan, like this:
SELECT field, field2 ...
FROM table1
WHERE field3 = 'xxx'
AND field4 = 'yyy'
AND field5 = 'zzz'
so, in that example, I need the planner to use my field4 index, but the
planner insists on using the field5 index, so I rewrite the query like this:
SELECT field,
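The message is truncated before the rewritten query, so the following is only a sketch of the usual trick for this situation: make the field5 predicate non-indexable (here by concatenating an empty string, assuming field5 is text) so the planner falls back to the field4 index:

```sql
SELECT field, field2
FROM   table1
WHERE  field3 = 'xxx'
  AND  field4 = 'yyy'
  AND  field5 || '' = 'zzz';   -- expression defeats the field5 index
```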
Hi.
I'm currently trying to figure out why the tsearch performance seems to
vary a lot between different query plans. I have created a sample dataset
that sort of resembles the data I have to work on.
The script that builds the dataset is at:
http://krogh.cc/~jesper/build-test.pl
and
On Mon, 2009-10-26 at 21:02 +0100, Jesper Krogh wrote:
Test system.. average desktop, 1 SATA drive and 1.5GB memory with pg 8.4.1.
The dataset consists of randomized words, but .. all records contain
commonterm, around 80% contain commonterm80, and so on..
my $rand = rand();
Craig Ringer wrote:
On Mon, 2009-10-26 at 21:02 +0100, Jesper Krogh wrote:
Test system.. average desktop, 1 SATA drive and 1.5GB memory with pg 8.4.1.
The dataset consists of randomized words, but .. all records contain
commonterm, around 80% contain commonterm80, and so on..
my
On Tue, 2009-10-27 at 06:08 +0100, Jesper Krogh wrote:
You should probably re-generate your random value for each call rather
than store it. Currently, every document with commonterm20 is guaranteed
to also have commonterm40, commonterm60, etc, which probably isn't very
realistic, and
Craig Ringer wrote:
On Tue, 2009-10-27 at 06:08 +0100, Jesper Krogh wrote:
You should probably re-generate your random value for each call rather
than store it. Currently, every document with commonterm20 is guaranteed
to also have commonterm40, commonterm60, etc, which probably isn't very
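Craig's correction can be sketched directly in SQL for anyone generating comparable test data server-side (table name invented; the original uses a Perl script): draw a fresh random() per term, so containing commonterm20 does not imply containing commonterm40 and so on.

```sql
-- Each term is included independently with its own probability.
CREATE TABLE testdocs AS
SELECT i AS id,
       'commonterm'
       || CASE WHEN random() < 0.80 THEN ' commonterm80' ELSE '' END
       || CASE WHEN random() < 0.60 THEN ' commonterm60' ELSE '' END
       || CASE WHEN random() < 0.40 THEN ' commonterm40' ELSE '' END
       || CASE WHEN random() < 0.20 THEN ' commonterm20' ELSE '' END
       AS body
FROM   generate_series(1, 100000) AS i;
```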