On Mon, 8 Mar 2004, Chris Smith wrote:
> assetid| integer | not null default 0
> assetid| character varying(255) | not null default '0'
The types above do not match, and these are the columns you use for the
join.
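For example, something along these lines (the table names are made up
here; only assetid comes from the original schema):

-- quick fix: cast the varchar key so both join keys are integer
-- and an index on the integer column can actually be used
select *
from asset a
join asset_link l on a.assetid = cast(l.assetid as integer);

-- the cleaner fix is to convert the varchar column to integer for
-- good, so no cast is needed in every query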
--
/Dennis Björklund
--
Hi all,

I've got what should be a relatively simple join between two tables
that is taking forever and I can't work out why.

Version 7.3.4RH. It can't be upgraded because the system is kept in
sync with RedHat Enterprise (using up2date). Not my system, otherwise
I'd do
On Sun, 7 Mar 2004, David Teran wrote:
> we need to optimize / speed up a simple select:
>
> explain analyze select
> ((t0.int_value-t1.int_value)*(t0.int_value-t1.int_value))
> from job_property t0, job_property t1
> where t0.id_job_profile = 5
> and t1.id_job_profile = 6
> and t1.id_job_attribute = t0.id_job_attribute
> and t1.int_value < t0.int_value;
explain analyze select
((t0.int_value-t1.int_value)*(t0.int_value-t1.int_value))
from job_property t0, job_property t1
where t0.id_job_profile = 5
and t1.id_job_profile = 6
and t1.id_job_attribute = t0.id_job_attribute
and t1.int_value < t0.int_value;
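One thing worth trying here (my sketch, not something stated elsewhere
in this thread) is a composite index covering the filter column and the
join column, so each side of the self-join can become an index scan:

-- composite index on (profile, attribute); re-analyze afterwards
create index job_property_profile_attr_idx
    on job_property (id_job_profile, id_job_attribute);
analyze job_property;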
Don't bother with a C function; use an SQL function.
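E.g. something like this (the function name is mine):

-- plain SQL function, 7.x string-literal syntax; far easier to
-- maintain than C, and cheap for a simple arithmetic expression
create function sq_diff(integer, integer) returns integer as
    'select ($1 - $2) * ($1 - $2)'
language sql immutable;

-- then: select sq_diff(t0.int_value, t1.int_value) from ...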
Steven Butler wrote:
I've recently converted a database to use bigint for the indices. Suddenly
simple queries like
select * from new_test_result where parent_id = 2
are doing full table scans instead of using the index.
This is fixed in CVS HEAD. In the meantime, you can enclose the
integer literal in quotes.
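That is:

-- quoting the constant leaves its type unknown until the planner
-- resolves it, so it becomes bigint and matches the index
select * from new_test_result where parent_id = '2';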
Hi,
I've recently converted a database to use bigint for the indices. Suddenly
simple queries like
select * from new_test_result where parent_id = 2
are doing full table scans instead of using the index. The table has over 4
million rows, of which only 30 or so would be selected by the query.
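(Until the planner fix mentioned above ships, the other standard
workaround is to cast the constant yourself; a sketch:)

-- pre-8.0 planners type a bare 2 as int4 and will not use a
-- bigint index for int8 = int4; an explicit cast avoids that
explain analyze
select * from new_test_result where parent_id = 2::bigint;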
> I have run vacuum analyze before executing the statements. I wonder now
> if there is any chance to speed this up.
Is this an active table for writes? You may want to take a look at
CLUSTER. In some circumstances, it can take an order of magnitude off
the query time by allowing fewer pages to be read.
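For example (7.x syntax; the table and index names are only
placeholders):

-- CLUSTER rewrites the table in index order so rows fetched
-- together sit on few pages; newer releases spell this
-- CLUSTER tablename USING indexname
cluster job_property_profile_attr_idx on job_property;
analyze job_property;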
On Sat, 6 Mar 2004, Andrew Sullivan wrote:
> > places is to ensure that the column can be indexed. Postgres, it seems,
> > refuses to insert a string that is longer than some value into an
> > indexed column, and I'll rather have such errors flagged while inserting
>
> Care to provide some detail?
Hi,
we need to optimize / speed up a simple select:
explain analyze select
((t0.int_value-t1.int_value)*(t0.int_value-t1.int_value))
from job_property t0, job_property t1
where t0.id_job_profile = 5
and t1.id_job_profile = 6
and t1.id_job_attribute = t0.id_job_attribute
and t1.int_value < t0.int_value;