m_li...@yahoo.it wrote:
> testinsert contains t values between '2009-08-01' and '2009-08-09', and ne_id
> from 1 to 2. But only 800 out of 2 ne_id have to be read; there's no
> need for a table scan!
> I guess this is a reflection of the poor "correlation" on ne_id; but, as I
> said, I
Since no one replied to
http://www.mail-archive.com/pgsql-general@postgresql.org/msg133360.html, I
tried another approach:
I can't cluster the whole table every day; it would take too long (as I said,
the table has 60M rows, and I have hundreds of them).
Plus, it wouldn't really make much sense: the
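For reference, a minimal sketch of what that daily re-cluster would look like, assuming the table is clustered on its primary-key index (testinsert_pk, from the definition further down); CLUSTER rewrites the whole table under an exclusive lock, which is why running it every day on hundreds of 60M-row tables isn't realistic:

-- hypothetical daily maintenance, not something that is actually run:
CLUSTER testinsert USING testinsert_pk;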
> But that would be a different query -- there's no restrictions on the
> t values in this one.
There is a restriction on the t values:
select * from idtable left outer join testinsert on id=ne_id
where groupname='a group name' and time between $a_date and $another_date
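As a sketch (with made-up dates standing in for $a_date and $another_date, taken from the range quoted earlier in the thread), the plan choice can be checked with something like:

-- placeholder dates substituted for the $-parameters above
EXPLAIN ANALYZE
select * from idtable left outer join testinsert on id=ne_id
where groupname='a group name'
and time between '2009-08-01' and '2009-08-09';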
> Have you tried som
On Mon, Jul 6, 2009 at 6:32 PM, Scara Maccai wrote:
> The "best" way to read the table would still be a nested loop, but a loop on
> the
> "t" values, not on the ne_id values, since data for the same timestamp is
> "close".
But that would be a different query -- there's no restrictions on the
t values in this one.
I have a problem with the method that PG uses to access my data.
Data is inserted into testinsert every 15 minutes.
ne_id varies from 1 to 2.
CREATE TABLE testinsert
(
  ne_id integer NOT NULL,
  t timestamp without time zone NOT NULL,
  v integer[],
  -- the original message is truncated here; the key columns (ne_id, t) are
  -- assumed from the discussion above
  CONSTRAINT testinsert_pk PRIMARY KEY (ne_id, t)
);
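To make the layout concrete, here is a hypothetical sketch of one 15-minute load as described above: one row per ne_id, all with the same timestamp, which is why rows sharing the same t end up physically close together (the timestamp, the array payload, and the generate_series bounds are placeholders, not the real load job):

-- one made-up 15-minute batch: every ne_id gets a row with the same t
INSERT INTO testinsert (ne_id, t, v)
SELECT g.ne_id, timestamp '2009-08-01 00:15', ARRAY[0]
FROM generate_series(1, 2) AS g(ne_id);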