Tom Lane <[EMAIL PROTECTED]> writes:
>> Thanks for the help, guys! That was my problem. I actually need the
>> backup_location_rid index for a different query so I am going to keep
>> it.
>
> Well, you don't really *need* it; the two-column index on (record_id,
> backup_id) will serve perfectly well for queries on record_id alone.
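A sketch of the advice above (table and index names are from the thread; the example query shape is an assumption): the composite primary key is sorted on record_id first, so it already covers lookups on its leading column, and the single-column index duplicates that capability.

```sql
-- The primary-key btree on (record_id, backup_id) is ordered by
-- record_id first, so predicates on record_id alone can use it;
-- the extra single-column index is redundant.
DROP INDEX backup_location_rid;

-- A query like this (shape assumed for illustration) can still be
-- served by backup_location_pkey:
SELECT backup_id FROM backup_location WHERE record_id = 42;
```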
"Tyrrill, Ed" <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
>> This combination of indexes:
>>
>>> Indexes:
>>> "backup_location_pkey" PRIMARY KEY, btree (record_id, backup_id)
>>> "backup_location_rid" btree (record_id)
>>
>> is really just silly. You should have the pkey and an index on
>> backup_id alone.
On Fri, May 18, 2007 at 02:22:52PM -0700, Tyrrill, Ed wrote:
> Total runtime: 4.951 ms
Going from 1197 seconds to 5 milliseconds. That's some sort of record, I
think :-)
/* Steinar */
--
Homepage: http://www.sesse.net/
Scott Marlowe <[EMAIL PROTECTED]> writes:
> Secondly, it might be more efficient for the planner to choose the
> backup_location_rid index than the combination primary key index.
Oh, I'm an idiot; I didn't notice the way the index was set up. Yeah,
that index pretty well sucks for a query on backup_id alone.
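To sketch the point above (the new index name is hypothetical, not from the thread): a btree on (record_id, backup_id) is ordered by record_id first, so a condition on backup_id alone cannot descend the tree and must walk the entire index. An index led by backup_id avoids that:

```sql
-- With backup_id as the leading (and only) column, a predicate like
-- "WHERE backup_id = 1070" becomes a direct btree lookup instead of
-- a scan over the whole two-column primary-key index.
CREATE INDEX backup_location_bid ON backup_location (backup_id);
ANALYZE backup_location;  -- refresh planner statistics
```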
Tyrrill, Ed wrote:
> mdsdb=# \d backup_location
>       Table "public.backup_location"
>   Column   |  Type   | Modifiers
> -----------+---------+-----------
>  record_id | bigint  | not null
>  backup_id | integer | not null
> Indexes:
>     "backup_location_pkey" PRIMARY KEY, btree (record_id, backup_id)
>     "backup_location_rid" btree (record_id)
"Tyrrill, Ed" <[EMAIL PROTECTED]> writes:
> Index Scan using backup_location_pkey on backup_location
> (cost=0.00..1475268.53 rows=412394 width=8) (actual
> time=3318.057..1196723.915 rows=2752 loops=1)
>    Index Cond: (backup_id = 1070)
> Total runtime: 1196725.617 ms
If we take that at face value...
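For readers reproducing this, the plan above suggests the query had roughly the following shape (an assumption; only the index condition and timings appear in the thread). The before and after runtimes quoted are the ones reported in this thread:

```sql
-- Assumed query shape, reconstructed from the plan's index condition:
EXPLAIN ANALYZE
SELECT record_id FROM backup_location WHERE backup_id = 1070;
-- Reported before (scanning backup_location_pkey):
--   Total runtime: 1196725.617 ms
-- Reported after adding an index led by backup_id:
--   Total runtime: 4.951 ms
```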
Tyrrill, Ed wrote:
I have a two-column table with over 160 million rows in it. As the size
of the table grows, queries on this table get exponentially slower. I am
using version 8.1.5 32-bit on Red Hat Enterprise Linux 3. The hardware
is an Intel 3 GHz Xeon with 4GB RAM, and 6 disks in a RAID 5
configuration. For...
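For completeness, the table described here can be reconstructed from the \d output quoted above (a sketch; column order and constraint name are taken directly from that output):

```sql
CREATE TABLE backup_location (
    record_id bigint  NOT NULL,
    backup_id integer NOT NULL,
    CONSTRAINT backup_location_pkey PRIMARY KEY (record_id, backup_id)
);
-- The redundant single-column index discussed in the thread:
CREATE INDEX backup_location_rid ON backup_location (record_id);
```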