[GENERAL] Postgres slowdown on large table joins

2001-02-16 Thread Dave Edmondson
I'm having a problem here. I'm using Postgres 7.0.3 on a FreeBSD 4.2-RELEASE machine... it's a Pentium II/450 w/ 128MB of RAM (not nearly enough, but there'll be an upgrade soon). Anyway, I have a data table, which currently has around 146,000 entries, though it will grow to a few million

[GENERAL] Re: Postgres slowdown on large table joins

2001-02-19 Thread Dave Edmondson
Ack! I just timed it at 74 seconds. Added two indexes, here's the query plan... it doesn't seem to be using the indexes at all. I'm sure I'm doing something wrong here...
NOTICE: QUERY PLAN:
Sort (cost=6707.62..6707.62 rows=10596 width=170)
  -> Merge Join (cost=1.34..5492.29 rows=10596
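The plan shows a Sort over a Merge Join, with no index scan on data. A sketch of the step just described, creating indexes on the join column and re-checking the plan, might look like this; the post only says "added two indexes", so the index definitions and column names below are guesses, not taken from the message.

    -- Assumed index definitions, one on each side of the join.
    CREATE INDEX data_conf_id_idx   ON data (conf_id);
    CREATE INDEX config_conf_id_idx ON config (conf_id);

    -- Re-examine the plan; in 7.0.x it is printed as a NOTICE.
    EXPLAIN
    SELECT d.data_id, d.reading, c.name
    FROM data d, config c
    WHERE d.conf_id = c.conf_id
      AND d.conf_id = 4;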

Re: [GENERAL] Re: Postgres slowdown on large table joins

2001-02-19 Thread Dave Edmondson
On Mon, Feb 19, 2001 at 12:22:11PM -0500, Tom Lane wrote: Dave Edmondson [EMAIL PROTECTED] writes: Ack! I just timed it at 74 seconds. Added two indexes, here's the query plan... it doesn't seem to be using the indexes at all. I'm sure I'm doing something wrong here... Have you done
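Tom Lane's question is truncated here, but the next message confirms it was whether VACUUM ANALYZE had been run after creating the indexes. That check, plus a quick look at what the planner believes about table sizes, would look roughly like this under 7.0.x (table names follow the assumptions above):

    -- Refresh planner statistics for both tables.
    VACUUM ANALYZE data;
    VACUUM ANALYZE config;

    -- relpages/reltuples show the sizes the planner is costing with.
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname IN ('data', 'config');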

Re: [GENERAL] Re: Postgres slowdown on large table joins

2001-02-19 Thread Dave Edmondson
yes. I ran VACUUM ANALYZE after creating the indices. (Actually, I VACUUM the database twice a day.) The data table literally has 145972 rows, and 145971 will match conf_id 4... Hm. In that case the seqscan on data looks pretty reasonable ... not sure if you can improve on this much,
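With 145,971 of 145,972 rows matching conf_id = 4, the predicate filters out almost nothing: an index scan would still have to visit nearly every heap page, so the sequential scan is the cheaper plan and the planner is right to ignore the index. A quick way to see that distribution (table and column names as assumed above):

    -- Show how many rows fall under each conf_id; a value covering
    -- essentially the whole table makes an index scan on it pointless.
    SELECT conf_id, count(*) AS n_rows
    FROM data
    GROUP BY conf_id
    ORDER BY n_rows DESC;

An index on conf_id only starts to pay off for values that select a small fraction of the table, which is not the case for conf_id 4 here.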

Re: [GENERAL] Re: Postgres slowdown on large table joins

2001-02-20 Thread Dave Edmondson
On Mon, Feb 19, 2001 at 08:34:47PM -0600, Larry Rosenman wrote: * Dave Edmondson [EMAIL PROTECTED] [010219 14:40]: yes. I ran VACUUM ANALYZE after creating the indices. (Actually, I VACUUM the database twice a day.) The data table literally has 145972 rows, and 145971 will match