Oh my, creating an index has absolutely reduced the time it takes to
query from around 700 ms to less than 1 ms!
Thanks so much for all your help. You've saved me!
One question:
Why would I or would I not create multiple indexes in a table? I
created another index in the same table and it's improved performance
even more.
You create indexes when you need indexes. Indexes are most helpful when
they match the WHERE clause of your selects.
So, if you commonly do one query
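As a sketch of that advice (the table and column names here are hypothetical, not from the thread): if a query always filters on one column, an index on exactly that column is what the planner can use.

```sql
-- Hypothetical example: a table queried mostly by customer_id.
CREATE TABLE orders (id serial, customer_id integer, total numeric);

-- Index the column that appears in the WHERE clause.
CREATE INDEX orders_customer_id_idx ON orders (customer_id);

-- The planner can now use an index scan instead of a sequential scan.
EXPLAIN ANALYZE SELECT id FROM orders WHERE customer_id = 42;
```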
I see - thanks very much. I created an index for column 'oid' which I
was using in a WHERE. So, rule of thumb: create an index for the column(s)
I use in WHERE clauses.
Thanks,
Chris
On Wed, 30 Jun 2004 15:30:52 +0800, Christopher Kings-Lynne
[EMAIL PROTECTED] wrote:
I see - thanks very much. I created an index for column 'oid' which I
was using in a WHERE. So, rule of thumb: create an index for the column(s)
I use in WHERE clauses.
So to speak. They can also sometimes assist in sorting. The OID column
is special. I suggest adding a unique index to that
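A sketch of that suggestion (the index name is an assumption, not from the thread):

```sql
-- oid is a hidden system column in 7.4-era tables (unless created
-- WITHOUT OIDS), and its values are not guaranteed unique by the system,
-- so a unique index both speeds lookups and enforces uniqueness.
CREATE UNIQUE INDEX calllogs_oid_idx ON calllogs (oid);
```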
Title: RE: [PERFORM] postgres 7.4 at 100%
Creating indexes on a table affects insert performance depending on the number of indexes that have to be populated. From a query standpoint, indexes are a godsend in most cases.
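One common way to sidestep that insert-time cost during bulk loads (a sketch, not something proposed in the thread; the index name and file path are hypothetical) is to drop secondary indexes first and rebuild them afterwards, so each inserted row does not have to update every index:

```sql
-- Drop the secondary index before a large bulk load...
DROP INDEX calllogs_from_idx;

-- ...load the data (hypothetical server-side file)...
COPY calllogs FROM '/tmp/calllogs.dat';

-- ...then rebuild the index once, which is cheaper than maintaining
-- it row-by-row during the load.
CREATE INDEX calllogs_from_idx ON calllogs ("from");
```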
Duane
-----Original Message-----
From: Chris Cheston [mailto:[EMAIL PROTECTED]]
Wow, this simple query is taking 676.24 ms to execute! It only takes
18 ms on our other machine.
This table has 150,000 rows. Is this normal?
No, the machine is not running software RAID. Anyone have any ideas
as to what I should try next to debug this? I'm really wondering if the
Linux OS
live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';
QUERY PLAN
--
Seq Scan on calllogs (cost=0.00..136.11 rows=24 width=4)
OK, I just vacuumed it and it's taking slightly longer now to execute
(only about 8 ms longer, to around 701 ms).
Not using indexes for calllogs(from)... should I? The values for
calllogs(from) are not unique (sorry if I'm misunderstanding your
point).
Thanks,
Chris
On Tue, 29 Jun 2004
On Tue, Jun 29, 2004 at 01:37:30 -0700,
Chris Cheston [EMAIL PROTECTED] wrote:
OK, I just vacuumed it and it's taking slightly longer now to execute
(only about 8 ms longer, to around 701 ms).
Not using indexes for calllogs(from)... should I? The values for
calllogs(from) are not unique
Chris Cheston [EMAIL PROTECTED] writes:
Wow, this simple query is taking 676.24 ms to execute! It only takes
18 ms on our other machine.
This table has 150,000 rows. Is this normal?
live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';
Is the from field nullable? If not, try create index calllogs_from on
calllogs ( from ); and then do an explain analyze of your query.
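Spelled out as commands, Gavin's suggestion looks like this. One caveat worth noting: "from" is an SQL reserved word, so the column name almost certainly needs double quotes:

```sql
-- Index the column used in the WHERE clause ("from" must be quoted
-- because FROM is a reserved word).
CREATE INDEX calllogs_from ON calllogs ("from");

-- Re-run the query plan to confirm an index scan replaces the seq scan.
EXPLAIN ANALYZE SELECT id FROM calllogs WHERE "from" = 'you';
```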
Gavin
Chris Cheston wrote:
OK, I just vacuumed it and it's taking slightly longer now to execute
(only about 8 ms longer, to around 701 ms).
Not using indexes
Tom,
So while he surely should not go back to 40, it seems there's another
factor involved here that we've not recognized yet.
I'd agree. Actually, the first thing I'd do, were it my machine, is reboot it
and run memtest86 overnight. CPU thrashing like that may indicate bad RAM.
If the
Frank,
Doug said the same, yet the PG Tuning article recommends not making this
too large, as it is just temporarily used by the query queue or so. (I
guess the system would benefit from using more memory for the file system cache.)
As one of the writers of that article, let me point out:
-- Medium size
Frank,
I understand tuning PG is almost an art form, yet it should be based on
actual usage patterns, not just on system dimensions, don't you agree?
Well, it's both. It's more that available RAM determines your *upper* limit;
that is, on Linux, you don't really want to have more than 20%
On Sun, 2004-06-27 at 00:33, Christopher Kings-Lynne wrote:
I upped effective_cache to 16000 KB and I could only up the
shared_buffers to 3000. Anything more and postgres would not start.
You need to greatly increase the shared memory max setting on your
machine so that you can use at the
Frank Knobbe [EMAIL PROTECTED] writes:
On Sun, 2004-06-27 at 00:33, Christopher Kings-Lynne wrote:
I upped effective_cache to 16000 KB and I could only up the
shared_buffers to 3000. Anything more and postgres would not start.
You need to greatly increase the shared memory max setting on your
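On Linux, the limit being referred to is the SysV shared memory ceiling, a kernel setting. A sketch of how it is typically raised (the value here is illustrative only, not a recommendation from the thread):

```shell
# Raise the SysV shared memory maximum so a larger shared_buffers
# can be allocated at postmaster start (run as root).
sysctl -w kernel.shmmax=134217728   # 128 MB, example value

# To persist across reboots, add the equivalent line to /etc/sysctl.conf:
#   kernel.shmmax = 134217728
```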
Chris Cheston [EMAIL PROTECTED] writes:
shared_buffers = 40 # min 16, at least max_connections*2, 8KB each
This is ridiculously low for any kind of production server. Try
something like 5000-10000 for a start.
-Doug
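As a postgresql.conf sketch of Doug's advice (values illustrative; the effective_cache_size figure is taken from the numbers mentioned later in the thread, not Doug's message):

```shell
# postgresql.conf (7.4): both settings are in 8 kB pages,
# so 5000 pages is roughly 40 MB of shared buffers.
shared_buffers = 5000          # requires a large enough kernel shmmax
effective_cache_size = 16000   # planner's estimate of OS cache, ~125 MB
```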
Date: Sat, 26 Jun 2004 07:11:49 -0700
Subject: Re: [PERFORM] postgres 7.4 at 100%
To: Doug McNaught [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Hello,
Not to mention upping your effective_cache.
Doug McNaught wrote:
Chris Cheston [EMAIL PROTECTED] writes:
shared_buffers = 40
Chris Cheston wrote:
Hi all,
I was running Postgres 7.3 and it was running at about 15% with my
application. On Postgres 7.4 on another box, it was running at 100%...
People are going to need more information. Are you talking about
CPU/disk IO/memory?
My settings are default on both boxes I