"Scott Marlowe" <[EMAIL PROTECTED]> writes:
> >-> Seq Scan on rules
> > (cost=0.00..22296.32 rows=11294 width=12)
> > (actual time=540.149..2047.308 rows=1 loops=1)
> Simple, the planner is choosing a sequential scan when it should be
> choosing an index scan. This is u
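A quick way to check that claim is to compare the two plans side by side. A minimal sketch, assuming the rules table from the plan above; the column in the WHERE clause is only a stand-in for the real predicate, which isn't shown in full here:

  -- Disable sequential scans for this session only and compare the actual
  -- times of the two plans; "some_indexed_column = 42" is a hypothetical
  -- stand-in for the real predicate on the rules table.
  SET enable_seqscan = off;
  EXPLAIN ANALYZE SELECT * FROM rules WHERE some_indexed_column = 42;
  SET enable_seqscan = on;   -- restore the default afterwards

If the index-scan plan's actual time is much lower, the estimates (or cost settings) are what need fixing, not the query.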
On Mon, 2004-06-07 at 15:45, Dan Langille wrote:
> A production system has had a query recently degrade in performance.
> What once took < 1s now takes over 1s. I have tracked down the
> problem to a working example.
What changes have you made to postgresql.conf?
Could you send explain analyze output for the query?
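For reference, a minimal way to capture that output in psql (the query below is only a placeholder for the real one):

  -- Send the plan to a file so it can be pasted into a reply unmangled.
  \o plan.txt
  EXPLAIN ANALYZE SELECT * FROM rules WHERE id = 1;  -- substitute the real query
  \o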
On Mon, 2004-06-07 at 16:12, Dan Langille wrote:
> On 7 Jun 2004 at 16:00, Rod Taylor wrote:
>
> > On Mon, 2004-06-07 at 15:45, Dan Langille wrote:
> > > A production system has had a query recently degrade in performance.
> > > What once took < 1s now takes over 1s. I have tracked down the
>
I'm having a performance issue that I just can't resolve, and it's very,
very curious. Thought someone here might be able to shed some light on
the subject.
I'm using Postgres 7.4.2 on Red Hat 9. I have a table with 763,809 rows
in it defined as follows ...
ksedb=# \d nrgfeature
T
It seems to believe that the number of rows returned for the >49.999
case will be 4 times the number for the >50 case. If that was true, then
the sequential scan would be correct.
ALTER TABLE nrgfeature ALTER COLUMN diagonalsize SET STATISTICS 1000;
ANALYZE nrgfeature;
Send back EXPLAIN ANALYZE output for the >49.999 case.
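As a sanity check that the per-column target actually took effect after the ANALYZE, a short sketch against the system catalogs (table and column names as used earlier in the thread):

  -- attstattarget holds the per-column statistics target set by ALTER TABLE.
  SELECT attname, attstattarget
  FROM pg_attribute
  WHERE attrelid = 'nrgfeature'::regclass
    AND attname = 'diagonalsize';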
Thanks Rod,
This setting has no effect however. If I set statistics to 1000, or
even 0, (and then reanalyze the table) I see no change in the behaviour
of the query plans, i.e. there is still the odd transition in the plans
at diagonalSize = 50.
Ken
On Wed, 2004-06-09 at 13:12, Rod Taylor wrote:
On Wed, 2004-06-09 at 16:50, ken wrote:
> Thanks Rod,
>
> This setting has no effect however. If I set statistics to 1000, or
Okay.. but you never did send EXPLAIN ANALYZE output. I want to know
what it is really finding.
> On Wed, 2004-06-09 at 13:12, Rod Taylor wrote:
> > It seems to believe
On Wed, 2004-06-09 at 13:56, Rod Taylor wrote:
> On Wed, 2004-06-09 at 16:50, ken wrote:
> > Thanks Rod,
> >
> > This setting has no effect however. If I set statistics to 1000, or
>
> Okay.. but you never did send EXPLAIN ANALYZE output. I want to know
> what it is really finding.
Ah, sorry, m
> ... and here is the plan with statistics set to 1000 ...
>
> Seq Scan on nrgfeature f (cost=0.00..31675.57 rows=18608 width=218)
> (actual time=63.544..1002.701 rows=225 loops=1)
>Filter: ((upperrightx > 321264.236977215::double precision) AND
> (lowerleftx < 324046.799812083::double preci
I had already tried setting the statistics to 1000 for all five of these
double precision fields with effectively no improvement. Should have
mentioned that.
Also, the BETWEEN makes all values for diagonalSize bad since it is
effectively doing diagonalSize > X AND diagonalSize < Y. If I do a query
On Tue, 24 Feb 2004, Ivan Voras wrote:
> -> Nested Loop (cost=1.04..788.76 rows=3 width=500) (actual
> time=4078.85..20185.89 rows=38999 loops=1)
> -> Nested Loop (cost=1.04..771.27 rows=3 width=485)
> (actual time=4078.71..14673.27 rows=38999 loops=1)
>
If I take away the diagonalSize condition in my query I find that there
are 225 rows that satisfy the other conditions. 155 of these have a
diagonalSize value of exactly 50.000, while the remaining 70 rows all
have values larger than 50. Thus there is a big discrete jump in the
number of rows at diagonalSize = 50.
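A spike like that is exactly what ANALYZE is supposed to record as a most-common value, and whether it did can be checked directly. A sketch using the standard pg_stats view (table and column names from the thread):

  -- If 50 shows up in most_common_vals with a realistic frequency, the
  -- planner can tell "> 49.999" (which includes the 155 rows at exactly 50)
  -- apart from "> 50" (which excludes them); if it doesn't, a higher
  -- statistics target plus a fresh ANALYZE is the usual next step.
  SELECT n_distinct, most_common_vals, most_common_freqs, histogram_bounds
  FROM pg_stats
  WHERE tablename = 'nrgfeature'
    AND attname = 'diagonalsize';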
On Wed, 2004-06-09 at 21:45, Christopher Kings-Lynne wrote:
> > If I take away the diagonalSize condition in my query I find that there
> > are 225 rows that satisfy the other conditions. 155 of these have a
> Maybe you should drop your random_page_cost to something less than 4,
> eg. 3 or even
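That can be tried per session without touching postgresql.conf. A sketch, with the predicate simplified down from the full query in the thread:

  -- Lower random_page_cost for this session only and see whether the
  -- planner switches to the index scan; 3 is just the value floated above.
  SET random_page_cost = 3;
  EXPLAIN ANALYZE
    SELECT * FROM nrgfeature f
    WHERE f.diagonalsize > 49.999;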
For the disks part - I am looking at a SAN implementation, and I will be
planning a separate RAID group for the WALs.
The controller is a PERC, with 128MB cache, and I think it is writeback.
Other than the disks, I am curious what other people are using in terms
of the horsepower needed. The Quad
Rod Taylor wrote:
The big problem is a very poor estimate (off by a couple orders of
magnitude). I was hoping someone with more knowledge in fixing those
would jump in.
ANALYZE might be producing poor stats due to:
i) many dead tuples or
ii) high proportion of dead tuples in the first few pages of the table
Does a VACUUM FULL followed by ANALYZE change the estimates (or have
you tried this already)?
(p.s. - I probably don't qualify for the 'more knowledge' bi
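A sketch of that sequence, using the table name from the thread (note that VACUUM FULL takes an exclusive lock for its duration):

  -- Reclaim the space held by dead tuples, then rebuild the statistics,
  -- then re-run the EXPLAIN ANALYZE to see whether the estimate moves.
  VACUUM FULL nrgfeature;
  ANALYZE nrgfeature;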
ken <[EMAIL PROTECTED]> writes:
> ... and here is the plan with statistics set to 1000 ...
> Seq Scan on nrgfeature f (cost=0.00..31675.57 rows=18608 width=218)
> (actual time=63.544..1002.701 rows=225 loops=1)
>Filter: ((upperrightx > 321264.236977215::double precision) AND
> (lowerleftx <
Hi all,
If you have a DB of 'only' 13 GB and you do not expect it to grow much, it
might be advisable to have enough memory (RAM) to hold the entire DB in
shared memory (everything is cached). If you have a server with, say, 24 GB of
memory and can allocate 20 GB for cache, you don't care about t
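A hedged sketch of what that might look like in postgresql.conf on a 7.4-era server; the numbers are illustrative only. On 7.4 both settings are counted in 8 kB pages, shared_buffers is usually kept to a modest fraction of RAM, and effective_cache_size just tells the planner how much the OS filesystem cache is expected to hold:

  # Illustrative values for a machine with ~24 GB of RAM and a ~13 GB database
  shared_buffers = 131072          # 8 kB pages, roughly 1 GB
  effective_cache_size = 2097152   # 8 kB pages, roughly 16 GB of OS cache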
Vivek,
> I did a bunch of testing with different RAID levels on a 14 disk
> array. I finally settled on this: RAID5 across 14 disks for the
> data, the OS (including syslog directory) and WAL on a RAID1 pair on
> the other channel of the same controller (I didn't want to spring for
> dual RAID c
Can you describe the vendors/components of a "cheap SAN setup?"
Thanks,
Anjan