Clark Slater wrote:
> hmm, i'm baffled. i simplified the query
> and it is still taking forever...
>
>
> test
>  id            | integer
>  partnumber    | character varying(32)
>  productlistid | integer
>  typeid        | integer
>
>
> Indexes:
> "test_p
Clark Slater wrote:
Query should return 132,528 rows.
O.k. then the planner is doing fine it looks like. The problem is you
are pulling 132,528 rows. I would suggest moving to a cursor which will
allow you to fetch in smaller chunks much quicker.
Sincerely,
Joshua D. Drake
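A cursor-based fetch, as suggested above, might look like this (a sketch only; table and column names taken from the thread):

```sql
BEGIN;
DECLARE test_cur CURSOR FOR
    SELECT * FROM test
    WHERE productlistid = 3 AND typeid = 9;
FETCH 1000 FROM test_cur;  -- repeat until it returns no rows
CLOSE test_cur;
COMMIT;
```

Each FETCH returns the next chunk, so the client can start processing long before all 132,528 rows have been transferred.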
Query should return 132,528 rows.
vbp=# set enable_seqscan = false;
SET
vbp=# explain analyze select * from test where (productlistid=3 and typeid=9);
QUERY PLAN
Index Scan using test_typeid on t
Clark Slater wrote:
thanks for your suggestion.
a small improvement. still pretty slow...
vbp=# alter table test alter column productlistid set statistics 150;
ALTER TABLE
vbp=# alter table test alter column typeid set statistics 150;
ALTER TABLE
vbp=# explain analyze select * from test where (productlistid=3 and t
Clark Slater wrote:
hmm, i'm baffled. i simplified the query
and it is still taking forever...
What happens if you:
alter table test alter column productlistid set statistics 150;
alter table test alter column typeid set statistics 150;
explain analyze select * from test where (productlistid=
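One step the commands above leave implicit: a raised statistics target only takes effect after the table is re-analyzed. A sketch of the full sequence (assuming the table from the thread):

```sql
ALTER TABLE test ALTER COLUMN productlistid SET STATISTICS 150;
ALTER TABLE test ALTER COLUMN typeid SET STATISTICS 150;
ANALYZE test;  -- without this, the planner still sees the old statistics
EXPLAIN ANALYZE SELECT * FROM test WHERE productlistid = 3 AND typeid = 9;
```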
hmm, i'm baffled. i simplified the query
and it is still taking forever...
test
 id            | integer
 partnumber    | character varying(32)
 productlistid | integer
 typeid        | integer
Indexes:
"test_productlistid" btree (productlistid)
"test_typei
On Fri, Jun 10, 2005 at 01:45:05PM -0400, Clark Slater wrote:
> Hi-
>
> Would someone please enlighten me as
> to why I'm not seeing a faster execution
> time on the simple scenario below?
Because you need to extract a huge number of rows via a seqscan, sort
them and then throw them away, I think
With your current (apparently well-normalized) schema, I don't see how
you can get a better query plan than that. There may be something you
can do in terms of memory configuration to get it to execute somewhat
faster, but the only way to make it really fast is to de-normalize.
This is something
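Short of denormalizing, one physical-layout option (my assumption, not something proposed in the thread) is to cluster the table on the existing composite index so the matching rows sit on contiguous pages, turning the index scan's random reads into mostly sequential ones:

```sql
-- 8.0-era syntax; rewrites the whole table and takes an exclusive lock
CLUSTER test_plidtypeid ON test;
ANALYZE test;
```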
Clark Slater wrote:
> Hi-
>
> Would someone please enlighten me as
> to why I'm not seeing a faster execution
> time on the simple scenario below?
>
> there are 412,485 rows in the table and the
> query matches on 132,528 rows, taking
> almost a minute to execute. vacuum
> analyze was just run.
[Clark Slater - Fri at 01:45:05PM -0400]
> Would someone please enlighten me as
> to why I'm not seeing a faster execution
> time on the simple scenario below?
Just some thoughts from a novice PG-DBA .. :-)
My general experience is that PG usually prefers sequential scans to indices if
a large portio
Hi-
Would someone please enlighten me as
to why I'm not seeing a faster execution
time on the simple scenario below?
there are 412,485 rows in the table and the
query matches on 132,528 rows, taking
almost a minute to execute. vacuum
analyze was just run.
Thanks!
Clark
test
---
On Fri, Jun 10, 2005 at 01:45:05PM -0400, Clark Slater wrote:
> Indexes:
> "test_id" btree (id)
> "test_plid" btree (productlistid)
> "test_typeid" btree (typeid)
> "test_plidtypeid" btree (productlistid, typeid)
>
>
> explain analyze select * from test where productlistid=3 and typeid=9
> order
Hi,
At 18:10 10/06/2005, [EMAIL PROTECTED] wrote:
tle-bu=> EXPLAIN ANALYZE SELECT file_type, file_parent_dir, file_name FROM
file_info_7;
What could the index be used for? Unless you have some WHERE or (in some
cases) ORDER BY clause, there's absolutely no need for an index, since you
are ju
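To illustrate the point (a sketch; using `file_type` as the filter column is my assumption):

```sql
-- No WHERE or ORDER BY: every row is needed, so a sequential scan is optimal
EXPLAIN SELECT file_type, file_parent_dir, file_name FROM file_info_7;

-- An index only pays off once the query is selective, e.g.:
EXPLAIN SELECT file_type, file_parent_dir, file_name
FROM file_info_7
WHERE file_type = 'd';
```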
[EMAIL PROTECTED] - Fri at 12:10:19PM -0400]
> tle-bu=> EXPLAIN ANALYZE SELECT file_type, file_parent_dir, file_name FROM
> file_info_7;
> QUERY PLAN
> ---
Hi all,
I have an index on a table that doesn't seem to want to be used. I'm
hoping someone might be able to help point me in the right direction.
My index is (typed, not copied):
tle-bu=> \d file_info_7_display_idx;
Index "public.file_info_7_display_idx"
Column | Type
-
Richard,
thanks for info.
"...the RH supplied Postgres binary has issues..."
Would you have the time to provide a bit more info?
Version of PG? Nature of issues? Methods that resolved?
Thanks again,
-- Ross
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Beh
I will second the nod to Penguin computing. We have a bit of Penguin
hardware here (though the majority is Dell). We did have issues with
one machine a couple of years ago, but Penguin was very pro-active in
addressing that.
We recently picked up a Dual Opteron system from them and have been ver
On Thu, Jun 09, 2005 at 18:26:09 -0700,
Junaili Lie <[EMAIL PROTECTED]> wrote:
> Hi Bruno,
> I followed your suggestion.
> The query plan shows that it uses the index (id, person_id). However,
> the execution time is still slow. I have to do Ctrl-C to stop it.
> Maybe something is wrong with my po
Hmmm. In my configuration there are not much more performance:
The Dump-size is 6-7GB on a PIV-3Ghz, 2GB-RAM, 4x10k disks on raid 10
for the db and 2x10k disks raid 1 for the system and the wal-logs.
open_sync:
real 79m1.980s
user 25m25.285s
sys  1m20.112s
fsync:
real 75m23.792s
user
Michal Taborsky wrote:
I managed, by extensive usage of temporary tables, to totally bloat
pg_attribute. It currently has about 4 pages with just 3000 tuples.
The only thing I could think of is VACUUM FULL, but from my former
experience I guess it'll take maybe over an hour, effectively r
I managed, by extensive usage of temporary tables, to totally bloat
pg_attribute. It currently has about 4 pages with just 3000 tuples.
The question is, how to restore it to its former beauty? With an ordinary
table I'd just CLUSTER it, but alas! I cannot do that with a system
catalog. I always
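For reference, the usual recovery commands for a bloated catalog in this era of PostgreSQL would be (a sketch only; VACUUM FULL takes an exclusive lock, and old-style VACUUM FULL can itself bloat indexes, hence the REINDEX):

```sql
VACUUM FULL pg_attribute;
REINDEX TABLE pg_attribute;
```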
[Junaili Lie - Thu at 06:26:09PM -0700]
> Hi Bruno,
> I followed your suggestion.
> The query plan shows that it uses the index (id, person_id). However,
> the execution time is still slow. I have to do Ctrl-C to stop it.
What is the estimate planner cost?
> Maybe something is wrong with my postgr
Yann Michel wrote:
Hi,
On Thu, Jun 09, 2005 at 02:11:22PM +0100, Richard Huxton wrote:
To my question: I found the parameter "stats_reset_on_server_start"
which is set to true by default. Why did you choose this (and not false)
and what are the impacts of changing it to false? I mean, as long
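The setting in question lives in postgresql.conf; setting it to false simply preserves the collected statistics counters across server restarts (a sketch; this is an 8.0-era parameter):

```
# postgresql.conf
stats_reset_on_server_start = false  # keep pg_stat_* counters across restarts
```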