Hi All,
Not sure if this is the correct fix or not, but a bit of research:
http://archives.postgresql.org/pgsql-hackers/2001-04/msg01129.php
And the official docs from Postgres:
http://www.postgresql.org/docs/7.4/static/wal-configuration.html
Led me to try:
wal_sync_method = open_sync
And this has
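For reference, the relevant postgresql.conf lines (7.4-era) might look like the sketch below. Which method wins is platform- and hardware-dependent, so treat this as something to benchmark, not a recommendation:

```
# postgresql.conf -- WAL sync method (pick one and benchmark)
# fsync         = call fsync() after each WAL flush (common default)
# fdatasync     = call fdatasync() after each WAL flush
# open_sync     = open WAL files with O_SYNC
# open_datasync = open WAL files with O_DSYNC
wal_sync_method = open_sync
```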
Hi,
I'm trying this too :). My dump (IN) takes about 84 minutes. Now
I'm testing how much time it takes with open_sync :). I'm
anxious to see the new results :).
best regards,
pingufreak
On Friday, 10.06.2005 at 15:33 +0930, Steve Pollard wrote:
Hi All,
Not sure if this is correct fix or
Yann Michel wrote:
Hi,
On Thu, Jun 09, 2005 at 02:11:22PM +0100, Richard Huxton wrote:
To my question: I found the parameter stats_reset_on_server_start,
which is set to true by default. Why did you choose this (and not false),
and what are the impacts of changing it to false? I mean, as long
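As a sketch of what the parameter affects (assuming the statistics views of that era): with stats_reset_on_server_start = true, the counters in the pg_stat_* views start from zero after every restart, so long-term activity numbers are lost across restarts:

```sql
-- Counters such as these are zeroed at startup when
-- stats_reset_on_server_start = true:
SELECT relname, seq_scan, idx_scan, n_tup_ins
FROM pg_stat_user_tables
ORDER BY seq_scan DESC;

-- With the parameter set to false, the counters survive restarts;
-- they can still be cleared by hand when wanted:
SELECT pg_stat_reset();
```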
[Junaili Lie - Thu at 06:26:09PM -0700]
Hi Bruno,
I followed your suggestion.
The query plan shows that it uses the index (id, person_id). However,
the execution time is still slow. I have to do ctl-C to stop it.
What is the estimated planner cost?
Maybe something is wrong with my postgresql
I managed, by extensive usage of temporary tables, to totally bloat
pg_attribute. It currently has about 4 pages with just 3000 tuples.
The question is, how do I restore it to its former beauty? With an ordinary
table I'd just CLUSTER it, but alas! I cannot do that with a system
catalog. I
Michal Taborsky wrote:
I managed, by extensive usage of temporary tables, to totally bloat
pg_attribute. It currently has about 4 pages with just 3000 tuples.
The only thing I could think of is VACUUM FULL, but from my former
experience I guess it'll take maybe over an hour, effectively
Hmmm. In my configuration there is not much more performance:
The Dump-size is 6-7GB on a PIV-3Ghz, 2GB-RAM, 4x10k disks on raid 10
for the db and 2x10k disks raid 1 for the system and the wal-logs.
open_sync:
real    79m1.980s
user    25m25.285s
sys      1m20.112s
fsync:
real    75m23.792s
I will second the nod to Penguin computing. We have a bit of Penguin
hardware here (though the majority is Dell). We did have issues with
one machine a couple of years ago, but Penguin was very pro-active in
addressing that.
We recently picked up a Dual Opteron system from them and have been
Richard,
thanks for info.
...the RH supplied Postgres binary has issues...
Would you have the time to provide a bit more info?
Version of PG? Nature of issues? Methods that resolved?
Thanks again,
-- Ross
Hi all,
I have an index on a table that doesn't seem to want to be used. I'm
hoping someone might be able to help point me in the right direction.
My index is (typed, not copied):
tle-bu= \d file_info_7_display_idx;
Index public.file_info_7_display_idx
Column | Type
[EMAIL PROTECTED] - Fri at 12:10:19PM -0400]
tle-bu= EXPLAIN ANALYZE SELECT file_type, file_parent_dir, file_name FROM
file_info_7;
QUERY PLAN
Hi,
At 18:10 10/06/2005, [EMAIL PROTECTED] wrote:
tle-bu= EXPLAIN ANALYZE SELECT file_type, file_parent_dir, file_name FROM
file_info_7;
What could the index be used for? Unless you have some WHERE or (in some
cases) ORDER BY clause, there's absolutely no need for an index, since you
are
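A quick illustration of the point (table and column names as in the thread; the WHERE value is made up): a query that reads every row cannot benefit from any index, while a selective predicate on an indexed column can:

```sql
-- Reads every row: a sequential scan is the cheapest plan,
-- and no index can help.
EXPLAIN ANALYZE
SELECT file_type, file_parent_dir, file_name FROM file_info_7;

-- A selective WHERE clause on an indexed column is where
-- an index scan pays off.
EXPLAIN ANALYZE
SELECT file_type, file_parent_dir, file_name
FROM file_info_7
WHERE file_parent_dir = '/etc';
```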
On Fri, Jun 10, 2005 at 01:45:05PM -0400, Clark Slater wrote:
Indexes:
test_id btree (id)
test_plid btree (productlistid)
test_typeid btree (typeid)
test_plidtypeid btree (productlistid, typeid)
explain analyze select * from test where productlistid=3 and typeid=9
order by partnumber
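One thing worth trying for this query shape (a sketch, not tested against Clark's data; the index name is hypothetical): extend the composite index so it also covers the ORDER BY column. With the equality columns first and the sort column last, the planner can return rows already in sorted order and skip the separate sort step:

```sql
-- Hypothetical three-column index: equality columns first,
-- then the sort column.
CREATE INDEX test_plid_typeid_part
    ON test (productlistid, typeid, partnumber);

EXPLAIN ANALYZE
SELECT * FROM test
WHERE productlistid = 3 AND typeid = 9
ORDER BY partnumber;
```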
Hi-
Would someone please enlighten me as
to why I'm not seeing a faster execution
time on the simple scenario below?
there are 412,485 rows in the table and the
query matches 132,528 rows, taking
almost a minute to execute. VACUUM
ANALYZE was just run.
Thanks!
Clark
test
[Clark Slater - Fri at 01:45:05PM -0400]
Would someone please enlighten me as
to why I'm not seeing a faster execution
time on the simple scenario below?
Just some thoughts from a novice PG-DBA .. :-)
My general experience is that PG usually prefers sequential scans to indexes if
a large portion
Clark Slater wrote:
Hi-
Would someone please enlighten me as
to why I'm not seeing a faster execution
time on the simple scenario below?
there are 412,485 rows in the table and the
query matches 132,528 rows, taking
almost a minute to execute. VACUUM
ANALYZE was just run.
Well,
With your current (apparently well-normalized) schema, I don't see how
you can get a better query plan than that. There may be something you
can do in terms of memory configuration to get it to execute somewhat
faster, but the only way to make it really fast is to de-normalize.
This is something
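De-normalizing here might mean precomputing the hot filter into its own narrow table (purely a sketch with hypothetical names; 7.4 has no materialized views, so such a table would have to be refreshed by hand or by trigger):

```sql
-- Hypothetical precomputed table for the hot query path.
CREATE TABLE test_hot AS
    SELECT id, partnumber
    FROM test
    WHERE productlistid = 3 AND typeid = 9;

-- Index the sort column so ORDER BY partnumber is cheap.
CREATE INDEX test_hot_part ON test_hot (partnumber);
```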
On Fri, Jun 10, 2005 at 01:45:05PM -0400, Clark Slater wrote:
Hi-
Would someone please enlighten me as
to why I'm not seeing a faster execution
time on the simple scenario below?
Because you need to extract a huge number of rows via a seqscan, sort
them and then throw them away, I think.
hmm, i'm baffled. i simplified the query
and it is still taking forever...
test
-
id| integer
partnumber| character varying(32)
productlistid | integer
typeid| integer
Indexes:
test_productlistid btree (productlistid)
test_typeid
Clark Slater wrote:
hmm, i'm baffled. i simplified the query
and it is still taking forever...
What happens if you:
alter table test alter column productlistid set statistics 150;
alter table test alter column typeid set statistics 150;
explain analyze select * from test where
thanks for your suggestion.
a small improvement. still pretty slow...
vbp=# alter table test alter column productlistid set statistics 150;
ALTER TABLE
vbp=# alter table test alter column typeid set statistics 150;
ALTER TABLE
vbp=# explain analyze select * from test where (productlistid=3 and
Clark Slater wrote:
thanks for your suggestion.
a small improvement. still pretty slow...
vbp=# alter table test alter column productlistid set statistics 150;
ALTER TABLE
vbp=# alter table test alter column typeid set statistics 150;
ALTER TABLE
vbp=# explain analyze select * from test where
Query should return 132,528 rows.
vbp=# set enable_seqscan = false;
SET
vbp=# explain analyze select * from test where (productlistid=3 and typeid=9);
QUERY PLAN
Index Scan using test_typeid on
Clark Slater wrote:
Query should return 132,528 rows.
O.k., then it looks like the planner is doing fine. The problem is that you
are pulling 132,528 rows. I would suggest moving to a cursor, which will
allow you to fetch in smaller chunks much more quickly.
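The cursor approach Joshua describes could look like the following sketch (cursor name and fetch size are arbitrary):

```sql
BEGIN;

DECLARE test_cur CURSOR FOR
    SELECT * FROM test
    WHERE productlistid = 3 AND typeid = 9
    ORDER BY partnumber;

-- Pull the result in manageable chunks instead of all
-- 132,528 rows at once.
FETCH 1000 FROM test_cur;
FETCH 1000 FROM test_cur;
-- ...repeat until FETCH returns no rows...

CLOSE test_cur;
COMMIT;
```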
Sincerely,
Joshua D. Drake
vbp=# set