Re: [PERFORM] pl/pgsql faster than raw SQL?

2005-03-09 Thread Gaetano Mendola
John A Meinel wrote: > That sounds more like you had bad statistics on the field1 column, which > caused postgres to switch from a seqscan to an index scan, only there > were so many rows with field1='New' that it actually would have been > faster wit
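The fix John is pointing at is to refresh (and, if needed, widen) the planner statistics for that column and re-check the plan. A minimal sketch, assuming a table named t; only field1 and the value 'New' come from the thread:

    ALTER TABLE t ALTER COLUMN field1 SET STATISTICS 100;  -- keep more detail on skewed values
    ANALYZE t;                                             -- refresh the stats the planner uses
    EXPLAIN ANALYZE SELECT * FROM t WHERE field1 = 'New';  -- confirm which plan is chosen now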

Re: [PERFORM] vacuum full, why multiple times ?

2005-03-09 Thread Gaetano Mendola
Michael Fuhr wrote: > On Wed, Mar 09, 2005 at 02:02:13AM +0100, Gaetano Mendola wrote: > > >>Basically I'm noticing that a simple vacuum full is not enough to >>completely shrink the table: >> >># vacuum full verbose url; >>INFO: vacuuming "public.url" >>INFO: "url": found 268392 removable, 21
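A way to watch what each pass actually reclaims, sketched with the url table from the post: run the VACUUM FULL, then read the page count the planner sees from pg_class.

    VACUUM FULL VERBOSE url;
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname = 'url';   -- relpages should drop between passes if space is really returned

If relpages barely moves after repeated passes, something else is pinning the tail pages; the rest of the thread explains what.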

Re: [PERFORM] Help trying to tune query that executes 40x slower than in SqlServer

2005-03-09 Thread Hugo Ferreira
Hi, Well, I think the problem is far more complex than just join reordering... I've restructured the query so that it won't use any explicit joins. Instead it now has a series of 'in (select ...)' and 'not exists (select ...)'. This actually got faster... sometimes!!! select 1, 1168, C.contx
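The rewrite Hugo describes looks roughly like the following, with made-up table names (orders, customers, blacklist); only the IN (SELECT ...) / NOT EXISTS (SELECT ...) shape comes from the post.

    -- explicit-join form
    SELECT o.id
    FROM orders o
    JOIN customers c ON c.id = o.customer_id AND c.region = 'EU'
    LEFT JOIN blacklist b ON b.order_id = o.id
    WHERE b.order_id IS NULL;

    -- the same intent restated with subselects
    SELECT o.id
    FROM orders o
    WHERE o.customer_id IN (SELECT id FROM customers WHERE region = 'EU')
      AND NOT EXISTS (SELECT 1 FROM blacklist WHERE order_id = o.id);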

Re: [PERFORM] vacuum full, why multiple times ?

2005-03-09 Thread Richard Huxton
Gaetano Mendola wrote: # vacuum full verbose url; INFO: vacuuming "public.url" INFO: "url": found 74 removable, 21266 nonremovable row versions in 8550 pages DETAIL: 2 dead row versions cannot be removed yet. [SNIP] INFO: "url": moved 11 row versions, truncated 8550 to 8504 pages and in the nex

Re: [PERFORM] vacuum full, why multiple times ?

2005-03-09 Thread Gaetano Mendola
Richard Huxton wrote: > Gaetano Mendola wrote: > >> >> # vacuum full verbose url; >> INFO: vacuuming "public.url" >> INFO: "url": found 74 removable, 21266 nonremovable row versions in >> 8550 pages >> DETAIL: 2 dead row versions cannot be removed

Re: [PERFORM] vacuum full, why multiple times ?

2005-03-09 Thread Tom Lane
Gaetano Mendola <[EMAIL PROTECTED]> writes: > Richard Huxton wrote: >> If page number 8549 was the one being held, I don't think vacuum can >> truncate the file. The empty space can be re-used, but the rows can't be >> moved to a lower page while a transaction is using them. > It's clear now. Not
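Tom's point is that an open transaction can keep late pages pinned, so VACUUM FULL cannot truncate the file past them. A sketch for spotting such sessions; the column names are from current pg_stat_activity and differ on the 8.0-era servers discussed here (e.g. procpid instead of pid):

    SELECT pid, usename, xact_start, state, query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start;     -- the oldest open transaction is the usual suspect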

Re: [PERFORM] vacuum full, why multiple times ?

2005-03-09 Thread Gaetano Mendola
Tom Lane wrote: > Gaetano Mendola <[EMAIL PROTECTED]> writes: > >>Richard Huxton wrote: >> >>>If page number 8549 was the one being held, I don't think vacuum can >>>truncate the file. The empty space can be re-used, but the rows can't be >>>moved to a lower page while a transaction is using them.

Re: [PERFORM] index scan on =, but not < ?

2005-03-09 Thread Jim C. Nasby
On Tue, Mar 08, 2005 at 11:20:20PM -0600, Bruno Wolff III wrote: > On Tue, Mar 08, 2005 at 22:55:19 -0600, > "Jim C. Nasby" <[EMAIL PROTECTED]> wrote: > > On Tue, Mar 08, 2005 at 10:38:21PM -0600, Bruno Wolff III wrote: > > > Not exactly. If the number of rows to be examined is on the order of 5%
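A quick way to see the behaviour being argued about, with an assumed events table and an index on id: compare the plan for an equality predicate against a range predicate, and check what the planner thinks the index scan costs once the seqscan is disabled for testing.

    EXPLAIN ANALYZE SELECT * FROM events WHERE id = 42;
    EXPLAIN ANALYZE SELECT * FROM events WHERE id < 42;

    SET enable_seqscan = off;    -- for comparison only, not for production
    EXPLAIN ANALYZE SELECT * FROM events WHERE id < 42;
    RESET enable_seqscan;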

[PERFORM] How to speed up tsearch2 indexing

2005-03-09 Thread Werner Bohl
All, I have a table with ~ 3 million records. I'm indexing a field holding names, no more than 200 bytes each. Indexing the resulting tsvector takes forever. It's been running now for more than 40 hours on a Linux box with PG 8.0.1, a single Xeon & 4GB RAM. My work_mem postgresql.conf parameter is at
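For what it's worth, CREATE INDEX is governed by maintenance_work_mem rather than work_mem, so raising it for the building session is the usual first step. A sketch with assumed table and column names (names, fts_vector):

    SET maintenance_work_mem = 524288;   -- kilobytes on 8.0-era servers; newer releases also accept '512MB'
    CREATE INDEX names_fts_idx ON names USING gist (fts_vector);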

Re: [PERFORM] How to speed up tsearch2 indexing

2005-03-09 Thread Oleg Bartunov
On Wed, 9 Mar 2005, Werner Bohl wrote: All, I have a table with ~ 3 million records. I'm indexing a field holding names, no more than 200 bytes each. Indexing the resulting tsvector takes forever. It's been running now for more than 40 hours on a Linux box with PG 8.0.1, a single Xeon & 4GB RAM. My wor

Re: [PERFORM] How to speed up tsearch2 indexing

2005-03-09 Thread Werner Bohl
On Wed, 2005-03-09 at 20:41 +0300, Oleg Bartunov wrote: > What's your tsearch2 configuration ? Do you use dictionaries ? > I wrote a brief explanation of tsearch2 internals > http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals > Tsearch2 is using default english configuration. No

Re: [PERFORM] How to speed up tsearch2 indexing

2005-03-09 Thread Oleg Bartunov
On Wed, 9 Mar 2005, Werner Bohl wrote: On Wed, 2005-03-09 at 20:41 +0300, Oleg Bartunov wrote: What's your tsearch2 configuration ? Do you use dictionaries ? I wrote a brief explanation of tsearch2 internals http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals Tsearch2 is using defa

Re: [PERFORM] Help trying to tune query that executes 40x slower than in SqlServer

2005-03-09 Thread Russell Smith
On Wed, 9 Mar 2005 11:08 pm, Hugo Ferreira wrote: > For example... I run the query, it takes 122 seconds. Then I delete the > target tables, vacuum the database, re-run it again: 9s. But if I run > vacuum several times, and then run, it takes again 122 seconds. If I > stop this 122-second query, say,

Re: [PERFORM] index scan on =, but not < ?

2005-03-09 Thread David Brown
Assuming your system isn't starved for memory, shouldn't repeated page fetches be hitting the cache? I've also wondered about the conventional wisdom that read ahead doesn't help random reads. I may well be missing something, but *if* the OS has enough memory to cache most of the table, surely

Re: [PERFORM] index scan on =, but not < ?

2005-03-09 Thread David Brown
Jim C. Nasby wrote: Ahh, I was thinking of a high correlation factor on the index. I still question 5% though... that seems awfully low. Not really. It all depends on how many records you're packing into each page. 1% may well be the threshold for small records. Tom mentioned this in the last
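The rows-per-page figure David is talking about can be estimated from the planner's own statistics; a sketch with an assumed table name:

    SELECT relname,
           reltuples,
           relpages,
           round(reltuples / NULLIF(relpages, 0)) AS rows_per_page   -- 8 kB pages by default
    FROM pg_class
    WHERE relname = 'mytable';

The more rows fit on a page, the cheaper a sequential scan looks relative to random index fetches, which is what moves that break-even percentage around.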

Re: [PERFORM] Help trying to tune query that executes 40x slower

2005-03-09 Thread Jim Johannsen
Hugo, I think your problem is with the MRS_TRANSACTION TRANS table. It is not joining anything when declared, but later it is joining thru a LEFT JOIN of the REPL_DATA_OWNER_RSDNC table. In fact I'm not sure that this table is really needed. I would suggest rewriting your FROM clause. It

[PERFORM] Multi-line requests in COPY ... FROM STDIN

2005-03-09 Thread Mischa
I'm using a Perl app against PostgreSQL 7.4.6 that bulk-loads a table by executing a "COPY TMP_Message FROM STDIN", then letting $dbh->func($message_text."\n", "putline") send each row. Speculation made me try concatenating several \n-terminated lines together, and making a single putline() call with that. Lo and behold, all the li
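The reason the batching works is that the data stream COPY consumes is just newline-terminated, tab-separated rows ended by a \. line, so several rows passed in one putline() call look the same to the server as several calls. A psql sketch with an assumed table:

    CREATE TEMP TABLE tmp_message (id integer, body text);
    -- columns in the data lines below are tab-separated, one row per line
    COPY tmp_message FROM STDIN;
    1	first message
    2	second message
    3	third message
    \.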

[PERFORM] What's better: RAID 0 or a separate disk for pg_xlog

2005-03-09 Thread Karim Nassar
From http://www.powerpostgresql.com/PerfList/ : "even in a two-disk server, you can put the transaction log onto the operating system disk and reap some benefits." Context: I have a two-disk server that is about to become dedicated to PostgreSQL (it's a Sun v40z running Gentoo Linux). What's "