Re: [SQL] List Concatenation
Josh Berkus wrote:
> I have an interesting problem. For the purpose of presentation to users,
> I'd like to concatenate a list of VARCHAR values from a subtable. To
> simplify my actual situation:
>
> What I'd like to be able to do is present a list of clients and their
> comma-separated contacts in paragraph form, hence:
>
> Client             Contacts
> McMurphy Assoc.    Jim Henson, Pat O'Gannet, Kelley Moore
> Ross Construction  Sara Vaugn, Bill Murray, Peter Frump, Siskel Ebert

Well, basically you can use a standard join, order it, and eliminate
duplicate client names in the application. That's the "proper" way.

But if you don't care about the order of contacts, you can define an
aggregate function:

create aggregate catenate(sfunc1=textcat, basetype=text, stype1=text, initcond1='');

Then group by client and catenate(firstname || ' ' || lastname).

You'll want to read the CREATE AGGREGATE page in the reference manual,
replace textcat with your own routine that adds a comma, and you'll need
a finalisation routine to strip the final trailing comma.

Note that this is probably not a good idea - the ordering of the
contacts will not be well-defined. When I asked about this, Tom Lane was
quite surprised that it worked, so no guarantees about long-term
suitability.

- Richard Huxton

---(end of broadcast)---
TIP 6: Have you searched our list archives?
http://www.postgresql.org/search.mpl
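[Editor's sketch] Richard's recipe above, filled in with a hypothetical comma-appending state function (the function name comma_cat and the contacts table are illustrative, not from the post; syntax follows the 7.x-era CREATE AGGREGATE form he uses). Handling the empty initial state in a CASE sidesteps the trailing-comma finalisation routine he mentions:

```sql
-- State function: append the next value, inserting ', ' except for
-- the very first element.
CREATE FUNCTION comma_cat(text, text) RETURNS text AS '
    SELECT CASE WHEN $1 = ''''
                THEN $2
                ELSE $1 || '', '' || $2
           END
' LANGUAGE 'sql';

CREATE AGGREGATE catenate (
    sfunc1   = comma_cat,
    basetype = text,
    stype1   = text,
    initcond1 = ''
);

-- Usage: one row per client, contacts joined with commas
-- (ordering of contacts is still not well-defined, as Richard warns).
SELECT client, catenate(firstname || ' ' || lastname)
FROM contacts
GROUP BY client;
```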
Re: [SQL] perl dbd
Hi,

probably some environment variables are not set... these are used by the
DBD::Pg install to determine include and lib directories:

POSTGRES_INCLUDE
POSTGRES_LIB

If you installed postgres into /opt/postgres, do

export POSTGRES_INCLUDE=/opt/postgres/include
export POSTGRES_LIB=/opt/postgres/lib

and have a try again...

Johan

Ken Kline wrote:
> my apologies if this is not the correct list, but I cannot seem to
> install the package DBD-Pg-0.73-1.i386.rpm
>
> it complains that it needs libpq.so.1
>
> i have the following installed from a source package rebuild:
>
> postgresql-7.0.3-2
> ...server
> ...devel
> ...perl
> ...tk
> ...odbc
> ...tcl
>
> thanks as always
>
> Ken
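[Editor's sketch] Johan's setup as a shell session, assuming the /opt/postgres prefix from his example (adjust both paths to wherever your PostgreSQL installation actually lives):

```shell
# Point the DBD::Pg source build at the PostgreSQL headers and libraries.
# /opt/postgres is the example prefix from the post; substitute your own.
export POSTGRES_INCLUDE=/opt/postgres/include
export POSTGRES_LIB=/opt/postgres/lib

# Then rebuild DBD::Pg from source in its unpacked directory so the
# build picks these up (commented out here since it needs the tarball):
#   perl Makefile.PL && make && make install

echo "$POSTGRES_INCLUDE"
```

Note this helps a source build; the original RPM complaint about libpq.so.1 suggests the packaged DBD::Pg was built against a different PostgreSQL library version than the one installed.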
[SQL] from PostgreSQL 7.1b3 to 7.0.3
I have PostgreSQL 7.1b3 running on one of our test servers. It seems
like 7.1b3 is not very stable. I want to go back to 7.0.3 since it is
the most stable version available. I am just wondering what I should
do: can I reinstall 7.0.3 over 7.1b3 directly? If not, what steps
should I take? I can always use pg_dump to dump the data and stuff...

By the way, 7.1b3 is crashing 3 to 4 times a week. We have 7.0.3
running on another server and it rarely crashes.

Thanks in advance.

Regards,
Najm Hashmi
[SQL] cannot get CREATE TABLE AS to work
I'm sure I'm doing something wrong, and I'm hoping someone can show me
the way of things. Running 7.1beta5 on an Ultra 5, Solaris 2.6 w/256Mb
mem. If I remove the AS, the table creates correctly and I can do the
INSERT INTO with the SELECT clause.

psql -d tassiv -c "\
  create table observationsII ( \
    ra float8 not null, \
    decl float8 not null, \
    mag float8 not null, \
    smag float8 not null, \
    obs_id serial, \
    file_id int4 references files on delete cascade, \
    star_id int4 references comp_loc on delete set null default null ) \
  AS select o.ra, o.decl, o.mag, o.smag, o.file_id from observations o"

ERROR:  parser: parse error at or near "AS"

And the next question: should this really be taking 3 hours to insert
315446 records? I noticed the disk is basically idle during the few
times when I watched. Would this be because of the index created on
obs_id?

time psql -d tassiv -c "insert into observationsII( ra, decl, mag, smag, file_id ) \
  select ra, decl, mag, smag, file_id from observations"

INSERT 0 315446
0.01u 0.01s 3:13:22.39 0.0%

Robert Creager
Senior Software Engineer
Client Server Library
303.673.2365 V  303.661.5379 F  888.912.4458 P
StorageTek
INFORMATION made POWERFUL
Re: [SQL] List Concatenation
Richard Huxton <[EMAIL PROTECTED]> writes:
> But - if you don't care about the order of contacts you can define an
> aggregate function:
> create aggregate catenate(sfunc1=textcat, basetype=text, stype1=text, initcond1='');
> Then group by client and catenate(firstname || ' ' || lastname)

With a custom aggregate you could make the aggregate function
responsible for handling the ordering of contacts:

select client, contactlist(contact) from table group by client;

If I were doing this, I'd make the aggregate state variable be "array
of text", and have the transition function simply append each new value
to the array. (Or, if you're willing to assume that no newlines appear
in the contact names, the state variable can be plain text and can list
the contacts one per line.) Then the finalization function would sort
the array elements and concatenate them with inserted commas. These two
functions would be pretty trivial to write in pltcl or plperl, either
of which are the tool of first choice for string-bashing problems.

This wouldn't scale very well to huge numbers of contacts per client,
but for the numbers that would be reasonable to print out as single
lines of a report it should work fine.

> Note that this is probably not a good idea - the ordering of the
> contacts will not be well-defined. When I asked about this Tom Lane was
> quite surprised that it worked, so no guarantees about long-term suitability.

I don't recall the prior conversation, but certainly user-defined
aggregates are not going away...

regards, tom lane
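[Editor's sketch] Tom's array-state design, written in plain SQL rather than the pltcl/plperl he suggests, and in modern PostgreSQL syntax (dollar quoting, array_append, unnest, and the sfunc/stype/finalfunc spelling of CREATE AGGREGATE all postdate this 7.1-era thread; function names are illustrative):

```sql
-- Transition function: append each new contact to the state array.
CREATE FUNCTION contactlist_trans(text[], text) RETURNS text[] AS $$
    SELECT array_append($1, $2);
$$ LANGUAGE sql;

-- Finalisation: sort the array elements, then join them with commas.
CREATE FUNCTION contactlist_final(text[]) RETURNS text AS $$
    SELECT array_to_string(ARRAY(SELECT unnest($1) ORDER BY 1), ', ');
$$ LANGUAGE sql;

CREATE AGGREGATE contactlist(text) (
    sfunc     = contactlist_trans,
    stype     = text[],
    finalfunc = contactlist_final,
    initcond  = '{}'
);

-- Usage, as in Tom's example:
--   SELECT client, contactlist(contact) FROM contacts GROUP BY client;
```

Sorting in the finaliser is what makes the output order well-defined, which was the weakness of the plain textcat approach.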
Re: [SQL] from PostgreSQL 7.1b3 to 7.0.3
Najm Hashmi <[EMAIL PROTECTED]> writes:
> By the way 7.1b3 is crashing 3 to 4 times a week.

It would be nice to have some bug reports that might allow us to fix
those crashes.

And no, you can't go back to 7.0 without dump/initdb/reload.

regards, tom lane
Re: [SQL] List Concatenation
Tom, Richard,

Thanks for the advice, guys! This being Postgres, I *knew* there would
be other options.

> > create aggregate catenate(sfunc1=textcat, basetype=text,
> > stype1=text, initcond1='');
> >
> > Then group by client and catenate(firstname || ' ' || lastname)
>
> With a custom aggregate you could make the aggregate function
> responsible for handling the ordering of contacts:
>
> select client, contactlist(contact) from table group by client;
>
> If I were doing this, I'd make the aggregate state variable be "array
> of text", and have the transition function simply append each new
> value to the array. (Or, if you're willing to assume that no newlines
> appear in the contact names, the state variable can be plain text and
> can list the contacts one per line.) Then the finalization function
> would sort the array elements and concatenate them with inserted
> commas. These two functions would be pretty trivial to write in pltcl
> or plperl, either of which are the tool of first choice for
> string-bashing problems.

Hmmm... neither of these options sounds like it would be faster or more
scalable than a simple PL/pgSQL function which loops through the names
and appends them to a string. Perhaps for Phase II of our project I'll
be able to afford somebody to write a custom aggregate in C.

-Josh Berkus

__AGLIO DATABASE SOLUTIONS___
Josh Berkus
Complete information technology      [EMAIL PROTECTED]
and data management solutions        (415) 565-7293
for law firms, small businesses      fax 621-2533
and non-profit organizations.        San Francisco
Re: [SQL] cannot get CREATE TABLE AS to work
Robert,

I can't help you with your performance problem, but I can help you with
CREATE TABLE AS. You've mistaken the syntax; CREATE TABLE AS does not
use column definitions other than the query. Thus, the correct syntax
should be:

create table observationsII
  AS select o.ra, o.decl, o.mag, o.smag, o.file_id
  from observations o;

-Josh Berkus

__AGLIO DATABASE SOLUTIONS___
Josh Berkus
Complete information technology      [EMAIL PROTECTED]
and data management solutions        (415) 565-7293
for law firms, small businesses      fax 621-2533
and non-profit organizations.        San Francisco
Re: [SQL] cannot get CREATE TABLE AS to work
Robert,

> Thanks for the pointers. I'm actually working on modifying the
> structure of an existing db, so this is all within Pg. Those INSERT
> INTOs with SELECTs are painfully slow, and I have a larger table to do
> this to... I guess Perl will have to rescue me...

Why don't you post your SQL statements, table definitions, and EXPLAIN
results? I'm sure one or more of the performance experts (Tom, Stephan)
could find some improvements.

It doesn't make sense that CREATE TABLE AS should be faster than INSERT
INTO ... SELECT. It should be *slower* unless you have 2 dozen indexes
on the target table.

-Josh Berkus

__AGLIO DATABASE SOLUTIONS___
Josh Berkus
Complete information technology      [EMAIL PROTECTED]
and data management solutions        (415) 565-7293
for law firms, small businesses      fax 621-2533
and non-profit organizations.        San Francisco
Re: [SQL] cannot get CREATE TABLE AS to work
"Creager, Robert S" <[EMAIL PROTECTED]> writes:
> And the next question, should this really be taking 3 hours to insert
> 315446 records? I noticed the disk is basically idle during the few
> times when I watched. Would this be because of the index created on
> obs_id?

Not for a single index. I can believe that the referential integrity
checks you're requiring on file_id and star_id might take that much
time, however.

regards, tom lane
Re: [SQL] cannot get CREATE TABLE AS to work
Robert,

> How then can I add in a DEFAULT nextval in place of SERIAL and get the
> REFERENCES in there? Or can I?

You can't (as far as I know). If that's important to you, you need to
create the table first with a regular CREATE TABLE statement, then do
INSERT INTO.

CREATE TABLE AS is, I believe, primarily for temporary tables. It's
also a good idea to stay away from CREATE TABLE AS in applications that
might have to be portable; this is one statement whose syntax varies
widely from platform to platform. For example, Transact-SQL (MS) uses
SELECT ... INTO instead of CREATE TABLE AS.

-Josh Berkus

__AGLIO DATABASE SOLUTIONS___
Josh Berkus
Complete information technology      [EMAIL PROTECTED]
and data management solutions        (415) 565-7293
for law firms, small businesses      fax 621-2533
and non-profit organizations.        San Francisco
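[Editor's sketch] The two-step alternative Josh describes, using the table and column definitions from Robert's original post - a regular CREATE TABLE carrying the SERIAL column and REFERENCES clauses, followed by a populating INSERT:

```sql
-- Step 1: full CREATE TABLE with the serial column and FK references
-- (definitions copied from Robert's earlier message).
create table observationsII (
    ra      float8 not null,
    decl    float8 not null,
    mag     float8 not null,
    smag    float8 not null,
    obs_id  serial,
    file_id int4 references files on delete cascade,
    star_id int4 references comp_loc on delete set null default null
);

-- Step 2: populate it; obs_id fills in from its sequence default,
-- star_id from its null default.
insert into observationsII (ra, decl, mag, smag, file_id)
  select o.ra, o.decl, o.mag, o.smag, o.file_id
  from observations o;
```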
Re: [SQL] cannot get CREATE TABLE AS to work
Creager, Robert S writes:

> psql -d tassiv -c "\
>   create table observationsII ( \
>     ra float8 not null, \
>     decl float8 not null, \
>     mag float8 not null, \
>     smag float8 not null, \
>     obs_id serial, \
>     file_id int4 references files on delete cascade, \
>     star_id int4 references comp_loc on delete set null default null ) \
>   AS select o.ra, o.decl, o.mag, o.smag, o.file_id from observations o"
>
> ERROR:  parser: parse error at or near "AS"

Either you have an AS clause, or a column list, not both.

> And the next question, should this really be taking 3 hours to insert
> 315446 records? I noticed the disk is basically idle during the few
> times when I watched. Would this be because of the index created on
> obs_id?

Quite likely. Also make sure you wrap the inserts into a BEGIN/COMMIT
block.

-- Peter Eisentraut   [EMAIL PROTECTED]   http://yi.org/peter-e/
Re: [SQL] cannot get CREATE TABLE AS to work
Robert,

> I suspect that the INSERT INTO SELECT in this case will take longer
> than a CREATE TABLE AS because of the referential integrity check
> needed on every INSERT (per Tom Lane).

In that case, what about:

a) dropping the referential integrity check;
b) making the referential integrity check deferrable (there's a way to
   do this, it was discussed a couple weeks ago - ask Tom).

-Josh Berkus

__AGLIO DATABASE SOLUTIONS___
Josh Berkus
Complete information technology      [EMAIL PROTECTED]
and data management solutions        (415) 565-7293
for law firms, small businesses      fax 621-2533
and non-profit organizations.        San Francisco
Re: [SQL] cannot get CREATE TABLE AS to work
On Fri, 9 Mar 2001, Josh Berkus wrote:

> Robert,
>
> > I suspect that the INSERT INTO SELECT in this case will take longer
> > than a CREATE TABLE AS because of the referential integrity check
> > needed on every INSERT (per Tom Lane).
>
> In that case, what about:
>
> a) dropping the referential integrity check;

Unfortunately, if he adds it back in with ALTER TABLE, that's going to
be slow as well. I did it in a fashion I felt was cleaner code, but in
practice I think the implementation's performance is poor enough that
it might be worth doing the less clean way (running a single select
looking for failing rows, when possible, on ALTER TABLE rather than
checking each row -- less clean because it means keeping information on
what the fk check is in multiple places. :( )

> b) making the referential integrity check deferrable (there's a way to
>    do this, it was discussed a couple weeks ago - ask Tom).

Well, you can always add deferrable initially immediate to the
constraint and run a SET CONSTRAINTS ALL DEFERRED, but I'm not sure
that'll actually be much faster; it still does a check per row, I
believe.

It's hacky, but I'd say, if you don't have other triggers you care
about, twiddle pg_class.reltriggers for the class to 0, do the insert,
set it back to what it was before, and then run selects to make sure
the data is valid (ie, would the constraint have failed).

[ assuming one column, something like:

  select * from fktable where not exists
    (select * from pktable where pktable.pkcol=fktable.fkcol);
]
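[Editor's sketch] Stephan's trigger-disabling workaround as a full sequence. It is hacky, as he says: reltriggers is a 7.x-era pg_class column (later removed), editing system catalogs directly is at your own risk, and the table/column names here are placeholders:

```sql
-- Disable all triggers on the target table, including the FK check
-- triggers ('observationsii' is a placeholder relation name).
UPDATE pg_class SET reltriggers = 0 WHERE relname = 'observationsii';

-- ... do the bulk INSERT ... SELECT here ...

-- Restore the trigger count from pg_trigger rather than hard-coding it.
UPDATE pg_class
  SET reltriggers = (SELECT count(*) FROM pg_trigger
                     WHERE tgrelid = pg_class.oid)
  WHERE relname = 'observationsii';

-- Verify the constraint by hand: any rows this returns would have
-- failed the foreign key check.
SELECT * FROM fktable f
  WHERE NOT EXISTS
    (SELECT 1 FROM pktable p WHERE p.pkcol = f.fkcol);
```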
RE: [SQL] cannot get CREATE TABLE AS to work
Well, that explains why I wasn't seeing any appreciable speed increase
with the INITIALLY DEFERRED. I tried mucking in pg_class, and saw a
3-fold increase in insert speed on inserts into my table with 2
relational triggers. SET CONSTRAINTS ALL DEFERRED does nothing to very
little to increase the insertion speed: 15min 'INITIALLY DEFERRED' vs
13min 'ALL DEFERRED' vs 5min 'pg_class update'. And that 15 vs 13 could
be machine activity.

I'll apologize to anyone trying to follow this one-sided conversation,
as I wasn't paying attention to the e-mail headers and thought Josh was
replying to me only.

Cheers,
Rob

> -Original Message-
>
> Well, you can always add deferrable initially immediate to the
> constraint and run a SET CONSTRAINTS ALL DEFERRED, but I'm not sure
> that'll actually be much faster, it still does a check per row I
> believe.
>
> It's hacky, but I'd say, if you don't have other triggers you care
> about, twiddle pg_class.reltriggers for the class to 0, do the insert,
> set it back to what it was before and then run selects to make sure
> the data is valid (ie, would the constraint have failed).
>
> [ assuming one column, something like:
>
>   select * from fktable where not exists
>     (select * from pktable where pktable.pkcol=fktable.fkcol);
> ]
RE: [SQL] cannot get CREATE TABLE AS to work
On Fri, 9 Mar 2001, Creager, Robert S wrote:

> Well, that explains why I wasn't seeing any appreciable speed increase
> with the INITIALLY DEFERRED. I tried mucking in pg_class, and saw a
> 3-fold increase in insert speed on inserts into my table with 2
> relational triggers. SET CONSTRAINTS ALL DEFERRED does nothing to very
> little to increase the insertion speed. 15min 'INITIALLY DEFERRED' vs
> 13min 'ALL DEFERRED' vs 5min 'pg_class update'. And that 15 vs 13
> could be machine activity.

Yeah, theoretically if we could do something where it knew that there
were a lot of them and tried to fall back to doing a single big check
rather than lots of little ones, we'd get a performance increase, but I
can't really think of a good way to do that with what we have right
now...
[SQL] Optimization via explicit JOINs
Greetings,

I've been toying around with postgres 7.1beta5's ability to control the
planner via explicitly JOINing tables. I then (just for giggles) compare
the difference in the EXPLAIN results. I'm no super-mondo-DBA or
anything, but in my two attempts so far, the numbers I get out of
EXPLAIN have been about 1/2 as small.

Below are two EXPLAIN results. Am I correct in reading that one is
indeed "twice as fast" as the other? I say twice as fast because the
top-most cost in the first query is 58.62, but in the second one it's
only 32.09. Am I reading this correctly?

-- First EXPLAIN --
Sort  (cost=58.62..58.62 rows=14 width=60)
  ->  Nested Loop  (cost=0.00..58.35 rows=14)
        ->  Nested Loop  (cost=0.00..29.99 rows=14)
              ->  Seq Scan on playlist p  (cost=0.00..1.61 rows=14)
              ->  Index Scan using songs_pkey on songs s  (cost=0.00..2.01 rows=1)
        ->  Index Scan using artists_pkey on artists a  (cost=0.00..2.01 rows=1)

-- Second EXPLAIN --
Sort  (cost=32.09..32.09 rows=1)
  ->  Nested Loop  (cost=0.00..32.08 rows=1)
        ->  Nested Loop  (cost=0.00..30.06 rows=1)
              ->  Seq Scan on playlist p  (cost=0.00..1.61 rows=14)
              ->  Index Scan using songs_pkey on songs s  (cost=0.00..2.02 rows=1)
        ->  Index Scan using artists_pkey on artists a  (cost=0.00..2.01 rows=1)

-- Dave
Re: [SQL] Optimization via explicit JOINs
On Fri, 9 Mar 2001, David Olbersen wrote:

> Greetings,
> I've been toying around with postgres 7.1beta5's ability to control
> the planner via explicitly JOINing tables. I then (just for giggles)
> compare the difference in the EXPLAIN results.
>
> I'm no super-mondo-DBA or anything, but in my two attempts so far, the
> numbers I get out of EXPLAIN have been about 1/2 as small.
>
> Below are two EXPLAIN results, am I correct in reading that one is
> indeed "twice as fast" as the other? I say twice as fast because the
> top-most cost in the first query is 58.62, but in the second one it's
> only 32.09. Am I reading this correctly?

Not entirely. Those are only estimates, so they don't entirely line up
with reality. Also, I notice the first estimates 14 rows and the second
1, which is probably why the estimate is higher. In practice it
probably won't be significantly different.
Re: [SQL] Optimization via explicit JOINs
On Fri, 9 Mar 2001, Stephan Szabo wrote:

-> Not entirely. Those are only estimates, so they don't entirely line
-> up with reality. Also, I notice the first estimates 14 rows and the
-> second 1, which is probably why the estimate is higher. In practice
-> it probably won't be significantly different.

So really I'm just getting back estimations of cost and rows returned?
Incidentally, both queries returned the same data set; that's a Good
Thing (tm).

-- Dave
Re: [SQL] Optimization via explicit JOINs
> On Fri, 9 Mar 2001, Stephan Szabo wrote:
>
> -> Not entirely. Those are only estimates, so they don't entirely line
> -> up with reality. Also, I notice the first estimates 14 rows and the
> -> second 1, which is probably why the estimate is higher. In practice
> -> it probably won't be significantly different.
>
> So really I'm just getting back estimations of cost and rows returned?
> Incidentally, both queries returned the same data set, that's a Good
> Thing (tm).

Yeah, explain is mostly for showing what it's going to do and a little
bit of why it thinks it's a good idea.

Hmm, what were the two queries anyway?
Re: [SQL] Optimization via explicit JOINs
On Fri, 9 Mar 2001, Stephan Szabo wrote:

-> Hmm, what were the two queries anyway?

The "slower" query:

SELECT
  to_char( p.insertion_time, 'HH:MI AM MM/DD' ) as time_in,
  s.name   as title,
  a.name   as artist,
  s.length as length
FROM
  playlist p,
  songs    s,
  artists  a
WHERE
  p.waiting = TRUE AND
  p.song_id = s.song_id AND
  s.artist_id = a.artist_id
ORDER BY p.item_id

The "faster" query:

SELECT
  to_char( p.insertion_time, 'HH:MI AM MM/DD' ) as time_in,
  s.name   as title,
  s.length as length,
  a.name   as artist
FROM
  playlist p JOIN songs s USING (song_id),
  artists a
WHERE
  p.waiting = TRUE AND
  p.song_id = s.song_id AND
  s.artist_id = a.artist_id
ORDER BY p.item_id;

Notice how the only difference is in the FROM clause?

-- Dave
Re: [SQL] from PostgreSQL 7.1b3 to 7.0.3
On Sun, 11 Mar 2001 05:27, Najm Hashmi wrote:
> I have PostgreSQL 7.1b3 running on one of our test servers. It seems
> like PostgreSQL 7.1b3 is not very stable. I want to go back to 7.0.3
> since it is the most stable version available.

Another, imho better, alternative is to move forward to either the
beta5 release or to use the current cvs. There have been big and good
changes between 7.0.x and 7.1. If you wait just a few (approximately
10) days, 7.1 will be released as a public release.

-- Sincerely etc.,
NAME       Christopher Sawtell
CELL PHONE 021 257 4451
ICQ UIN    45863470
EMAIL      csawtell @ xtra . co . nz
CNOTES     ftp://ftp.funet.fi/pub/languages/C/tutorials/sawtell_C.tar.gz
-->> Please refrain from using HTML or WORD attachments in e-mails to me <<--
Re: [SQL] Optimization via explicit JOINs
On Fri, 9 Mar 2001, David Olbersen wrote:

> On Fri, 9 Mar 2001, Stephan Szabo wrote:
>
> -> Hmm, what were the two queries anyway?
>
> The "slower" query:
>
> SELECT
>   to_char( p.insertion_time, 'HH:MI AM MM/DD' ) as time_in,
>   s.name   as title,
>   a.name   as artist,
>   s.length as length
> FROM
>   playlist p,
>   songs    s,
>   artists  a
> WHERE
>   p.waiting = TRUE AND
>   p.song_id = s.song_id AND
>   s.artist_id = a.artist_id
> ORDER BY p.item_id
>
> The "faster" query:
>
> SELECT
>   to_char( p.insertion_time, 'HH:MI AM MM/DD' ) as time_in,
>   s.name   as title,
>   s.length as length,
>   a.name   as artist
> FROM
>   playlist p JOIN songs s USING (song_id),
>   artists a
> WHERE
>   p.waiting = TRUE AND
>   p.song_id = s.song_id AND
>   s.artist_id = a.artist_id
> ORDER BY p.item_id;
>
> Notice how the only difference is in the FROM clause?

Yeah. It's getting the same plan, just a slightly different number of
estimated rows (14 and 1) from the join of p to s.

As a question, how many rows does

select * from playlist p join songs s using (song_id)
  where p.waiting = TRUE;

actually result in?
Re: [SQL] Optimization via explicit JOINs
On Fri, 9 Mar 2001, Stephan Szabo wrote:

-> As a question, how many rows does
-> select * from playlist p join songs s using (song_id) where
-> p.waiting=TRUE;
-> actually result in?

Well, it depends. Most of the time that playlist table is "empty" (no
rows where waiting = TRUE), however users can (in a round-about way)
insert into that table, so there could be anywhere from 10, to 2,342,
to more.

Why do you ask?

(The reason those plans chose 14 was because, at the time, there were
14 rows in playlist)

-- Dave
Re: [SQL] Optimization via explicit JOINs
Darn. Well, one of the queries picked that 1 row was going to survive
the nested loop step and the other said 14. I was wondering which one
was closer to being correct at that time.

On Fri, 9 Mar 2001, David Olbersen wrote:

> On Fri, 9 Mar 2001, Stephan Szabo wrote:
>
> -> As a question, how many rows does
> -> select * from playlist p join songs s using (song_id) where
> -> p.waiting=TRUE;
> -> actually result in?
>
> Well, it depends. Most of the time that playlist table is "empty" (no
> rows where waiting = TRUE), however users can (in a round about way)
> insert into that table, so that there could be anywhere from 10, to
> 2,342, to more.
>
> Why do you ask?
>
> (The reason those plans chose 14 was because, at the time, there were
> 14 rows in playlist)
Re: [SQL] Optimization via explicit JOINs
Actually, just thought of something else. If you remove the probably
redundant p.song_id=s.song_id from the second query (since the join
... using should do that), does it change the explain output?

On Fri, 9 Mar 2001, David Olbersen wrote:

> On Fri, 9 Mar 2001, Stephan Szabo wrote:
>
> -> As a question, how many rows does
> -> select * from playlist p join songs s using (song_id) where
> -> p.waiting=TRUE;
> -> actually result in?
>
> Well, it depends. Most of the time that playlist table is "empty" (no
> rows where waiting = TRUE), however users can (in a round about way)
> insert into that table, so that there could be anywhere from 10, to
> 2,342, to more.
>
> Why do you ask?
>
> (The reason those plans chose 14 was because, at the time, there were
> 14 rows in playlist)
Re: [SQL] Optimization via explicit JOINs
Stephan Szabo <[EMAIL PROTECTED]> writes:
> Actually, just thought of something else. If you remove the probably
> redundant p.song_id=s.song_id from the second query (since the join
> ... using should do that) does it change the explain output?

I was just about to point that out. The WHERE clause *is* redundant
with the JOIN ... USING clause, but the planner will not recognize
that, and accordingly will multiply the estimated selectivity of the
two clauses together. So the output row count for the JOIN form is
misleadingly small. If you remove the redundant WHERE clause then you
should get identical planning estimates for both forms of the query.

The planner does actually recognize and discard duplicate qualifiers
that appear in the same place (eg, WHERE p.song_id=s.song_id AND
p.song_id=s.song_id), but it's not so smart about qualifiers that are
in different parts of the query...

regards, tom lane
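[Editor's sketch] The fix Tom and Stephan converge on, applied to David's "faster" query from earlier in the thread - drop the WHERE qualifier that duplicates the JOIN ... USING condition so the selectivity isn't counted twice:

```sql
SELECT
  to_char( p.insertion_time, 'HH:MI AM MM/DD' ) as time_in,
  s.name   as title,
  s.length as length,
  a.name   as artist
FROM
  playlist p JOIN songs s USING (song_id),
  artists a
WHERE
  p.waiting = TRUE AND
  -- p.song_id = s.song_id removed: JOIN ... USING (song_id)
  -- already enforces it, and repeating it skews the row estimate.
  s.artist_id = a.artist_id
ORDER BY p.item_id;
```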
