[SQL] Need a SQL to create sets of hobbies

2006-09-19 Thread CN
Hi!

CREATE TABLE x1 (name TEXT, hobby TEXT, PRIMARY KEY(name,hobby));
INSERT INTO x1 VALUES('John','music');
INSERT INTO x1 VALUES('John','arts');
INSERT INTO x1 VALUES('Bob','arts');
INSERT INTO x1 VALUES('Bob','music');
INSERT INTO x1 VALUES('Rocky','computer');
INSERT INTO x1 VALUES('Steve','
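The question is truncated, so the exact goal is unclear; if "sets of hobbies" means collapsing each person's rows into one set and then finding people who share the same set, a sketch on reasonably recent PostgreSQL (array_agg ... ORDER BY needs 9.0 or later) would be:

    -- One row per person, with that person's hobbies as a sorted array.
    SELECT name, array_agg(hobby ORDER BY hobby) AS hobbies
    FROM x1
    GROUP BY name;

    -- People who have exactly the same hobby set (e.g. John and Bob above).
    SELECT hobbies, array_agg(name) AS people
    FROM (SELECT name, array_agg(hobby ORDER BY hobby) AS hobbies
          FROM x1
          GROUP BY name) AS per_person
    GROUP BY hobbies
    HAVING count(*) > 1;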

Re: [SQL] Nested loops are killing throughput

2006-09-19 Thread CG
> Is that the one off gborg? It's broken because the equality function is
> marked volatile, and so the planner is afraid to try to use it for
> merging or hashing. (It's also not marked strict, which means you can
> trivially crash the backend by passing it a null ...)

It is the one from gborg. Tha

Re: [SQL] Nested loops are killing throughput

2006-09-19 Thread Tom Lane
CG <[EMAIL PROTECTED]> writes:
>> If packet_status is large, that seems like a perfectly reasonable plan
>> to me. If not ... what data type is packet_uuid? Is its equality op
>> marked mergeable or hashable?
> It is of type uniqueidentifier ...

Is that the one off gborg? It's broken because
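One way to answer that question directly is to look at how the operator's underlying function is flagged in the catalogs (the type name comes from later in this thread; this is just a diagnostic sketch):

    -- provolatile: 'i' immutable, 's' stable, 'v' volatile
    -- proisstrict: true means the function is never called with NULL arguments
    -- (the merge/hash flags themselves live in pg_operator and their column
    --  names differ between releases)
    SELECT o.oprname, p.proname, p.provolatile, p.proisstrict
    FROM pg_operator o
    JOIN pg_proc p ON p.oid = o.oprcode
    WHERE o.oprname  = '='
      AND o.oprleft  = 'uniqueidentifier'::regtype
      AND o.oprright = 'uniqueidentifier'::regtype;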

[SQL] Make Trigger run after completion of ENTIRE transaction

2006-09-19 Thread Henry Ortega
This may be more of a theoretical question: can you actually make a trigger run after completion of the entire transaction?

Here's what I have:

LOG
user | startdate | enddate

enddate is getting updated by a trigger (on insert or update). I have the following transaction:

BEGIN
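The excerpt stops at BEGIN, but the closest built-in answer (a sketch; the table and function names here are placeholders) is a constraint trigger marked DEFERRABLE INITIALLY DEFERRED, which postpones its firing to COMMIT time rather than running after each statement:

    -- Placeholder names: "log" table and update_enddate() trigger function.
    -- A deferred constraint trigger still fires once per row, but only when the
    -- transaction commits, so it sees the transaction's complete set of changes.
    CREATE CONSTRAINT TRIGGER log_enddate_at_commit
        AFTER INSERT OR UPDATE ON log
        DEFERRABLE INITIALLY DEFERRED
        FOR EACH ROW
        EXECUTE PROCEDURE update_enddate();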

Re: [SQL] Nested loops are killing throughput

2006-09-19 Thread CG
> If packet_status is large, that seems like a perfectly reasonable plan
> to me. If not ... what data type is packet_uuid? Is its equality op
> marked mergeable or hashable?

It is of type uniqueidentifier ...

CREATE OPERATOR =(
    PROCEDURE = uniqueidentifier_eq,
    LEFTARG = uniqueidentifier,
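The CREATE OPERATOR statement is cut off above. For the planner to even consider a merge or hash join on this type, the operator has to advertise those capabilities; a sketch of what that declaration might look like (assuming uniqueidentifier_eq really is strict and immutable and the type has matching btree/hash support; on 8.1 MERGES also wanted SORT1/SORT2 sort operators):

    -- Sketch only: MERGES and HASHES are promises to the planner, valid only if
    -- the type actually has working sort/hash support behind them.
    CREATE OPERATOR = (
        PROCEDURE  = uniqueidentifier_eq,
        LEFTARG    = uniqueidentifier,
        RIGHTARG   = uniqueidentifier,
        COMMUTATOR = =,
        NEGATOR    = <>,
        RESTRICT   = eqsel,
        JOIN       = eqjoinsel,
        MERGES,
        HASHES
    );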

Re: [SQL] Nested loops are killing throughput

2006-09-19 Thread Tom Lane
CG <[EMAIL PROTECTED]> writes:
> I'm still getting the nested loop for a join.
> Nested Loop (cost=1.00..100013378.98 rows=2206 width=145)
>   (actual time=46.743..18202.318 rows=2225 loops=1)
>   -> Index Scan using packet_user_idx on packet dp
>      (cost=0.00..88.03 rows=2206 width=125)

Re: [SQL] Nested loops are killing throughput

2006-09-19 Thread CG
- Original Message -
From: Tom Lane <[EMAIL PROTECTED]>
To: CG <[EMAIL PROTECTED]>
Cc: pgsql-sql@postgresql.org
Sent: Tuesday, September 19, 2006 11:03:07 AM
Subject: Re: [SQL] Nested loops are killing throughput

CG <[EMAIL PROTECTED]> writes:
> -> Nested Loop (cost=0

Re: [SQL] Count and list only where count is bigger than 1

2006-09-19 Thread A. Kretschmer
On Tue, 19.09.2006 at 12:04:55 -0300, Ezequias Rodrigues da Rocha wrote the following:
> Hi list,
>
> Is it possible to make a count select and only display where the count column is
> bigger than 1?
>
> My SQL is like this
>
> SELECT distinct cli.bairro, COUNT( * ) as qtd
> FROM base.cliente cl

Re: [SQL] Count and list only where count is bigger than 1

2006-09-19 Thread bnichols
> Hi list,
>
> Is it possible to make a count select and only display where the count column
> is bigger than 1?
>
> My SQL is like this
>
> SELECT distinct cli.bairro, COUNT( * ) as qtd
> FROM base.cliente cli
> GROUP BY cli.cidade, cli.bairro
> ORDER BY 2
>
> I noticed that I cannot use "where qtd

Re: [SQL] Count and list only where count is bigger than 1

2006-09-19 Thread Mezei Zoltán
Ezequias Rodrigues da Rocha wrote:
> SELECT distinct cli.bairro, COUNT( * ) as qtd
> FROM base.cliente cli
> GROUP BY cli.cidade, cli.bairro
> ORDER BY 2
>
> I noticed that I cannot use "where qtd > 1", ok? What to do?

Google/read a tutorial for HAVING. E.g.

SELECT distinct cli.bairro, COUNT( * )
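The reply is truncated, but the complete suggestion is to move the condition into HAVING, which filters after grouping (a sketch based on the query quoted above):

    -- HAVING filters groups after aggregation, which is what "WHERE qtd > 1"
    -- was trying to do. DISTINCT is redundant once GROUP BY is in place.
    SELECT cli.bairro, COUNT(*) AS qtd
    FROM base.cliente cli
    GROUP BY cli.cidade, cli.bairro
    HAVING COUNT(*) > 1
    ORDER BY 2;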

[SQL] Count and list only where count is bigger than 1

2006-09-19 Thread Ezequias Rodrigues da Rocha
Hi list,

Is it possible to make a count select and only display where the count column is bigger than 1?

My SQL is like this:

SELECT distinct cli.bairro, COUNT( * ) as qtd
FROM base.cliente cli
GROUP BY cli.cidade, cli.bairro
ORDER BY 2

I noticed that I cannot use "where qtd > 1", ok? What to do?

Regards ...

Re: [SQL] Nested loops are killing throughput

2006-09-19 Thread Tom Lane
CG <[EMAIL PROTECTED]> writes:
> -> Nested Loop (cost=0.00..744.28 rows=1 width=16)
>    (actual time=31.227..6980.765 rows=5436 loops=1)
>    -> Nested Loop (cost=0.00..135.29 rows=101 width=16)
>       (actual time=25.514..273.660 rows=5436 loops=1)
> ...
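The reply is cut off, but the fragments above show estimates of 1 and 101 rows where 5436 actually come back. Whatever Tom Lane went on to say, badly skewed estimates like that are commonly attacked with fresher or finer-grained statistics; a generic sketch (not necessarily the fix this thread settled on; packet and packet_uuid are the names that appear elsewhere in the thread):

    -- Refresh planner statistics for the table in the join.
    ANALYZE packet;

    -- If estimates stay far off, collect a larger histogram/MCV sample for the
    -- join column and re-analyze.
    ALTER TABLE packet ALTER COLUMN packet_uuid SET STATISTICS 200;
    ANALYZE packet;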

[SQL] Nested loops are killing throughput

2006-09-19 Thread CG
PostgreSQL 8.1. I've tried turning off nested loops to see what the query planner would choose instead. It chose a hash join, and it still had a nested loop in the mix! How can I entice the query planner to use a simpler join scheme? What criteria are used to determine whether or
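"Turning off nested loops" presumably refers to the planner toggle below; note that it only penalizes nested loops rather than forbidding them, which is why one can still appear in the plan. The join itself is a hypothetical reconstruction from table/column names that appear later in the thread:

    -- Discourage (not forbid) nested-loop joins for this session, then compare plans.
    SET enable_nestloop = off;

    -- Hypothetical stand-in for the problem query.
    EXPLAIN ANALYZE
    SELECT dp.*
    FROM packet dp
    JOIN packet_status ps ON ps.packet_uuid = dp.packet_uuid;

    -- Restore the default afterwards.
    SET enable_nestloop = on;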

Re: [SQL] inner join is much faster! is that right?

2006-09-19 Thread James Im
I was doing the explain thing when I discovered my mistake! The thing is that I oversimplified my examples. Instead of searching for one ID, I was searching for several IDs, and I forgot to put parentheses around the OR, like this. I was doing this:

select * from file a, file_tag b, tag c where
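The clause itself is cut off, but the mistake being described is the usual AND/OR precedence trap; a hedged reconstruction with made-up IDs:

    -- AND binds tighter than OR, so without parentheses the join conditions
    -- only apply to the first id and the second id matches every joined row:
    SELECT *
    FROM file a, file_tag b, tag c
    WHERE a.id_file = b.id_file
      AND b.id_tag  = c.id_tag
      AND a.id_file = 100100 OR a.id_file = 100200;   -- wrong: OR escapes the AND chain

    -- With parentheses the OR is confined to the id test, which is what was meant:
    SELECT *
    FROM file a, file_tag b, tag c
    WHERE a.id_file = b.id_file
      AND b.id_tag  = c.id_tag
      AND (a.id_file = 100100 OR a.id_file = 100200);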

Re: [SQL] inner join is much faster! is that right?

2006-09-19 Thread Chris Mair
> I really thought that Postgresql would rewrite a query from
>
> select *
> from file a, file_tag b, tag c
> where a.id_file=b.id_file and b.id_tag=c.id_tag and a.id_file=100100
>
> to something like:
>
> select *
> from (file a inner join file_tag b on (a.id_file=b.id_file)) inner joi

[SQL] Borland Database Engine assumes Varchar(255) as Memo

2006-09-19 Thread Ezequias Rodrigues da Rocha
Hi list,

I don't know if there is anybody using PostgreSQL with Delphi, but I would like to exchange some knowledge with some of you. BDE treats my Varchar(255) as Memo, and that causes many problems in my programming. Has anyone else noticed this and can help me?

Regards ...
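No answer appears in this excerpt; one workaround sometimes used (purely an assumption here, not something from the thread) is to expose the column to BDE with a shorter declared length, for example through a view, since the BDE drivers map long varchars to Memo fields:

    -- Hypothetical sketch; the view name and "observacao" column are placeholders.
    -- Presenting the value as varchar(254) keeps it under the length at which
    -- the BDE driver switches the field over to a Memo/BLOB type.
    CREATE VIEW cliente_bde AS
    SELECT cli.bairro,
           cli.observacao::varchar(254) AS observacao
    FROM base.cliente cli;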

[SQL] inner join is much faster! is that right?

2006-09-19 Thread James Im
I really thought that Postgresql would rewrite a query from

select *
from file a, file_tag b, tag c
where a.id_file=b.id_file and b.id_tag=c.id_tag and a.id_file=100100

to something like:

select *
from (file a inner join file_tag b on (a.id_file=b.id_file)) inner join tag c on (b.id_tag
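The explicit-join version is cut off; its obvious completion is sketched below next to the implicit-join original. For plain inner joins PostgreSQL does plan the two spellings the same way (subject to from_collapse_limit/join_collapse_limit), which is why the real difference turned out to be the missing parentheses described in the follow-up above.

    -- Implicit-join spelling ...
    SELECT *
    FROM file a, file_tag b, tag c
    WHERE a.id_file = b.id_file
      AND b.id_tag  = c.id_tag
      AND a.id_file = 100100;

    -- ... and the explicit INNER JOIN spelling; both produce the same plan
    -- for plain inner joins.
    SELECT *
    FROM (file a INNER JOIN file_tag b ON a.id_file = b.id_file)
         INNER JOIN tag c ON b.id_tag = c.id_tag
    WHERE a.id_file = 100100;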