Re: [SQL] Referential integrity broken (8.0.3), sub-select help

2006-03-22 Thread Patrick JACQUOT

[EMAIL PROTECTED] wrote:


Hello,

I've got 2 tables, "url" (U), and "bookmark" (B), with "bookmark" pointing to 
"url" via FK.
Somehow I ended up with some rows in B referencing non-existent rows in U.
This sounds super strange and dangerous to me, and it's not clear to me how/why 
PG let this happen.
I'm using 8.0.3.

Here are the table references I just mentioned:

Table "bookmark":
id  SERIAL
CONSTRAINT pk_bookmark_id PRIMARY KEY

Table "url":
   url_id  INTEGER
CONSTRAINT fk_bookmark_id REFERENCES bookmark(id)

 

Your DDL doesn't say "B references U", but the contrary: "U references B".
So it's perfectly normal that some tuples in B are not referenced by
tuples in U.

Please correct your constraints.
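
A minimal sketch of the constraint the original poster probably intended
(assuming "url" has an "id" primary key and "bookmark" carries the "url_id"
column used in the query further down; all names here are illustrative):

CREATE TABLE url (
    id SERIAL CONSTRAINT pk_url_id PRIMARY KEY
);

CREATE TABLE bookmark (
    id     SERIAL  CONSTRAINT pk_bookmark_id PRIMARY KEY,
    -- the FK now points from bookmark to url, i.e. "B references U"
    url_id INTEGER CONSTRAINT fk_url_id REFERENCES url(id)
);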


Problem #1: Strange that PG allowed this to happen.  Maybe my DDL above allows 
this to happen and needs to be tightened?  I thought the above would ensure 
referential integrity, but maybe I need to specify something else?

Problem #2: I'd like to find all rows in B that point to non-existent rows in 
U.  I can do it with the following sub-select, I believe, but it's rather 
inefficient (EXPLAIN shows both tables would be sequentially scanned):

 SELECT * FROM bookmark WHERE id NOT IN (SELECT b.id FROM bookmark b, url u 
WHERE b.url_id=u.id);

Is there a more efficient way to get the rows from "bookmark"?

Thanks,
Otis



 


I think for that one Scott's answer is OK.
You could also try SELECT * FROM url U WHERE NOT EXISTS (SELECT * FROM
bookmark B WHERE B.url_id = U.id)

and see which one is faster.
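
For the original request (bookmark rows whose url_id points at no url row),
an anti-join is another option; a sketch assuming the bookmark.url_id and
url.id columns used in the queries above:

SELECT b.*
FROM bookmark b
LEFT JOIN url u ON u.id = b.url_id
WHERE b.url_id IS NOT NULL   -- ignore bookmarks with no url at all
  AND u.id IS NULL;          -- keep only dangling references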



[SQL] COPY tablename FROM and null values

2006-03-22 Thread ivan marchesini
Dear users,
I'm working on a Postgres 7.4 server

I have a .txt file, containing some tabular data, where data are
delimited by TABs.
there are 3 columns:

column1 int4, column2 float8, column3 float8

the problem is that column3 also contains null values (i.e. it is sometimes
empty)

so when I try to use COPY tablename FROM 'filename.txt' I obtain an
error

I have also tried using " WITH NULL AS ' ' " but without good results...

can someone explain to me how to solve this problem?

thank you very much

Ivan
 



-- 
Ivan Marchesini
Department of Civil and Environmental Engineering
University of Perugia
Via G. Duranti 93/a 
06125
Perugia (Italy)
e-mail: [EMAIL PROTECTED]
[EMAIL PROTECTED]
tel: +39(0)755853760
fax: +39(0)755853756
jabber: [EMAIL PROTECTED]






Re: [SQL] COPY tablename FROM and null values

2006-03-22 Thread Achilleus Mantzios
ivan marchesini wrote on Mar 22, 2006:

> Dear users,
> I'm working on a Postgres 7.4 server
> 
> I have a .txt file, containing some tabular data, where data are
> delimited by TABs.
> there are 3 columns:
> 
> column1 int4, column2 float8, column3 float8
> 
> the problem is that column3 contains also null values (i.e. sometimes is
> empty)
> 
> so when I try to use COPY tablename FROM 'filename.txt' I obtain an
> error
> 
> I have tried also using " WITH NULL AS ' ' " but without good results...
> 
> can someone explain me how to solve this problem???

batch edit your file (with sed, awk, perl, C, Java, ...) and build
explicit INSERT statements from some version of your file.
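
For example, a small awk sketch (filename.txt is the file from the original
mail, fixed.txt is a made-up output name; it assumes the problem is an empty
or missing third field and uses \N, the default NULL string of COPY's text
format):

awk -F'\t' 'BEGIN { OFS = "\t" } { if ($3 == "") $3 = "\\N"; print }' filename.txt > fixed.txt   # fixed.txt is a made-up name

Then COPY from fixed.txt instead.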

> 
> thank you very much
> 
> Ivan
>  
> 
> 
> 
> 

-- 
-Achilleus




Re: [SQL] COPY tablename FROM and null values

2006-03-22 Thread Luckys

On 3/22/06, Achilleus Mantzios <[EMAIL PROTECTED]> wrote:
> batch edit your file (with sed, awk, perl, C, Java, ...) and build
> explicit INSERT statements from some version of your file.

or replace the empty spaces in the file with some other value (0.0)


Re: [SQL] COPY tablename FROM and null values

2006-03-22 Thread JSP
yup... Replace the empty spaces in the file with some other value... 

replace null values with \N

then try COPYing again

On 3/22/06, Luckys <[EMAIL PROTECTED]> wrote:
> or replace the empty spaces in the file with some other value (0.0)
-- "The trouble with the rat race is that even if you win, you're still a rat." --- Lily Tomlin


Re: [SQL] Using a parameter in Interval

2006-03-22 Thread Davidson, Robert
That worked perfectly - thanks!


CREATE OR REPLACE FUNCTION Testing(TrailingWeeks int) RETURNS date AS $$
BEGIN
RETURN current_date - (TrailingWeeks || ' weeks')::INTERVAL;
END;
$$ LANGUAGE plpgsql;

select * from testing(1);

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Owen Jacobson
Sent: Tuesday, March 21, 2006 4:58 PM
To: [email protected]
Subject: Re: [SQL] Using a parameter in Interval

Here's one I used to convert an int to an interval in another project:

CREATE OR REPLACE FUNCTION to_interval (sec INTEGER) RETURNS INTERVAL AS $$
BEGIN
  RETURN (sec || ' seconds')::INTERVAL;
END;
$$ LANGUAGE 'plpgsql' IMMUTABLE STRICT;

select to_interval (5);
 to_interval
-
 00:00:05
(1 row)

You should be able to replace ' seconds' with ' weeks' just fine.

Excuse the outlook-ism,
-Owen
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Davidson, Robert
Sent: Tuesday, March 21, 2006 4:53 PM
To: [email protected]
Subject: [SQL] Using a parameter in Interval


No matter how I try to concatenate, I can't seem to get a parameter to be used 
by INTERVAL in a function:
CREATE OR REPLACE FUNCTION Testing(TrailingWeeks int) RETURNS date AS $$
BEGIN
RETURN current_date - INTERVAL (CAST(TrailingWeeks AS varchar) || ' 
weeks');
END;
$$ LANGUAGE plpgsql;
--select * from testing(1);
ERROR:  syntax error at or near "CAST" at character 34
QUERY:  SELECT  current_date - INTERVAL (CAST( $1  AS varchar) || ' weeks')
CONTEXT:  SQL statement in PL/PgSQL function "testing" near line 2
I have tried concatenating it as a declared variable (with and without 
apostrophes)
1 weeks
And 
'1 weeks'
With no success. Any tips?
Many thanks,
Robert



Re: [SQL] COPY tablename FROM and null values

2006-03-22 Thread Tom Lane
ivan marchesini <[EMAIL PROTECTED]> writes:
> I have a .txt file, containing some tabular data, where data are
> delimited by TABs.
> there are 3 columns:
> column1 int4, column2 float8, column3 float8
> the problem is that column3 contains also null values (i.e. sometimes is
> empty)

This should work, unless the file is set up so that the second TAB is
missing when column3 is "empty".  If so, you'll need to fix that.
COPY is going to complain if there aren't exactly two TABs on every
line.
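
For reference, a minimal sketch of a COPY call that then loads the empty
third column as NULL (the server-side path is made up; this assumes both
TAB delimiters are present on every line):

COPY tablename FROM '/path/to/filename.txt' WITH NULL AS '';  -- '' (empty string) now marks NULL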

regards, tom lane



[SQL] How to optimize this query?

2006-03-22 Thread Maciej Piekielniak
Hello pgsql-sql,

  I have PostgreSQL 8.1.3 and a database of about 2.7 GB (90% large
  objects).

  When I execute this query, PostgreSQL takes 2 min 50 sec. How
  can I optimize this query?
  
select towar.id_towar,towar.key2,towar.nazwa,0 as min,0 as

max,towar.ilosc_jed,towar.ilosc_nom,towar.ilosc_paczkowa,dostawcy.id_dostawcy,jednostka_miary.jednostka,0.0
AS ilosc_magazyn,(select 
sum(zlecenia_towar.ilosc*zlecenia_elementy.ilosc)
from
zlecenia_towar,zlecenia_elementy,zlecenia
where
zlecenia_towar.id_towar=towar.id_towar and 
zlecenia_towar.id_zlecenia_elementy=zlecenia_elementy.id_zlecenia_elementy
and zlecenia_elementy.id_zlecenia=zlecenia.id_zlecenia
and (zlecenia.id_paczka in (52,50,53))) as z_zamowien,towar.key1
from (towar LEFT JOIN dostawcy ON (towar.id_dostawcy = 
dostawcy.id_dostawcy) LEFT JOIN jednostka_miary ON
(towar.id_jednostka_miary = jednostka_miary.id_jednostka_miary)) where 
towar.policzalne=True group by

towar.id_towar,towar.key2,towar.nazwa,towar.min1,towar.max1,towar.ilosc_jed,towar.ilosc_nom,dostawcy.id_dostawcy
, jednostka_miary.jednostka,towar.ilosc_paczkowa,towar.key1 order by 
id_dostawcy;  

-- 
Best regards,
 Maciej  mailto:[EMAIL PROTECTED]




Re: [SQL] How to optimize this query?

2006-03-22 Thread Markus Bertheau
Send an EXPLAIN ANALYZE of the query along with the description of the
involved tables. Also hardware information (RAM, disks, CPU), what
other applications are running on that box and the parameter values in
postgresql.conf that you changed from the defaults would be
interesting.

Markus

2006/3/22, Maciej Piekielniak <[EMAIL PROTECTED]>:
> Hello pgsql-sql,
>
>   I have postgresql 8.1.3 and database with about 2,7GB (90% large
>   objects).
>
>   When I execute this query postgresql calculate this 2min 50sec. How
>   can I optimize this query?
>
> select towar.id_towar,towar.key2,towar.nazwa,0 as min,0 as
> 
> max,towar.ilosc_jed,towar.ilosc_nom,towar.ilosc_paczkowa,dostawcy.id_dostawcy,jednostka_miary.jednostka,0.0
> AS ilosc_magazyn,(select 
> sum(zlecenia_towar.ilosc*zlecenia_elementy.ilosc)
> from
> zlecenia_towar,zlecenia_elementy,zlecenia
> where
> zlecenia_towar.id_towar=towar.id_towar and 
> zlecenia_towar.id_zlecenia_elementy=zlecenia_elementy.id_zlecenia_elementy
> and zlecenia_elementy.id_zlecenia=zlecenia.id_zlecenia
> and (zlecenia.id_paczka in (52,50,53))) as z_zamowien,towar.key1
> from (towar LEFT JOIN dostawcy ON (towar.id_dostawcy = 
> dostawcy.id_dostawcy) LEFT JOIN jednostka_miary ON
> (towar.id_jednostka_miary = jednostka_miary.id_jednostka_miary)) 
> where towar.policzalne=True group by
> 
> towar.id_towar,towar.key2,towar.nazwa,towar.min1,towar.max1,towar.ilosc_jed,towar.ilosc_nom,dostawcy.id_dostawcy
> , jednostka_miary.jednostka,towar.ilosc_paczkowa,towar.key1 order by 
> id_dostawcy;
>
> --
> Best regards,
>  Maciej  mailto:[EMAIL PROTECTED]
>
>
> ---(end of broadcast)---
> TIP 4: Have you searched our list archives?
>
>http://archives.postgresql.org
>



Re: [SQL] How to optimize this query?

2006-03-22 Thread Maciej Piekielniak
Hello Markus,

Wednesday, March 22, 2006, 6:58:44 PM, you wrote:

MB> Send an EXPLAIN ANALYZE of the query along with the description of the
MB> involved tables. Also hardware information (RAM, disks, CPU), what
MB> other applications are running on that box and the parameter values in
MB> postgresql.conf that you changed from the defaults would be
MB> interesting.


 Sort  (cost=21413847.71..21413867.37 rows=7864 width=107)
   Sort Key: dostawcy.id_dostawcy
   ->  Group  (cost=1360.03..21413073.50 rows=7864 width=107)
 ->  Sort  (cost=1360.03..1379.69 rows=7864 width=107)
   Sort Key: towar.id_towar, towar.key2, towar.nazwa, towar.min1, 
towar.max1, towar.ilosc_jed, towar.ilosc_nom, dostawcy.id_dostawcy, 
jednostka_miary.jednostka, towar.ilosc_paczkowa, towar.key1
   ->  Hash Left Join  (cost=2.21..585.81 rows=7864 width=107)
 Hash Cond: ("outer".id_jednostka_miary = 
"inner".id_jednostka_miary)
 ->  Hash Left Join  (cost=1.14..466.78 rows=7864 width=103)
   Hash Cond: ("outer".id_dostawcy = 
"inner".id_dostawcy)
   ->  Seq Scan on towar  (cost=0.00..347.68 rows=7864 
width=103)
 Filter: policzalne
   ->  Hash  (cost=1.11..1.11 rows=11 width=4)
 ->  Seq Scan on dostawcy  (cost=0.00..1.11 
rows=11 width=4)
 ->  Hash  (cost=1.06..1.06 rows=6 width=12)
   ->  Seq Scan on jednostka_miary  (cost=0.00..1.06 
rows=6 width=12)
 SubPlan
   ->  Aggregate  (cost=2722.71..2722.72 rows=1 width=14)
 ->  Nested Loop  (cost=64.33..2722.28 rows=171 width=14)
   ->  Hash Join  (cost=64.33..602.79 rows=368 width=12)
 Hash Cond: ("outer".id_zlecenia = 
"inner".id_zlecenia)
 ->  Seq Scan on zlecenia_elementy  
(cost=0.00..488.85 rows=9185 width=20)
 ->  Hash  (cost=63.98..63.98 rows=140 width=8)
   ->  Bitmap Heap Scan on zlecenia  
(cost=6.50..63.98 rows=140 width=8)
 Recheck Cond: ((id_paczka = 52) OR 
(id_paczka = 50) OR (id_paczka = 53))
 ->  BitmapOr  (cost=6.50..6.50 
rows=142 width=0)
   ->  Bitmap Index Scan on 
zlecenia_id_paczka  (cost=0.00..2.17 rows=47 width=0)
 Index Cond: (id_paczka = 
52)
   ->  Bitmap Index Scan on 
zlecenia_id_paczka  (cost=0.00..2.17 rows=47 width=0)
 Index Cond: (id_paczka = 
50)
   ->  Bitmap Index Scan on 
zlecenia_id_paczka  (cost=0.00..2.17 rows=47 width=0)
 Index Cond: (id_paczka = 
53)
   ->  Index Scan using zlezenia_towar_elementy_towar on 
zlecenia_towar  (cost=0.00..5.75 rows=1 width=18)
 Index Cond: ((zlecenia_towar.id_zlecenia_elementy 
= "outer".id_zlecenia_elementy) AND (zlecenia_towar.id_towar = $0))
(33 rows)

-- 
Best regards,
 Maciej  mailto:[EMAIL PROTECTED]




Re: [SQL] How to optimize this query?

2006-03-22 Thread Markus Bertheau
That's an explain. We need explain analyze.

2006/3/23, Maciej Piekielniak <[EMAIL PROTECTED]>:
> Hello Markus,
>
> Wednesday, March 22, 2006, 6:58:44 PM, you wrote:
>
> MB> Send an EXPLAIN ANALYZE of the query along with the description of the
> MB> involved tables. Also hardware information (RAM, disks, CPU), what
> MB> other applications are running on that box and the parameter values in
> MB> postgresql.conf that you changed from the defaults would be
> MB> interesting.
>
>
>  Sort  (cost=21413847.71..21413867.37 rows=7864 width=107)
>Sort Key: dostawcy.id_dostawcy
>->  Group  (cost=1360.03..21413073.50 rows=7864 width=107)
>  ->  Sort  (cost=1360.03..1379.69 rows=7864 width=107)
>Sort Key: towar.id_towar, towar.key2, towar.nazwa, towar.min1, 
> towar.max1, towar.ilosc_jed, towar.ilosc_nom, dostawcy.id_dostawcy, 
> jednostka_miary.jednostka, towar.ilosc_paczkowa, towar.key1
>->  Hash Left Join  (cost=2.21..585.81 rows=7864 width=107)
>  Hash Cond: ("outer".id_jednostka_miary = 
> "inner".id_jednostka_miary)
>  ->  Hash Left Join  (cost=1.14..466.78 rows=7864 
> width=103)
>Hash Cond: ("outer".id_dostawcy = 
> "inner".id_dostawcy)
>->  Seq Scan on towar  (cost=0.00..347.68 
> rows=7864 width=103)
>  Filter: policzalne
>->  Hash  (cost=1.11..1.11 rows=11 width=4)
>  ->  Seq Scan on dostawcy  (cost=0.00..1.11 
> rows=11 width=4)
>  ->  Hash  (cost=1.06..1.06 rows=6 width=12)
>->  Seq Scan on jednostka_miary  (cost=0.00..1.06 
> rows=6 width=12)
>  SubPlan
>->  Aggregate  (cost=2722.71..2722.72 rows=1 width=14)
>  ->  Nested Loop  (cost=64.33..2722.28 rows=171 width=14)
>->  Hash Join  (cost=64.33..602.79 rows=368 width=12)
>  Hash Cond: ("outer".id_zlecenia = 
> "inner".id_zlecenia)
>  ->  Seq Scan on zlecenia_elementy  
> (cost=0.00..488.85 rows=9185 width=20)
>  ->  Hash  (cost=63.98..63.98 rows=140 width=8)
>->  Bitmap Heap Scan on zlecenia  
> (cost=6.50..63.98 rows=140 width=8)
>  Recheck Cond: ((id_paczka = 52) OR 
> (id_paczka = 50) OR (id_paczka = 53))
>  ->  BitmapOr  (cost=6.50..6.50 
> rows=142 width=0)
>->  Bitmap Index Scan on 
> zlecenia_id_paczka  (cost=0.00..2.17 rows=47 width=0)
>  Index Cond: (id_paczka = 
> 52)
>->  Bitmap Index Scan on 
> zlecenia_id_paczka  (cost=0.00..2.17 rows=47 width=0)
>  Index Cond: (id_paczka = 
> 50)
>->  Bitmap Index Scan on 
> zlecenia_id_paczka  (cost=0.00..2.17 rows=47 width=0)
>  Index Cond: (id_paczka = 
> 53)
>->  Index Scan using zlezenia_towar_elementy_towar on 
> zlecenia_towar  (cost=0.00..5.75 rows=1 width=18)
>  Index Cond: 
> ((zlecenia_towar.id_zlecenia_elementy = "outer".id_zlecenia_elementy) AND 
> (zlecenia_towar.id_towar = $0))
> (33 rows)
>
> --
> Best regards,
>  Maciej  mailto:[EMAIL PROTECTED]
>
>
> ---(end of broadcast)---
> TIP 3: Have you checked our extensive FAQ?
>
>http://www.postgresql.org/docs/faq
>



Re: [SQL] How to optimize this query?

2006-03-22 Thread Markus Bertheau
In the meantime, try this:

SELECT
towar.id_towar,
towar.key2,
towar.nazwa,
0 AS min,
0 AS max,
towar.ilosc_jed,
towar.ilosc_nom,
towar.ilosc_paczkowa,
dostawcy.id_dostawcy,
jednostka_miary.jednostka,
0.0 AS ilosc_magazyn,
foo.z_zamowien,
towar.key1
FROM
towar
JOIN (
SELECT
zlecenia_towar.id_towar,
SUM(zlecenia_towar.ilosc * zlecenia_elementy.ilosc) as 
z_zamowien
FROM
zlecenia_towar,
zlecenia_elementy,
zlecenia
WHERE
zlecenia_towar.id_zlecenia_elementy = 
zlecenia_elementy.id_zlecenia_elementy
AND zlecenia_elementy.id_zlecenia = zlecenia.id_zlecenia
AND zlecenia.id_paczka IN (52,50,53)
GROUP BY zlecenia_towar.id_towar
) AS foo ON (foo.id_towar = towar.id_towar)
LEFT JOIN dostawcy
ON (towar.id_dostawcy = dostawcy.id_dostawcy)
LEFT JOIN jednostka_miary
ON (towar.id_jednostka_miary = 
jednostka_miary.id_jednostka_miary)
WHERE
towar.policzalne = True
GROUP BY
towar.id_towar,
towar.key2,
towar.nazwa,
towar.min1,
towar.max1,
towar.ilosc_jed,
towar.ilosc_nom,
dostawcy.id_dostawcy,
jednostka_miary.jednostka,
towar.ilosc_paczkowa,
towar.key1
ORDER BY
id_dostawcy;

I basically pulled the subselect from the field list into the from list.

2006/3/23, Markus Bertheau <[EMAIL PROTECTED]>:
> That's an explain. We need explain analyze.
>
> 2006/3/23, Maciej Piekielniak <[EMAIL PROTECTED]>:
> > Hello Markus,
> >
> > Wednesday, March 22, 2006, 6:58:44 PM, you wrote:
> >
> > MB> Send an EXPLAIN ANALYZE of the query along with the description of the
> > MB> involved tables. Also hardware information (RAM, disks, CPU), what
> > MB> other applications are running on that box and the parameter values in
> > MB> postgresql.conf that you changed from the defaults would be
> > MB> interesting.
> >
> >
> >  Sort  (cost=21413847.71..21413867.37 rows=7864 width=107)
> >Sort Key: dostawcy.id_dostawcy
> >->  Group  (cost=1360.03..21413073.50 rows=7864 width=107)
> >  ->  Sort  (cost=1360.03..1379.69 rows=7864 width=107)
> >Sort Key: towar.id_towar, towar.key2, towar.nazwa, 
> > towar.min1, towar.max1, towar.ilosc_jed, towar.ilosc_nom, 
> > dostawcy.id_dostawcy, jednostka_miary.jednostka, towar.ilosc_paczkowa, 
> > towar.key1
> >->  Hash Left Join  (cost=2.21..585.81 rows=7864 width=107)
> >  Hash Cond: ("outer".id_jednostka_miary = 
> > "inner".id_jednostka_miary)
> >  ->  Hash Left Join  (cost=1.14..466.78 rows=7864 
> > width=103)
> >Hash Cond: ("outer".id_dostawcy = 
> > "inner".id_dostawcy)
> >->  Seq Scan on towar  (cost=0.00..347.68 
> > rows=7864 width=103)
> >  Filter: policzalne
> >->  Hash  (cost=1.11..1.11 rows=11 width=4)
> >  ->  Seq Scan on dostawcy  (cost=0.00..1.11 
> > rows=11 width=4)
> >  ->  Hash  (cost=1.06..1.06 rows=6 width=12)
> >->  Seq Scan on jednostka_miary  
> > (cost=0.00..1.06 rows=6 width=12)
> >  SubPlan
> >->  Aggregate  (cost=2722.71..2722.72 rows=1 width=14)
> >  ->  Nested Loop  (cost=64.33..2722.28 rows=171 width=14)
> >->  Hash Join  (cost=64.33..602.79 rows=368 width=12)
> >  Hash Cond: ("outer".id_zlecenia = 
> > "inner".id_zlecenia)
> >  ->  Seq Scan on zlecenia_elementy  
> > (cost=0.00..488.85 rows=9185 width=20)
> >  ->  Hash  (cost=63.98..63.98 rows=140 width=8)
> >->  Bitmap Heap Scan on zlecenia  
> > (cost=6.50..63.98 rows=140 width=8)
> >  Recheck Cond: ((id_paczka = 52) OR 
> > (id_paczka = 50) OR (id_paczka = 53))
> >  ->  BitmapOr  (cost=6.50..6.50 
> > rows=142 width=0)
> >->  Bitmap Index Scan on 
> > zlecenia_id_paczka  (cost=0.00..2.17 rows=47 width=0)
> >  Index Cond: (id_paczka 
> > = 52)
> >->  Bitmap Index Scan on 
> > zlecenia_id_paczka  (cost=0.00..2.17 rows=47 width=0)
> >  Index Cond: (id_paczka 
> > = 50)
> >->  Bitmap Index Scan on 
> > zlecenia_id_paczka  (cost=0.00..2.17 rows=4

Re: [SQL] How to optimize this query?

2006-03-22 Thread Maciej Piekielniak
Hello Markus,

Wednesday, March 22, 2006, 7:32:11 PM, you wrote:

MB> foo.z_zamowien,

MB> ) AS foo ON (foo.id_towar = towar.id_towar)
foo?

-- 
Best regards,
 Maciej  mailto:[EMAIL PROTECTED]




Re: [SQL] How to optimize this query?

2006-03-22 Thread Maciej Piekielniak
Hello Markus,

ERROR:  column "foo.z_zamowien" must appear in the GROUP BY clause or be used 
in an aggregate function

-- 
Best regards,
 Maciej  mailto:[EMAIL PROTECTED]




Re: [SQL] How to optimize this query?

2006-03-22 Thread Maciej Piekielniak
Hello Markus,

Sorry, I tried this:
SELECT
towar.id_towar,
towar.key2,
towar.nazwa,
0 AS min,
0 AS max,
towar.ilosc_jed,
towar.ilosc_nom,
towar.ilosc_paczkowa,
dostawcy.id_dostawcy,
jednostka_miary.jednostka,
0.0 AS ilosc_magazyn,
foo.z_zamowien,
towar.key1
FROM
towar
JOIN (
SELECT
zlecenia_towar.id_towar,
SUM(zlecenia_towar.ilosc * zlecenia_elementy.ilosc) as 
z_zamowien
FROM
zlecenia_towar,
zlecenia_elementy,
zlecenia
WHERE
zlecenia_towar.id_zlecenia_elementy = 
zlecenia_elementy.id_zlecenia_elementy
AND zlecenia_elementy.id_zlecenia = zlecenia.id_zlecenia
AND zlecenia.id_paczka IN (52,50,53)
GROUP BY zlecenia_towar.id_towar
) AS foo ON (foo.id_towar = towar.id_towar)
LEFT JOIN dostawcy
ON (towar.id_dostawcy = dostawcy.id_dostawcy)
LEFT JOIN jednostka_miary
ON (towar.id_jednostka_miary = 
jednostka_miary.id_jednostka_miary)
WHERE
towar.policzalne = True
GROUP BY
towar.id_towar,
towar.key2,
towar.nazwa,
towar.min1,
towar.max1,
towar.ilosc_jed,
towar.ilosc_nom,
dostawcy.id_dostawcy,
jednostka_miary.jednostka,
towar.ilosc_paczkowa,
towar.key1,
foo.z_zamowien
ORDER BY
id_dostawcy;

-- 
Best regards,
 Maciej  mailto:[EMAIL PROTECTED]




Re: [SQL] How to optimize this query?

2006-03-22 Thread Maciej Piekielniak
Hello Markus,

The original query returns 7881 rows, your query only 729 rows.

-- 
Best regards,
 Maciej  mailto:[EMAIL PROTECTED]




Re: [SQL] How to optimize this query?

2006-03-22 Thread Jeffrey Melloy

Maciej Piekielniak wrote:


Hello Markus,

The original query returns 7881 rows, your query only 729 rows.

 


But it's faster!
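
The likely reason for the different row counts is the plain JOIN against the
aggregated subquery, which drops every towar row that has no matching order
at all. A sketch of the usual adjustment (this is not from the thread): make
it a LEFT JOIN and wrap the sum in COALESCE, keeping the rest of Markus's
query unchanged:

-- in the select list:
COALESCE(foo.z_zamowien, 0) AS z_zamowien,
-- and in the FROM clause:
LEFT JOIN (
        SELECT
                zlecenia_towar.id_towar,
                SUM(zlecenia_towar.ilosc * zlecenia_elementy.ilosc) AS z_zamowien
        FROM zlecenia_towar, zlecenia_elementy, zlecenia
        WHERE zlecenia_towar.id_zlecenia_elementy = zlecenia_elementy.id_zlecenia_elementy
          AND zlecenia_elementy.id_zlecenia = zlecenia.id_zlecenia
          AND zlecenia.id_paczka IN (52, 50, 53)
        GROUP BY zlecenia_towar.id_towar
) AS foo ON (foo.id_towar = towar.id_towar)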



Re: [SQL] How to optimize this query?

2006-03-22 Thread Maciej Piekielniak
Hello Markus,

Wednesday, March 22, 2006, 8:12:35 PM, you wrote:

MB> Well, send the table descriptions, please.

 \d towar
 max3   | smallint   | default 0
 max4   | smallint   | default 0
 typik  | character varying(1)   | default '_'::character varying
 id_grupa_rabatowa  | integer| not null default 0
 id_jednostka_miary | integer| not null default 0
 id_vat | integer| not null default 0
 id_typ_towaru  | integer| not null default 0
 id_dostawcy| integer| not null default 0
 grupa_produkcji| smallint   |
 dodatek| boolean| not null default false
 policzalne | boolean| not null default true
 simport| character varying(50)  |
 czy_procent| boolean| not null default false
 subtyp | character varying(35)  |
 kontofk| character varying(40)  |
 typks  | character varying(30)  |
 nazwarodzaju   | character varying(50)  |
 nazwakatalogu  | character varying(250) |
 waluta | character varying(3)   | not null default 
'PLN'::character varying
 bank   | character varying(5)   | not null default 
'NBP'::character varying
 procent_do_wyceny  | smallint   | not null default 0
 waga   | numeric(24,4)  | not null default 0
 cena_z | numeric(24,4)  | not null default 0
 ilosc_paczkowa | numeric(24,4)  | not null default 0
 ilosc_jed  | numeric(24,4)  | not null default 1
 ilosc_nom  | numeric(24,4)  | not null default 1
 odpad  | numeric(24,4)  | not null default 0
 cena_jedn  | numeric(24,4)  | not null default 0
 roboczojednostka   | numeric(24,4)  | not null default 0
Indexes:
"towar_pkey" PRIMARY KEY, btree (id_towar)
"towar_key1" btree (key1)
"towar_key2" btree (key2)
Foreign-key constraints:
"$1" FOREIGN KEY (id_grupa_rabatowa) REFERENCES 
grupa_rabatowa(id_grupa_rabatowa) ON UPDATE CASCADE ON DELETE SET NULL
"$2" FOREIGN KEY (id_jednostka_miary) REFERENCES 
jednostka_miary(id_jednostka_miary) ON UPDATE CASCADE ON DELETE SET NULL
"$3" FOREIGN KEY (id_vat) REFERENCES vat(id_vat) ON UPDATE CASCADE ON 
DELETE SET NULL
"$4" FOREIGN KEY (id_typ_towaru) REFERENCES typ_towaru(id_typ_towaru) ON 
UPDATE CASCADE ON DELETE SET NULL
"$5" FOREIGN KEY (id_dostawcy) REFERENCES dostawcy(id_dostawcy) ON UPDATE 
CASCADE ON DELETE SET NULL
Triggers:
towar_domyslne BEFORE INSERT ON towar FOR EACH ROW EXECUTE PROCEDURE 
domyslne_ustawienia()
zmiana_wagi AFTER UPDATE ON towar FOR EACH ROW EXECUTE PROCEDURE 
waga_przelicz()



-- 
Best regards,
 Maciej  mailto:[EMAIL PROTECTED]




Re: [SQL] How to optimize this query?

2006-03-22 Thread Maciej Piekielniak
Hello Markus,

Wednesday, March 22, 2006, 8:35:33 PM, you wrote:

MB>Send an EXPLAIN ANALYZE of the query along with the description of the
MB>involved tables. Also hardware information (RAM, disks, CPU), what
MB>other applications are running on that box and the parameter values in
MB>postgresql.conf that you changed from the defaults would be
MB>interesting.

Celeron 1200 Tualatin 256kb cache
HD 200GB 7200
512 MB SDRAM
Postgresql 8.1.3 on debian sarge with standard settings
No other running applications.



EXPLAIN ANALYZE


"Sort  (cost=21413847.71..21413867.37 rows=7864 width=107) (actual 
time=615902.463..615933.049 rows=7881 loops=1)"
"  Sort Key: dostawcy.id_dostawcy"
"  ->  Group  (cost=1360.03..21413073.50 rows=7864 width=107) (actual 
time=473.511..615628.474 rows=7881 loops=1)"
"->  Sort  (cost=1360.03..1379.69 rows=7864 width=107) (actual 
time=324.260..407.732 rows=7881 loops=1)"
"  Sort Key: towar.id_towar, towar.key2, towar.nazwa, towar.min1, 
towar.max1, towar.ilosc_jed, towar.ilosc_nom, dostawcy.id_dostawcy, 
jednostka_miary.jednostka, towar.ilosc_paczkowa, towar.key1"
"  ->  Hash Left Join  (cost=2.21..585.81 rows=7864 width=107) 
(actual time=0.607..178.794 rows=7881 loops=1)"
"Hash Cond: ("outer".id_jednostka_miary = 
"inner".id_jednostka_miary)"
"->  Hash Left Join  (cost=1.14..466.78 rows=7864 
width=103) (actual time=0.397..121.835 rows=7881 loops=1)"
"  Hash Cond: ("outer".id_dostawcy = 
"inner".id_dostawcy)"
"  ->  Seq Scan on towar  (cost=0.00..347.68 rows=7864 
width=103) (actual time=0.160..60.079 rows=7881 loops=1)"
"Filter: policzalne"
"  ->  Hash  (cost=1.11..1.11 rows=11 width=4) (actual 
time=0.185..0.185 rows=11 loops=1)"
"->  Seq Scan on dostawcy  (cost=0.00..1.11 
rows=11 width=4) (actual time=0.085..0.126 rows=11 loops=1)"
"->  Hash  (cost=1.06..1.06 rows=6 width=12) (actual 
time=0.173..0.173 rows=6 loops=1)"
"  ->  Seq Scan on jednostka_miary  (cost=0.00..1.06 
rows=6 width=12) (actual time=0.117..0.140 rows=6 loops=1)"
"SubPlan"
"  ->  Aggregate  (cost=2722.71..2722.72 rows=1 width=14) (actual 
time=78.006..78.010 rows=1 loops=7881)"
"->  Nested Loop  (cost=64.33..2722.28 rows=171 width=14) 
(actual time=73.991..77.930 rows=6 loops=7881)"
"  ->  Hash Join  (cost=64.33..602.79 rows=368 width=12) 
(actual time=3.098..64.518 rows=627 loops=7881)"
"Hash Cond: ("outer".id_zlecenia = 
"inner".id_zlecenia)"
"->  Seq Scan on zlecenia_elementy  
(cost=0.00..488.85 rows=9185 width=20) (actual time=0.009..32.216 rows=9185 
loops=7881)"
"->  Hash  (cost=63.98..63.98 rows=140 width=8) 
(actual time=4.849..4.849 rows=195 loops=1)"
"  ->  Bitmap Heap Scan on zlecenia  
(cost=6.50..63.98 rows=140 width=8) (actual time=0.721..3.772 rows=195 loops=1)"
"Recheck Cond: ((id_paczka = 52) OR 
(id_paczka = 50) OR (id_paczka = 53))"
"->  BitmapOr  (cost=6.50..6.50 
rows=142 width=0) (actual time=0.549..0.549 rows=0 loops=1)"
"  ->  Bitmap Index Scan on 
zlecenia_id_paczka  (cost=0.00..2.17 rows=47 width=0) (actual time=0.427..0.427 
rows=73 loops=1)"
"Index Cond: (id_paczka = 
52)"
"  ->  Bitmap Index Scan on 
zlecenia_id_paczka  (cost=0.00..2.17 rows=47 width=0) (actual time=0.059..0.059 
rows=49 loops=1)"
"Index Cond: (id_paczka = 
50)"
"  ->  Bitmap Index Scan on 
zlecenia_id_paczka  (cost=0.00..2.17 rows=47 width=0) (actual time=0.039..0.039 
rows=73 loops=1)"
"Index Cond: (id_paczka = 
53)"
"  ->  Index Scan using zlezenia_towar_elementy_towar on 
zlecenia_towar  (cost=0.00..5.75 rows=1 width=18) (actual time=0.015..0.015 
rows=0 loops=4941387)"
"Index Cond: ((zlecenia_towar.id_zlecenia_elementy 
= "outer".id_zlecenia_elementy) AND (zlecenia_towar.id_towar = $0))"
"Total runtime: 615962.759 ms"




\d towar
 max3   | smallint   | default 0
 max4   | smallint   | default 0
 typik  | character varying(1)   | default '_'::character varying
 id_grupa_rabatowa  | integer| not null default 0
 id_jednostka_miary | integer| not null default 0
 id_vat | integer| not null default 0
 id_typ_towaru  | integer| not null default 0
 id_dos

[SQL] Function Parameters in GROUP BY clause cause errors

2006-03-22 Thread Davidson, Robert
When I use a parameter in a query which aggregates it fails with a GROUP BY error. What syntax can I use to avoid this error?

CREATE TABLE test (email_creation_datetime timestamp);

INSERT INTO test VALUES ('2006-03-20 09:00');

INSERT INTO test VALUES ('2006-03-20 09:15');

INSERT INTO test VALUES ('2006-03-20 09:30');

INSERT INTO test VALUES ('2006-03-20 09:45');

Query without parameters works fine:

select to_char(to_timestamp(EXTRACT(HOUR FROM em.email_creation_datetime) || ':' || (EXTRACT(MINUTE FROM em.email_creation_datetime)::integer/30) * 30, 'HH24:MI'), 'HH24:MI') as TheInterval

from test em

group by  to_char(to_timestamp(EXTRACT(HOUR FROM em.email_creation_datetime) || ':' || (EXTRACT(MINUTE FROM em.email_creation_datetime)::integer/30) * 30, 'HH24:MI') , 'HH24:MI')

theinterval

09:30

09:00

But the same query with a parameter returns a GROUP BY error:

CREATE OR REPLACE FUNCTION EmailByInterval(IntervalMinutes int) RETURNS SETOF test AS $$

DECLARE rec RECORD;

BEGIN

FOR rec IN 

    select to_char(to_timestamp(EXTRACT(HOUR FROM em.email_creation_datetime) || ':' || (EXTRACT(MINUTE FROM em.email_creation_datetime)::integer/IntervalMinutes) * IntervalMinutes, 'HH24:MI'), 'HH24:MI') as TheInterval

    from test em

    group by  to_char(to_timestamp(EXTRACT(HOUR FROM em.email_creation_datetime) || ':' || (EXTRACT(MINUTE FROM em.email_creation_datetime)::integer/IntervalMinutes) * IntervalMinutes, 'HH24:MI') , 'HH24:MI')

LOOP

    RETURN NEXT rec;

END LOOP;

RETURN;

END;

$$ LANGUAGE plpgsql;

Query returned successfully with no result in 70 ms.

select * from emailbyinterval(30);

ERROR:  column "em.email_creation_datetime" must appear in the GROUP BY clause or be used in an aggregate function

CONTEXT:  SQL statement " select to_char(to_timestamp(EXTRACT(HOUR FROM em.email_creation_datetime) || ':' || (EXTRACT(MINUTE FROM em.email_creation_datetime)::integer/ $1 ) *  $2 , 'HH24:MI'), 'HH24:MI') as TheInterval from test em group by to_char(to_timestamp(EXTRACT(HOUR FROM em.email_creation_datetime) || ':' || (EXTRACT(MINUTE FROM em.email_creation_datetime)::integer/ $3 ) *  $4 , 'HH24:MI') , 'HH24:MI')"

PL/pgSQL function "emailbyinterval" line 3 at for over select rows




[SQL] Custom type

2006-03-22 Thread Daniel Caune
Hi,

How can I enter a description for my custom types?

\dT provides information such as schema, name, and description for all
the registered types and custom types.  I would like to provide a
description for each custom type I create.

Thanks,


--
Daniel CAUNE
Ubisoft Online Technology
(514) 4090 2040 ext. 5418




Re: [SQL] Custom type

2006-03-22 Thread Terry Lee Tucker

On Wednesday 22 March 2006 03:25 pm, Daniel Caune saith:
> Hi,
>
> How can I enter description for my custom types?
>
> \dT provides information such as schema, name, and description for all
> the registered types and custom types.  I would like to provide a
> description for each custom type I create.
>
> Thanks,
>
>
> --
> Daniel CAUNE
> Ubisoft Online Technology
> (514) 4090 2040 ext. 5418
>

Daniel,

From the \h command in psql:

rnd=# \h comment
Command: COMMENT
Description: define or change the comment of an object
Syntax:
COMMENT ON
{
  TABLE object_name |
  COLUMN table_name.column_name |
  AGGREGATE agg_name (agg_type) |
  CONSTRAINT constraint_name ON table_name |
  DATABASE object_name |
  DOMAIN object_name |
  FUNCTION func_name (arg1_type, arg2_type, ...) |
  INDEX object_name |
  OPERATOR op (leftoperand_type, rightoperand_type) |
  RULE rule_name ON table_name |
  SCHEMA object_name |
  SEQUENCE object_name |
  TRIGGER trigger_name ON table_name |
  TYPE object_name |
  VIEW object_name
} IS 'text'

I believe this is what you need.
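
For a custom type specifically, that boils down to something like
(the type name below is made up):

COMMENT ON TYPE my_custom_type IS 'something that might be useful';  -- my_custom_type is a made-up name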

HTH.



Re: [SQL] Custom type

2006-03-22 Thread Bryce Nesbitt
Terry Lee Tucker wrote:
> rnd=# \h comment
> Command: COMMENT
> Description: define or change the comment of an object
> ..I believe this is what you need.
>   
Cool!
That's a great feature.  Though it would be even nicer if the comment
showed when you "\d" a table:


stage=# comment on table db_version is 'Managed in nautilus.xml';
stage=# \d db_version

  Table "public.db_version"
 Column  | Type  | Modifiers
-+---+---
 version | character varying(64) |


stage=# \dd db_version

  Object descriptions
 Schema |Name| Object |   Description
+++-
 public | db_version | table  | Managed in nautilus.xml




Re: [SQL] Custom type

2006-03-22 Thread Daniel Caune


> > Hi,
> >
> > How can I enter description for my custom types?
> >
> > \dT provides information such as schema, name, and description for
all
> > the registered types and custom types.  I would like to provide a
> > description for each custom type I create.
> >
> > Thanks,
> >
> >
> > --
> > Daniel CAUNE
> > Ubisoft Online Technology
> > (514) 4090 2040 ext. 5418
> >
> 
> Daniel,
> 
> From the \h command in psql:
> 
> rnd=# \h comment
> Command: COMMENT
> Description: define or change the comment of an object
> Syntax:
> COMMENT ON
> {
>   TABLE object_name |
>   COLUMN table_name.column_name |
>   AGGREGATE agg_name (agg_type) |
>   CONSTRAINT constraint_name ON table_name |
>   DATABASE object_name |
>   DOMAIN object_name |
>   FUNCTION func_name (arg1_type, arg2_type, ...) |
>   INDEX object_name |
>   OPERATOR op (leftoperand_type, rightoperand_type) |
>   RULE rule_name ON table_name |
>   SCHEMA object_name |
>   SEQUENCE object_name |
>   TRIGGER trigger_name ON table_name |
>   TYPE object_name |
>   VIEW object_name
> } IS 'text'
> 
> I believe this is what you need.
> 
> HTH.
> 

I see!  I was searching for an option in the custom type creation statement,
something like:

  CREATE TYPE foo (
...
  )
  DESCRIPTION "something that might be useful";

Thanks for this information!

--
Daniel



Re: [SQL] Function Parameters in GROUP BY clause cause errors

2006-03-22 Thread Tom Lane
"Davidson, Robert" <[EMAIL PROTECTED]> writes:
> ERROR:  column "em.email_creation_datetime" must appear in the GROUP BY =
> clause or be used in an aggregate function
> CONTEXT:  SQL statement " select to_char(to_timestamp(EXTRACT(HOUR FROM =
> em.email_creation_datetime) || ':' || (EXTRACT(MINUTE FROM =
> em.email_creation_datetime)::integer/ $1 ) *  $2 , 'HH24:MI'), =
> 'HH24:MI') as TheInterval from test em group by =
> to_char(to_timestamp(EXTRACT(HOUR FROM em.email_creation_datetime) || =
> ':' || (EXTRACT(MINUTE FROM em.email_creation_datetime)::integer/ $3 ) * =
>  $4 , 'HH24:MI') , 'HH24:MI')"
> PL/pgSQL function "emailbyinterval" line 3 at for over select rows

Hmm, this seems like a plpgsql deficiency.  It feels it can generate a
separate parameter symbol ($n) for each occurrence of each variable it
passes into a SQL query.  But for this query to be legal, the two
instances of IntervalMinutes have to be represented by the *same*
parameter symbol (notice they are not in the regurgitated query).

It would be more efficient anyway to not generate multiple parameters
for the same value, so we oughta fix this.

In the short run, the only workaround I can think of for you is to run
the query using EXECUTE.
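
For illustration, one way to spell that out (a sketch only: the return type
is switched to SETOF text so the example stands alone, and IntervalMinutes
is spliced into the query string as a literal):

CREATE OR REPLACE FUNCTION EmailByInterval(IntervalMinutes int) RETURNS SETOF text AS $$
DECLARE rec RECORD;
BEGIN
    -- build the query text at run time, so both uses of IntervalMinutes
    -- are already literals when the statement is planned
    FOR rec IN EXECUTE
        'select to_char(to_timestamp(EXTRACT(HOUR FROM em.email_creation_datetime) || '':'' || '
        || '(EXTRACT(MINUTE FROM em.email_creation_datetime)::integer/' || IntervalMinutes::text
        || ') * ' || IntervalMinutes::text
        || ', ''HH24:MI''), ''HH24:MI'') as TheInterval from test em group by 1'
    LOOP
        RETURN NEXT rec.TheInterval;   -- one row per N-minute bucket
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

Since the query string is assembled before it reaches the planner, the
GROUP BY complaint about the parameter never arises.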

regards, tom lane



[SQL] OUT parameter

2006-03-22 Thread Daniel Caune
Hi,

Is there any suggestion against using OUT parameter for local
calculation such as using a local variable?

CREATE OR REPLACE FUNCTION foo(a IN int,
   b1 OUT int,
   b2 OUT int)
AS $$
BEGIN
  FOR (...) LOOP
b1 = (...);
b2 = (...);
  END LOOP;
END;
$$ LANGUAGE PLPGSQL;

or for some reasons (performance or whatever other details of
implementation), would it be preferable to use local variable and to
initialize the OUT parameters at the end?

CREATE OR REPLACE FUNCTION foo(a IN int,
   b1 OUT int,
   b2 OUT int)
AS $$
DECLARE
  V_b1 int;
  V_b2 int;
BEGIN
  FOR (...) LOOP
V_b1 = (...);
V_b2 = (...);
  END LOOP;

  b1 = V_b1;
  b2 = V_b2;
END;
$$ LANGUAGE PLPGSQL;

Thanks,


--
Daniel CAUNE
Ubisoft Online Technology
(514) 4090 2040 ext. 5418




Re: [SQL] OUT parameter

2006-03-22 Thread Owen Jacobson
Daniel Caune wrote:

> Is there any suggestion against using OUT parameter for local
> calculation such as using a local variable?
> 
> CREATE OR REPLACE FUNCTION foo(a IN int,
>b1 OUT int,
>b2 OUT int)
> AS $$
> BEGIN
>   FOR (...) LOOP
> b1 = (...);
> b2 = (...);
>   END LOOP;
> END;
> $$ LANGUAGE PLPGSQL;

I'd say there's no problem with this, PROVIDED you can ensure you'll never 
abort before completing the computation.  It's not a good idea to modify out 
parameters partway; programmers (myself included) have this nasty habit of 
assuming, rightly or wrongly, that a failed function call won't have destroyed 
the parameters.

If you can't ensure you'll always complete, use locals.

-Owen



Re: [SQL] Function Parameters in GROUP BY clause cause errors

2006-03-22 Thread Christian Paul B. Cosinas
Just put an aggregate function on the fields you selected. Like this:

select to_char(to_timestamp(EXTRACT(HOUR FROM max(em.email_creation_datetime)) || ':' || (EXTRACT(MINUTE FROM max(em.email_creation_datetime))::integer/30) * 30, 'HH24:MI'), 'HH24:MI') as TheInterval
from test em
group by to_char(to_timestamp(EXTRACT(HOUR FROM em.email_creation_datetime) || ':' || (EXTRACT(MINUTE FROM em.email_creation_datetime)::integer/30) * 30, 'HH24:MI'), 'HH24:MI')

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Davidson, Robert
Sent: Wednesday, March 22, 2006 1:12 PM
To: [email protected]
Subject: [SQL] Function Parameters in GROUP BY clause cause errors






I choose Polesoft Lockspam to fight spam, and you?
http://www.polesoft.com/refer.html




Re: [SQL] OUT parameter

2006-03-22 Thread Tom Lane
> Daniel Caune wrote:
>> Is there any suggestion against using OUT parameter for local
>> calculation such as using a local variable?

In plpgsql (at least in the current implementation) an OUT parameter is
pretty much just a local variable, and so there's no efficiency argument
against using it as a temporary.  Whether you consider this good style
is a matter of opinion.

"Owen Jacobson" <[EMAIL PROTECTED]> writes:
> I'd say there's no problem with this, PROVIDED you can ensure you'll
> never abort before completing the computation.

Not really an issue in Postgres: we do not support pass-by-reference
parameters and are unlikely to start doing so.  There isn't any way
that you can affect locals of a calling procedure before you return.

regards, tom lane



Re: [SQL] Function Parameters in GROUP BY clause cause errors

2006-03-22 Thread Tom Lane
I wrote:
> Hmm, this seems like a plpgsql deficiency.  It feels it can generate a
> separate parameter symbol ($n) for each occurrence of each variable it
> passes into a SQL query.  But for this query to be legal, the two
> instances of IntervalMinutes have to be represented by the *same*
> parameter symbol (notice they are not in the regurgitated query).

> It would be more efficient anyway to not generate multiple parameters
> for the same value, so we oughta fix this.

Patch applied to HEAD and 8.1 branches.

regards, tom lane
