Re: [SQL] rowtype and ecpg

2002-06-24 Thread Christoph Haller

What exactly is your problem? 
Calling a plpgsql function from within a C program 
or retrieving varchar data within a C program? 
Regards, Christoph 

> 
> I have a function that takes as a parameter ROWTYPE:
> 
> create or replace function test_func(test_table) 
> returns varchar as '
> declare
>lv_return   varchar;
> begin
>..
>return lv_return;
> end;
> ' LANGUAGE 'plpgsql';
> 
> How do I call this function from a C program (ecpg)? What should my
> declaration look like?
> 
> I tried a structure and got the error: Too many arguments
> 
> Thanks for your help
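For reference, a hedged sketch of one way to invoke such a row-argument function: call it at the SQL level, passing the table (or an alias) as the argument, so ecpg just fetches an ordinary varchar result column. Syntax follows later PostgreSQL releases; 7.2-era behavior may differ, and the WHERE clause is a placeholder.

```sql
-- Hypothetical sketch: the alias "t" acts as a row value, so the
-- plpgsql function receives one test_table row per result row.
SELECT test_func(t) AS result
FROM test_table t
WHERE ...;   -- restrict to the row(s) you want to pass
```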




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html





Re: [SQL] Select + min question

2002-06-24 Thread Devrim GUNDUZ


Hi,

On Sun, 23 Jun 2002, Christopher Kings-Lynne wrote:
> 
> Use a subselect (and don't compare to 't' if it's a boolean field...)
> 
> SELECT b_date FROM test WHERE active AND id=(SELECT min(id) FROM test) LIMIT
> 1;

It's a shame for me that I could not think of using subselects :-(

Anyway, thanks.

Best regards.

-- 

Devrim GUNDUZ

[EMAIL PROTECTED]
[EMAIL PROTECTED]

Web : http://devrim.oper.metu.edu.tr
-










[SQL] variance aggregate function incorrect? Reference Materials reg create aggregate

2002-06-24 Thread Joseph Syjuco

Hi,
I needed the variance function ... I don't know if I passed the wrong
parameters, or maybe this variance is not the variance that I'm looking
for, but it doesn't provide the right results:

variance=(nEx^2 - (Ex)^2)/(n(n-1))
My SQL statement:
select variance(answer) from tbl_answer (where answer is of type
integer)

Because of this I need to create my own variance aggregate (unless
someone enlightens me on what's happening with the existing one), so
I would like to know if anyone knows a good CREATE AGGREGATE
tutorial (with samples) ... all I got from Google so far is the
reference from ninthwonder.com.
TIA
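For what it's worth, PostgreSQL's variance() computes the sample variance, which is exactly the quoted formula. A hedged cross-check on a toy table (table and column names hypothetical; the cast avoids integer division):

```sql
-- Hypothetical cross-check: built-in variance() versus the formula
-- variance = (n*sum(x^2) - (sum(x))^2) / (n*(n-1))
CREATE TEMP TABLE t (x integer);
INSERT INTO t VALUES (2);
INSERT INTO t VALUES (4);
INSERT INTO t VALUES (4);
INSERT INTO t VALUES (6);

SELECT variance(x) AS builtin,
       (count(x)*sum(x*x) - sum(x)*sum(x))::numeric
         / (count(x)*(count(x)-1)) AS by_formula
FROM t;
-- both columns should show 8/3 = 2.6666...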












Re: [SQL] slow DELETE queries

2002-06-24 Thread Denis

On Thursday 20 June 2002 05:01 pm, you wrote:
> On Thu, 20 Jun 2002 15:23:53 +0200, Denis <[EMAIL PROTECTED]> wrote:
> >I traced the queries slowing it all down to this snippet in the debug log:
> >DELETE FROM phpbb_search_wordlist WHERE word_id IN (
> >SELECT word_id FROM phpbb_search_wordmatch WHERE word_id IN (
> >SELECT word_id FROM phpbb_search_wordmatch WHERE post_id IN (70535)
> >GROUP BY word_id) GROUP BY word_id HAVING COUNT(word_id) = 1)

Hi Manfred,

Thanks for your response.

> IN is known to be problematic;  try to use EXISTS or =, wherever
> possible.  Can you rewrite your innermost where clause to WHERE
> post_id = 70535?

I will try patching the DB abstraction layer to use this, if indeed the 
query allows it. (I am unsure whether this query will sometimes have more IDs 
inside the IN.)
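For reference, a hedged sketch of how the quoted query could be rewritten along Manfred's lines, keeping the original table and column names. It is only equivalent when the innermost IN list really is a single post_id:

```sql
-- Hypothetical rewrite: the innermost IN becomes a direct comparison,
-- the outer IN becomes a correlated EXISTS, and the HAVING COUNT = 1
-- becomes a correlated scalar count.
DELETE FROM phpbb_search_wordlist
WHERE EXISTS (
        SELECT 1
        FROM phpbb_search_wordmatch wm
        WHERE wm.word_id = phpbb_search_wordlist.word_id
          AND wm.post_id = 70535)
  AND (SELECT count(*)
       FROM phpbb_search_wordmatch wm2
       WHERE wm2.word_id = phpbb_search_wordlist.word_id) = 1;
```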

> Also create an index on phpbb_search_wordmatch.post_id.

I will try this too ! 

> If it's still too slow, give us some more information:
> Is word_id unique in phpbb_search_wordlist?
> Is (post_id, word_id) unique in phpbb_search_wordmatch?
> How many rows are in your tables?

Here is the information:
word_id is indeed unique in phpbb_search_wordlist. 
(post_id, word_id) should be unique in phpbb_search_wordmatch if the 
application is correctly written. It is supposed to serve as a relational 
lookup table for resolving search queries into the list of posts containing 
the matching words.

Rows in related tables :
phpbb_search_wordmatch : 2907191
phpbb_search_wordlist : 118306
phpbb_posts : 70953

VACUUM and VACUUM ANALYZE are performed nightly; maybe it would help to do 
this more often?

-- 
Denis Braekhus - ABC Startsiden AS
http://www.startsiden.no








[SQL] assign count() result to a declared variable in plpgsql

2002-06-24 Thread Joseph Syjuco


I want to put my count() result in a plpgsql declared integer variable:


declare f_count_var integer;
begin
select into f_count_var count(empno) from employee
end;

I tried this, but it doesn't work.















[SQL] index problem

2002-06-24 Thread Oleg Lebedev
Hi,
I have an objectid field of type bigint, but when I run queries like:

select * from table where objectid=123;

the index on objectid is not used, even though it's declared.
However, when I run:

select * from table where objectid='123';

the index is used.
Is there an option I can set so that postgres starts using indexes on a
numeric field without requiring conversion to varchar?
thanks,

Oleg
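For reference: in pre-8.0 PostgreSQL the planner will not use an index on a bigint (int8) column when compared to a bare integer literal, because the literal is typed int4. Quoting the literal delays its type resolution, and an explicit cast works too. A hedged sketch (table name hypothetical):

```sql
-- Two ways to let the planner consider the index on a bigint column:
SELECT * FROM my_table WHERE objectid = 123::bigint;  -- explicit cast
SELECT * FROM my_table WHERE objectid = '123';        -- quoted literal
```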


[SQL] Slow SELECT with distinct, in a TIMESTAMP type column

2002-06-24 Thread Marcos Garcia

Hi,


I have a table "send_total" like this:

                         Table "send_total"
  Column   |           Type           |              Modifiers
-----------+--------------------------+-------------------------------------------------------
 id        | integer                  | not null default nextval('"send_total_id_seq"'::text)
 smsdate   | timestamp with time zone |
 idclient  | integer                  | not null
 recv      | integer                  | default 0
 send      | integer                  | default 0
 recv_conf | integer                  | default 0
 send_conf | integer                  | default 0
 conf0     | integer                  | default 0
 conf1     | integer                  | default 0
 conf2     | integer                  | default 0
 conf3     | integer                  | default 0
 conf4     | integer                  | default 0
 conf5     | integer                  | default 0
 conf6     | integer                  | default 0
 conf7     | integer                  | default 0
Indexes: send_total_idclient_idx,
         send_total_smsdate_idx
Primary key: send_total_pkey
Unique keys: send_total_idclismsdate
(idclient and smsdate must be unique)


The problem is that my query is too slow when I use distinct:

pgsql> explain analyze SELECT distinct(smsdate) FROM send_total;

NOTICE:  QUERY PLAN:

Unique  (cost=15840.31..15870.81 rows=1220 width=8) (actual
time=56358.93..56452.78 rows=6670 loops=1)
  ->  Sort  (cost=15840.31..15840.31 rows=12203 width=8) (actual
time=56358.91..56436.95 rows=12062 loops=1)
->  Seq Scan on send_total  (cost=0.00..15012.03 rows=12203
width=8) (actual time=0.55..56205.25 rows=12062 loops=1)
Total runtime: 56518.26 msec


This query is becoming slower and slower, day by day.
What do I have to do to optimize my query or database? I've had this
database working since January 2002.

I've tried running the following command, but without success:

$ /usr/bin/vacuumdb -z -d dbname -t send_total



Thanks in advance,


M.P.Garcia





-- 
M.P.Garcia
PT Inovação, SA
Serviços e Redes Móveis 
Rua José Ferreira Pinto Basto - 3810 Aveiro
Tel: 234 403 253  -  Fax: 234 424 160
E-mail: [EMAIL PROTECTED]





Re: [SQL] variance aggregate function incorrect? Reference Materials reg create aggregate

2002-06-24 Thread Tom Lane

Joseph Syjuco <[EMAIL PROTECTED]> writes:
> i needed the variance function ... i dont know if i introduced the wrong
> parameters or maybe this variance is not the variance that im looking
> for but it doesnt provide the right results

> variance=(nEx^2 - (Ex)^2)/(n(n-1))

That's what I'd expect it to do, all right.  Please define "doesnt
provide the right results".

regards, tom lane








Re: [SQL] variance aggregate function incorrect? Reference Materials

2002-06-24 Thread Bruce Momjian

Joseph Syjuco wrote:
> hi
> i needed the variance function ... i dont know if i introduced the wrong
> parameters or maybe this variance is not the variance that im looking
> for but it doesnt provide the right results
> 
> variance=(nEx^2 - (Ex)^2)/(n(n-1))
> my sql statement
> select variance(answer) from tbl_answer (where answer is of type
> integer)
> 
> because of this i need to create my own variance aggregate (unless
> someone enlightens me on whats happening with the variance aggregate) so
> i would like to know if theres anyone who knows a good create aggregate
> tutorial (with samples) ... all i got from google so far is the
> reference from ninthwonder.com

Uh, I tried this:

test=> select variance(relnatts) from pg_class;
   variance
---
 24.4259169884
(1 row)

Not sure if it is the right number or not.  Have you tried taking the
square root to see if that number is correct?

The docs say:

   The variance is the square of the standard deviation.  The
   supported data types and result types are the same as for
   standard deviation.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026








Re: [SQL] assign count() result to a declared variable in plpgsql

2002-06-24 Thread Jeff Eckermann

try:
f_count_var := count(empno) from employee;

--- Joseph Syjuco <[EMAIL PROTECTED]> wrote:
> 
> i want to put my count() result in a plpgsql
> declared integer variable
> 
> 
> declare f_count_var integer;
> begin
>   select into f_count_var count(empno) from employee
> end;
> 
> tried this one but it doesnt work









Re: [SQL] Slow SELECT with distinct, in a TIMESTAMP type column

2002-06-24 Thread Stephan Szabo


On 24 Jun 2002, Marcos Garcia wrote:

> The problem is that my query is too slow when i use distinct:
>
> pgsql> explain analyze SELECT distinct(smsdate) FROM send_total;
>
> NOTICE:  QUERY PLAN:
>
> Unique  (cost=15840.31..15870.81 rows=1220 width=8) (actual
> time=56358.93..56452.78 rows=6670 loops=1)
>   ->  Sort  (cost=15840.31..15840.31 rows=12203 width=8) (actual
> time=56358.91..56436.95 rows=12062 loops=1)
> ->  Seq Scan on send_total  (cost=0.00..15012.03 rows=12203
> width=8) (actual time=0.55..56205.25 rows=12062 loops=1)
> Total runtime: 56518.26 msec

As a question, what does explain analyze show if you
set enable_seqscan=off; first?










Re: [SQL] Slow SELECT with distinct, in a TIMESTAMP type column

2002-06-24 Thread Stephan Szabo

On 24 Jun 2002, Marcos Garcia wrote:

>
> When i make the set enable_seqscan = off the result is:
>
> dbname=# set enable_seqscan = off;
> SET VARIABLE
> dbname=# explain analyze SELECT distinct(smsdate) FROM send_total;
>
> NOTICE:  QUERY PLAN:
>
> Unique  (cost=0.00..31084.39 rows=1232 width=8) (actual
> time=108.60..195210.91 rows=6676 loops=1)
>   ->  Index Scan using send_total_smsdate_idx on send_total
> (cost=0.00..31053.60 rows=12315 width=8) (actual time=108.59..195145.43
> rows=12087 loops=1)
> Total runtime: 195226.24 msec

Well, the plan choice makes sense (the sequential scan is about 3.5x
faster than the index scan).
It looks like most of the time is taken up by the actual scan (is
"select smsdate from send_total" pretty much as slow?).
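If the bare scan really is that slow over only ~12,000 live rows, the table file may be badly bloated with dead tuples: in 7.x a plain VACUUM marks dead space reusable but does not shrink the file. A hedged sketch of the heavier fix (takes an exclusive lock while it rewrites the table):

```sql
-- Hypothetical maintenance step: rewrite the table file compactly
-- and refresh planner statistics in one pass.
VACUUM FULL ANALYZE send_total;
```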









Re: [SQL] Slow SELECT with distinct, in a TIMESTAMP type column

2002-06-24 Thread Marcos Garcia


When I set enable_seqscan = off, the result is:

dbname=# set enable_seqscan = off;
SET VARIABLE
dbname=# explain analyze SELECT distinct(smsdate) FROM send_total;

NOTICE:  QUERY PLAN:

Unique  (cost=0.00..31084.39 rows=1232 width=8) (actual
time=108.60..195210.91 rows=6676 loops=1)
  ->  Index Scan using send_total_smsdate_idx on send_total 
(cost=0.00..31053.60 rows=12315 width=8) (actual time=108.59..195145.43
rows=12087 loops=1)
Total runtime: 195226.24 msec




On Mon, 2002-06-24 at 19:36, Stephan Szabo wrote:
> 
> On 24 Jun 2002, Marcos Garcia wrote:
> 
> > The problem is that my query is too slow when i use distinct:
> >
> > pgsql> explain analyze SELECT distinct(smsdate) FROM send_total;
> >
> > NOTICE:  QUERY PLAN:
> >
> > Unique  (cost=15840.31..15870.81 rows=1220 width=8) (actual
> > time=56358.93..56452.78 rows=6670 loops=1)
> >   ->  Sort  (cost=15840.31..15840.31 rows=12203 width=8) (actual
> > time=56358.91..56436.95 rows=12062 loops=1)
> > ->  Seq Scan on send_total  (cost=0.00..15012.03 rows=12203
> > width=8) (actual time=0.55..56205.25 rows=12062 loops=1)
> > Total runtime: 56518.26 msec
> 
> As a question, what does explain analyze show if you
> set enable_seqscan=off; first?
> 
-- 
M.P.Garcia
PT Inovação, SA
Serviços e Redes Móveis 
Rua José Ferreira Pinto Basto - 3810 Aveiro
Tel: 234 403 253  -  Fax: 234 424 160
E-mail: [EMAIL PROTECTED]





Re: [SQL] Can somebody help me to optimize this huge query?

2002-06-24 Thread Josh Berkus


Dirk,

> SELECT DISTINCT
>   t_sek.docindex,
>   t_sek.envelope,
>   bt.oid,
>   bt.time
> FROM
>   boxinfo bt, boxinfo bd, boxinfo bo,
>   docobj t_sek, docobj t_pgr, docobj t_sta, docobj t_sol,
>   docobj d_pnr, docobj d_sta,
>   docobj o_sek, docobj o_pgr, docobj o_pnr
> WHERE
>   t_sek.docspec=124999684 and
>   t_pgr.docspec=124999684 and
>   t_sol.docspec=124999684 and
>   t_sta.docspec=124999684 and


Well, from the look of things, you have no problem with indexing ... the 
planner is using your indexes for everything.   How long is it taking to 
return a response?

All of those nested loops do give me the impression that you *might* be able 
to improve performance by forcing the planner with explicit joins and even 
subselects.  This is quite an art form; I can't really give you specifics on 
it, but the idea is to use your knowledge of the database to reduce the size 
of each hash join before it's formed.  The basic approach is to join tables 
in small-to-large order. 

However, with an average 3-column primary key, I'm not certain that this is 
practical.  I'm also not certain that the query planner is your bottleneck; 
that EXPLAIN plan looks pretty good to me.

Also, have a look at your computer's memory, disk i/o and swap memory 
activity.   If your machine is being forced to use the swap for query 
storage, that's going to slow you down a lot.  A good utility for this on 
linux is memstat.   Then you can play with postgres' sort_mem and buffer 
settings to try to make the whole thing happen in RAM.

Finally, how big are these tables?  If we're talking 200mb of data, and you're 
using IDE drives, you'll need a hardware upgrade -- like a UW SCSI RAID 
array.  Disk I/O has a *big* impact on database efficiency.
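The explicit-join technique described above can be sketched like this (all names hypothetical; in PostgreSQL of this era, explicit JOIN syntax fixes the join order exactly as written):

```sql
-- Hypothetical sketch: write JOINs explicitly, smallest or most
-- selective tables first, so each intermediate result stays small.
SELECT l.payload
FROM tiny t
JOIN small s ON s.tiny_id  = t.id
JOIN large l ON l.small_id = s.id
WHERE t.kind = 'x';
```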

-- 
-Josh Berkus

__AGLIO DATABASE SOLUTIONS___
Josh Berkus
   Complete information technology  [EMAIL PROTECTED]
and data management solutions   (415) 565-7293
   for law firms, small businesses   fax 621-2533
and non-profit organizations.   San Francisco









Re: [SQL] Request for builtin function: Double_quote

2002-06-24 Thread Tom Lane

"Josh Berkus" <[EMAIL PROTECTED]> writes:
> Well, first off, quote_literal isn't in the documentation under
> "Functions and Operators".  So this is the first I've heard about it
> -- or probably anyone else outside the core team.  How long has it
> been around?

A while; however, the only documentation was in the discussion of EXECUTE
in the pl/pgsql chapter of the Programmer's Guide, which is probably not
the best place.

> Therefore, I withdraw my initial request, and request instead that
> quote_literal be added to the function documentation in String
> Functions and Operators.

Done; I also added its sister function quote_ident.  See the devel
docs at
http://candle.pha.pa.us/main/writings/pgsql/sgml/functions-string.html

regards, tom lane
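For context, a hedged sketch of where these functions are typically used: building dynamic SQL with EXECUTE inside a plpgsql body. Names are hypothetical, and the quotes are doubled because the function body itself is a single-quoted string in this era:

```sql
-- Hypothetical fragment from inside a single-quoted plpgsql body:
-- quote_ident protects the identifier, quote_literal the value.
EXECUTE ''UPDATE '' || quote_ident(tabname)
     || '' SET note = '' || quote_literal(new_note);
```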








Re: [SQL] Request for builtin function: Double_quote

2002-06-24 Thread Josh Berkus


Tom,

> Done; I also added its sister function quote_ident.  See the devel
> docs at
> http://candle.pha.pa.us/main/writings/pgsql/sgml/functions-string.html

Tante Grazie.

-- 
-Josh Berkus









Re: [SQL] assign count() result to a declared variable in plpgsql

2002-06-24 Thread Tom Lane

Joseph Syjuco <[EMAIL PROTECTED]> writes:
> i want to put my count() result in a plpgsql declared integer variable
> declare f_count_var integer;
> begin
>   select into f_count_var count(empno) from employee
> end;
> tried this one but it doesnt work

Works for me:

regression=# create function foo() returns int as '
regression'# declare f_count_var integer;
regression'# begin
regression'# select into f_count_var count(unique1) from tenk1;
regression'# return f_count_var;
regression'# end' language plpgsql;
CREATE FUNCTION
regression=# select foo();
  foo
---
 1
(1 row)


What PG version are you using?

regards, tom lane








[SQL] altering column width

2002-06-24 Thread tdn

Hi there,

How do I increase the width of a column in a table?
Can I do it without having to shut down the database?

Rgds
TDN
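A common workaround in this era, since ALTER COLUMN TYPE does not exist yet, is an add/copy/drop/rename sequence; it runs online inside a transaction, so no shutdown is needed. A hedged sketch (names hypothetical; DROP COLUMN requires 7.3 or later):

```sql
-- Hypothetical sketch: widen varchar(20) to varchar(50) without
-- ALTER COLUMN TYPE, keeping the database up the whole time.
BEGIN;
ALTER TABLE my_table ADD COLUMN name_new varchar(50);
UPDATE my_table SET name_new = name;
ALTER TABLE my_table DROP COLUMN name;
ALTER TABLE my_table RENAME COLUMN name_new TO name;
COMMIT;
```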



