Re: [SQL] Function unknown

2003-01-20 Thread Tomasz Myrta
Pedro Igor wrote:


What is this function?

CREATE FUNCTION plpgsql_call_handler() RETURNS language_handler AS
'plpgsql_call_handler' LANGUAGE 'c' VOLATILE;

I think it is just the PL/pgSQL call handler which, depending on your
version, is created when the database is created or after running
"createlang plpgsql".
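For reference, "createlang plpgsql" effectively issues statements along
these lines (a sketch only; the exact syntax varies across PostgreSQL
versions, and createlang generates the version-appropriate form):

```sql
-- Register the C call handler, then the language that uses it.
CREATE FUNCTION plpgsql_call_handler() RETURNS language_handler
    AS 'plpgsql_call_handler' LANGUAGE 'c' VOLATILE;

CREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql'
    HANDLER plpgsql_call_handler;
```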

Regards,
Tomasz Myrta



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [SQL] pg_dump problem

2003-01-20 Thread Bhuvan A
> mv camper.dump20020116 camper_bak/
> 
> The error I get is ::
> 
> mv: camper.dump20020116: Value too large for defined data type
> 

This may not be a PostgreSQL problem but a limitation of your system:
"Value too large for defined data type" usually means the file has grown
past 2 GB and your mv (or filesystem) lacks large-file support. Check the
documentation or mailing list for your filesystem.
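If the dump itself is the file hitting the limit, a workaround mentioned
in the PostgreSQL documentation of that era is to pipe pg_dump through
split. A self-contained sketch of the split/reassemble round trip, using a
generated file in place of a real dump (the database name "camper" below
is taken from the file name above and is only illustrative):

```shell
# Simulate the dump/split/restore round trip. With a real database you
# would pipe pg_dump instead of generating dump.sql, e.g.:
#   pg_dump camper | split -b 1000m - camper.dump
#   cat camper.dump* | psql camper
seq 1 100000 > dump.sql                # stand-in for a large dump
split -b 1m dump.sql dump.part.        # chunks below the size limit
cat dump.part.* > restored.sql         # reassemble in order
cmp dump.sql restored.sql && echo "round-trip OK"
```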

regards,
bhuvaneswaran






Re: [SQL] Implementing automatic updating of primary keys...

2003-01-20 Thread Bhuvan A
> 
> there can be many  such tables that have foreign key id which is referencing
> the master table test_master column "id". what we want is when some ids become 
   ^^^
It should be profile_master, right?

> redundant we have to merge two ids, we want that throughout the system the
> occurrence of the old primary key should be removed and replaced with the new id.
> 

It can certainly be done at the database level. Trigger a function before
delete on the primary table (profile_master) that updates the foreign
tables with the appropriate id; the redundant id can then be deleted from
the primary table.
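A rough sketch of such a trigger (everything here except profile_master is
invented for illustration: the child table profile_detail and the mapping
table id_merge_map, which records which surviving id replaces each
redundant one, are assumptions, not from this thread; pre-7.3 "opaque"
return type):

```sql
CREATE FUNCTION redirect_profile_refs() RETURNS opaque AS '
BEGIN
    -- repoint child rows at the surviving id before the old row is deleted
    UPDATE profile_detail
       SET id = (SELECT new_id FROM id_merge_map WHERE old_id = OLD.id)
     WHERE id = OLD.id;
    RETURN OLD;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER profile_merge_trig
    BEFORE DELETE ON profile_master
    FOR EACH ROW EXECUTE PROCEDURE redirect_profile_refs();
```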

regards,
bhuvaneswaran






[SQL] performance question

2003-01-20 Thread Moritz Lennert
Hello,

I have a table with some 2.2 million rows on a Pentium4, 1.8GHz with 512
MB RAM.
Some queries I launch take quite a long time, and I'm wondering whether
this is normal, or whether I can get better performance somehow.

As an example, I have a field which is in char(2), with 23 different
possible values. When I launch a "select * from table where field = 'xx'"
this takes a very long time (some 230194.10 msec, i.e. almost 4 minutes,
according to "explain analyze"). I've tried indexing on this column and
vacuuming ("vacuum analyze"), but this doesn't change anything. "explain
select" always gives me a sequential scan. The correlation value in
pg_stats for this column is 0.0583268, so a seq scan is probably cheaper
than an index scan.

Now maybe this is just the way it is and there is no possibility to
enhance performance, but if someone has some tips on what I might try,
they are very welcome !
One question I asked myself is whether the use of char(2) is the best
option. The column (and most others in the table) contains codes that
designate different characteristics (for ex. in a column 'sex' one would
find '1'=male, '2'=female).

Moritz






Re: [SQL] Unique indexes not unique?

2003-01-20 Thread Jimmy Mäkelä
> From: Tom Lane [mailto:[EMAIL PROTECTED]]
> > Limit  (cost=22669.68..22669.68 rows=95 width=372)
> >   ->  Sort  (cost=22669.68..22669.68 rows=96 width=372)
> > ->  Index Scan using agentresults2_modified_user,
> > agentresults2_modified_user, agentresults2_modified_user on 
> agentresults
> > (cost=0.00..22666.52 rows=96 width=372)
> 
> Should I guess from the index name that it is on (modified, 
> usr) and not
> on (usr, modified)?  If so, the problem is that the OR-expansion code

Sorry for the late answer, was out of town for the week.
Yes this is the case, and in my case I wouldn't want to change the order.

Sure it would be nice to support this case too, but not if it implies
penalties for more typical queries. Doing the expansion manually isn't
that hard (but quite ugly).

Thanks for all answers.

Regards,
Jimmy Mäkelä




Re: [SQL] performance question

2003-01-20 Thread Manfred Koizar
On Mon, 20 Jan 2003 12:40:34 +0100 (CET), "Moritz Lennert"
<[EMAIL PROTECTED]> wrote:
>I have a table with some 2.2 million rows on a Pentium4, 1.8GHz with 512
>MB RAM.
>Some queries I launch take quite a long time, and I'm wondering whether
>this is normal, or whether I can get better performance somehow.

Moritz, we need more information.  Please show us
 . your PG version
 . CREATE TABLE ...
 . indices
 . your query
 . EXPLAIN ANALYZE output
 . your settings, especially shared_buffers, sort_mem,
random_page_cost, effective_cache_size

>One question I asked myself is whether the use of char(2) is the best
>option. The column (and most others in the table) contains codes that
>designate different characteristics (for ex. in a column 'sex' one would
>find '1'=male, '2'=female).

char(2) needs 8 bytes, smallint only 2 bytes (unless followed by a
column with 4 or 8 byte alignment).  Instead of char(1) (8 bytes) you
might want to use the Postgres specific type "char" (with the double
quotes!) needing only 1 byte.
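To make the byte sizes concrete (the table and column names here are made
up for illustration):

```sql
-- "char" (with the quotes) is Postgres' internal 1-byte type;
-- char(1) is bpchar, which carries a 4-byte length header.
CREATE TABLE size_demo (
    c_bp  char(1),   -- varlena: 4-byte header + 1 byte data (+ padding)
    c_one "char",    -- exactly 1 byte
    code  smallint   -- 2 bytes
);
```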

Servus
 Manfred




Re: [SQL] performance question

2003-01-20 Thread Moritz Lennert
> On Mon, 20 Jan 2003 12:40:34 +0100 (CET), "Moritz Lennert"
> <[EMAIL PROTECTED]> wrote:
>>I have a table with some 2.2 million rows on a Pentium4, 1.8GHz with 512
>>MB RAM.
>>Some queries I launch take quite a long time, and I'm wondering whether
>>this is normal, or whether I can get better performance somehow.
>
> Moritz, we need more information.  Please show us
>  . your PG version

 7.2.1-2woody2

>  . CREATE TABLE ...

CREATE TABLE "rec81" (
"commune_residence" character(5),
"sexe" character(1),
"annee_naissance" smallint,
"mois_naissance" smallint,
"jour_naissance" smallint,
"parent" character(2),
"etat_civil" character(1),
"nationalite" character(3),
"type_menage" character(1),
"logement_depuis_naiss" character(1),
"domicile_mere" character(6),
"dans_log_depuis_quand" smallint,
"meme_log_1980" character(1),
"commune_1980" character(6),
"annee_entree_belgique" smallint,
"age_fin_etude" smallint,
"detenteur_diplome" character(1),
"type_diplome" character(2),
"detenteur_diplome_etranger" character(1),
"actif" character(1),
"actif_temporaire" character(1),
"type_profession" character(4),
"statut_professionnel" character(1),
"temps_partiel" character(1),
"nombre_heures_travail" smallint,
"employeur" character(1),
"nombre_personnes_ds_services" integer,
"direction" character(1),
"lieu_travail" character(6),
"secteur_activite" character(3),
"emploi_complementaire" character(1),
"type_emploi_complementaire" character(4),
"lieu_depart_navette" character(1),
"commune_depart_navette" character(6),
"distance" smallint,
"nbre_navettes_par_jour" character(1),
"nbre_jours_navette_par_semaine" character(1),
"type_transport_navette" character(3),
"duree_trajet_navette" character(1),
"statut_non_occupe" character(2),
"effectif_menage" smallint,
"sec_stat_residence" character(6)
);

>  . indices

CREATE INDEX rec81_commune_residence_idx ON rec81 USING btree
(commune_residence);
CREATE INDEX rec81_annee_naissance_idx ON rec81 USING btree
(annee_naissance);
CREATE INDEX rec81_nationalite_idx ON rec81 USING btree (nationalite);
CREATE INDEX rec81_meme_log_1980_idx ON rec81 USING btree (meme_log_1980);
CREATE INDEX rec81_commune_1980_idx ON rec81 USING btree (commune_1980);
CREATE INDEX rec81_age_fin_etude_idx ON rec81 USING btree (age_fin_etude);
CREATE INDEX rec81_detenteur_diplome_idx ON rec81 USING btree
(detenteur_diplome);
CREATE INDEX rec81_type_profession_idx ON rec81 USING btree
(type_profession);
CREATE INDEX rec81_statut_professionnel_idx ON rec81 USING btree
(statut_professionnel);
CREATE INDEX rec81_lieu_travail_idx ON rec81 USING btree (lieu_travail);
CREATE INDEX rec81_secteur_activite_idx ON rec81 USING btree
(secteur_activite);
CREATE INDEX rec81_statut_non_occupe_idx ON rec81 USING btree
(statut_non_occupe);
CREATE INDEX rec81_sec_stat_residence_idx ON rec81 USING btree
(sec_stat_residence);
CREATE INDEX rec81_comres_typedipl_idx ON rec81 USING btree
(commune_residence, type_diplome);
CREATE INDEX rec81_type_diplome_idx ON rec81 USING btree (type_diplome);

>  . your query

select commune_residence, type_diplome from rec81 where type_diplome = '11';

>  . EXPLAIN ANALYZE output

explain analyze select commune_residence, type_diplome from rec81 where
type_diplome = '11';
NOTICE:  QUERY PLAN:

Seq Scan on rec81  (cost=0.00..120316.30 rows=177698 width=15) (actual
time=23.03..219164.82 rows=176621 loops=1)
Total runtime: 226149.03 msec

EXPLAIN

>  . your settings, especially shared_buffers, sort_mem,
> random_page_cost, effective_cache_size


shared_buffers = 128

The others are not set (I suppose they should be set in the
postgresql.conf file ?)
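For reference, these settings do live in postgresql.conf (followed by a
server restart or reload). A sketch with purely illustrative values, not
tuned recommendations from this thread (7.2-era setting names):

```
# postgresql.conf -- illustrative values only
shared_buffers = 2000            # ~16 MB of 8 kB buffers
sort_mem = 8192                  # kB available per sort
effective_cache_size = 40000     # in 8 kB pages (~320 MB)
random_page_cost = 3
```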


Thanks for your help !

Moritz






Re: [SQL] performance question

2003-01-20 Thread Moritz Lennert
> On Mon, 20 Jan 2003 16:42:12 +0100 (CET), "Moritz Lennert"
> <[EMAIL PROTECTED]> wrote:
>>explain analyze select commune_residence, type_diplome from rec81 where
>>type_diplome = '11';
>>NOTICE:  QUERY PLAN:
>>
>>Seq Scan on rec81  (cost=0.00..120316.30 rows=177698 width=15) (actual
>>time=23.03..219164.82 rows=176621 loops=1)
>>Total runtime: 226149.03 msec
>
> Assuming that there are no NULLs, I guesstimate that 25 tuples should
> fit onto one page (could be more than 40 if you use smallint and
> "char").  That gives 88K pages for 2.2M tuples.  However,
> cost=0.00..120316.30  tells us that there are almost 120K pages.  Time
> to do a VACUUM FULL?
>

I'll try that, although I haven't changed any of the tuples since import
of the data (this is a static table...)

> From what I've seen I think that the planner is right to choose a seq
> scan.  226 seconds for reading 120K pages (~ 1GB) is not very
> impressive, though.  What kind of disk do you have?

IDE, Samsung, 7200rpm

>  Is your disk heavily fragmented?

It shouldn't be.

> Did you enable DMA?

No, should I ?

>  What else was running on
> your machine while you did that VACUUM ANALYZE?

Mozilla, maybe xterm with vi...

>>shared_buffers = 128
>
> It won't help much for this query we are discussing, but I would
> recommend setting shared_buffers to something in the range [1000,
> 4000].
>
> And one of my favorites: effective_cache_size = 4
>

I will have to increase /proc/sys/kernel/shmmax for that, right?

Thanks again !

Moritz




Re: [SQL] performance question

2003-01-20 Thread Josh Berkus
Moritz,

There is a performance mailing list at:

[EMAIL PROTECTED]

--Josh




Re: [SQL] performance question

2003-01-20 Thread Manfred Koizar
On Mon, 20 Jan 2003 18:10:24 +0100 (CET), "Moritz Lennert"
<[EMAIL PROTECTED]> wrote:
>I'll try that, although I haven't changed any of the tuples since import
>of the data (this is a static table...)

Then I must have miscalculated something :-(  What does VACUUM VERBOSE
ANALYZE  say?

>> From what I've seen I think that the planner is right to choose a seq
>> scan.  226 seconds for reading 120K pages (~ 1GB) is not very
>> impressive, though.  What kind of disk do you have?
>
>IDE, Samsung, 7200rpm
>
>>  Is your disk heavily fragmented?
>
>It shouldn't be.
>
>> Did you enable DMA?
>
>No, should I ?

Yes.  Here is what I got on a P IV 2 GHz with a Seagate 7200rpm(?)
disk:

~ # hdparm -t -T /dev/hda

/dev/hda:
 Timing buffer-cache reads:   128 MB in  0.39 seconds =328.21 MB/sec
 Timing buffered disk reads:  64 MB in  2.49 seconds = 25.70 MB/sec

vs.

~ # hdparm -t -T /dev/hda

/dev/hda:
 Timing buffer-cache reads:   128 MB in  0.37 seconds =345.95 MB/sec
 Timing buffered disk reads:  64 MB in 23.38 seconds =  2.74 MB/sec


~ # l xx
-rw-r--r--1 root root 1332104434 2003-01-20 19:04 xx
~ # time dd if=xx of=/dev/null bs=8k
162610+1 Records in
162610+1 Records out

real0m48.665s
user0m0.150s
sys 0m1.690s
~ # hdparm -d 0 /dev/hda
~ # time dd if=xx of=/dev/null bs=8k
162610+1 Records in
162610+1 Records out

real7m42.666s
user0m0.270s
sys 1m27.160s


With DMA: More than 3000 pages / second
Without DMA:   ~ 350 pages / second

Your throughput:   ~ 530 pages / second
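For what it's worth, DMA can be switched back on the same way it was
switched off above (run as root; the device name /dev/hda is whatever your
disk actually is):

```shell
hdparm -d 1 /dev/hda    # enable DMA
hdparm -d /dev/hda      # show the current setting
```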

>> recommend setting shared_buffers to something in the range [1000,
>> 4000].
>> And one of my favorites: effective_cache_size = 4
>
>I will have to increase /proc/sys/kernel/shmmax for that, right?

Maybe for shared_buffers.  Certainly not for effective_cache_size.
The latter does not consume resources, it's just a hint for the
planner.

Servus
 Manfred




[SQL] Could someone help me fix my array_list function?

2003-01-20 Thread Guy Fraser
Hi

I am trying to write a function to step through an array and output each value 
as a set {list}, I think.

This is what I want to do:

select attribute,array_list(values,1,sizeof(values)) as value from av_list;

Turn :
 attr6 | {val3,val7,val4,val5}

Into :
 attr6 | val3
 attr6 | val7
 attr6 | val4
 attr6 | val5

Below I have included my functions, a test query, a table definition
and some sample data.

If anyone already has a function to do this I would be elated.

Note: a function "text array_dims(array[])" existed on the machine I started 
this on, but does not exist on my machine at home. It outputs a text value 
like '[1:1]' when there is only one item in the array and '[1:6]' when there 
are six items. My functions expect that function to exist.
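For illustration, that output format can be seen directly on a server where
array_dims() is available:

```sql
SELECT array_dims('{val3,val7,val4,val5}'::text[]);
-- yields: [1:4]
```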


Any help would be appreciated.

Guy

The entire selection below can be pasted to a shell. It will create a test 
database "testdb", add plpgsql to the database, then create the functions and a 
populated table before running a test query.

---%<...Cut Here...
createdb testdb
createlang plpgsql testdb
echo "
--###Start of Functions###
-- Array dimension functions.
--
-- Throw away old version of function
DROP FUNCTION array_diml(text[]);
--
-- Return the start 'left' dimension for the text array.
CREATE FUNCTION array_diml(text[])
	RETURNS int2
	AS
'select int2(ltrim(rtrim(rtrim(array_dims($1),\']0123456789\'),\':\'),\'[\')) 
AS RESULT;'
	LANGUAGE sql
	WITH (iscachable,isstrict)
;
--
-- Throw away old version of function
DROP FUNCTION array_dimr(text[]);
--
-- Return the end 'right' dimension for the text array.
CREATE FUNCTION array_dimr(text[])
	RETURNS int2
	AS 'select 
int2(rtrim(ltrim(ltrim(array_dims($1),\'[0123456789\'),\':\'),\']\')) AS RESULT;'
	LANGUAGE sql
	WITH (iscachable,isstrict)
;
--
-- Throw away old version of function
DROP FUNCTION array_list(text[],smallint);
--
-- Iterate array and post results
CREATE FUNCTION array_list(text[],smallint)
RETURNS SETOF text AS '
DECLARE
  inarray ALIAS FOR $1;
  dim ALIAS FOR $2;
BEGIN
  FOR counter IN 1..dim LOOP
RAISE NOTICE ''Getting element % of %'',counter,inarray;
  RETURN inarray[counter];
 END LOOP;
END;
'
LANGUAGE 'plpgsql';
--###End of Functions###

--###Start of test query###
--
-- Get a list with each destination for each mailbox
SELECT a_mailbox,
  array_list(a_destination,
array_dimr(a_destination))
FROM mail_aliases;
--###End of test query###

--###Start of table and sample data###
DROP TABLE mail_aliases;
CREATE TABLE mail_aliases(
  a_mailbox text,
  a_destination text[]
);
COPY mail_aliases FROM stdin USING DELIMITERS ':';
alias1:{dest1}
alias2:{dest2,dest1}
alias3:{dest3,dest4}
alias4:{dest3,dest4,dest5}
alias5:{dest6,dest7}
alias6:{dest3,dest7,dest4,dest5}
\.
--###End of table and sample data###

--###Start of test query###
--
-- Get a list with each destination for each mailbox
SELECT a_mailbox,
  array_list(a_destination,
array_dimr(a_destination))
FROM mail_aliases;
--###End of test query###
" | psql testdb
---%<...Cut Here...





Re: [SQL] performance question

2003-01-20 Thread Moritz Lennert
> Moritz,
>
> There is a performance mailing list at:
>
> [EMAIL PROTECTED]
>
> --Josh

I'm sorry, I didn't realize this, it is certainly closer to what I need.
Next time I'll look better (google brought up references to this list, so
I subscribed here).

Thanks for the hint,

Moritz





Re: [SQL] performance question

2003-01-20 Thread Tom Lane
"Moritz Lennert" <[EMAIL PROTECTED]> writes:
> One question I asked myself is whether the use of char(2) is the best
> option.

It may not be, particularly if you are running in a non-English locale.
Comparisons will invoke the standard library routine strcoll(), which is
depressingly slow in some locales, at least on some platforms.

> The column (and most others in the table) contains codes that
> designate different characteristics (for ex. in a column 'sex' one would
> find '1'=male, '2'=female).

If you are using numeric codes you are surely better off storing them
as integer or perhaps smallint (integer is less likely to create type
mismatch headaches, though).  In the above example you are getting the
worst of both worlds: there's no mnemonic value in your data entries,
*and* you're paying for textual comparisons.
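A sketch of the lookup-table approach Tom describes (the table and column
names are invented for illustration):

```sql
-- store the code compactly; keep the mnemonic in a small reference table
CREATE TABLE sex_codes (
    code  smallint PRIMARY KEY,
    label text NOT NULL
);
INSERT INTO sex_codes VALUES (1, 'male');
INSERT INTO sex_codes VALUES (2, 'female');

-- the big table then holds "sexe smallint", and an integer comparison
-- such as WHERE sexe = 1 avoids strcoll() entirely
```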

regards, tom lane




Re: [SQL] performance question

2003-01-20 Thread Tomasz Myrta
Moritz Lennert wrote:


Hello,

I have a table with some 2.2 million rows on a Pentium4, 1.8GHz with 512
MB RAM.
Some queries I launch take quite a long time, and I'm wondering whether
this is normal,or whether I can get better performance somehow.

As an example, I have a field which is in char(2), with 23 different
possible values. 

This field is not very selective. Are there any other fields that always 
appear in the WHERE condition? If so, a multi-column index should help.
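For example (this column pairing is hypothetical; pick whatever columns
your queries actually filter on together):

```sql
-- composite index: useful when the WHERE clause constrains both columns
CREATE INDEX rec81_dipl_comres_idx ON rec81 (type_diplome, commune_residence);
```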

Tomasz Myrta




Re: [SQL] performance question

2003-01-20 Thread Josh Berkus
Moritz,

> I'm sorry, I didn't realize this, it is certainly closer to what I need.
> Next time I'll look better (google brought up references to this list,
> so I subscribed here).

Hey, there are 18 active lists now ... we don't expect anyone to get
the right one right off!

-Josh Berkus




Re: [SQL] Could someone help me fix my array_list function?

2003-01-20 Thread Joe Conway
Guy Fraser wrote:

This is what I want to do:

select attribute,array_list(values,1,sizeof(values)) as value from av_list;

Turn :
 attr6 | {val3,val7,val4,val5}

Into :
 attr6 | val3
 attr6 | val7
 attr6 | val4
 attr6 | val5


You didn't mention the version of PostgreSQL. If you're using < 7.3, good luck 
;-). If you are using 7.3, the following works:

DROP TABLE mail_aliases;
CREATE TABLE mail_aliases(
  a_mailbox text,
  a_destination text[]
);

INSERT INTO mail_aliases VALUES ('alias1', '{dest1}');
INSERT INTO mail_aliases VALUES ('alias2', '{dest2,dest1}');
INSERT INTO mail_aliases VALUES ('alias3', '{dest3,dest4}');
INSERT INTO mail_aliases VALUES ('alias4', '{dest3,dest4,dest5}');
INSERT INTO mail_aliases VALUES ('alias5', '{dest6,dest7}');
INSERT INTO mail_aliases VALUES ('alias6', '{dest3,dest7,dest4,dest5}');

CREATE TYPE mail_aliases_list_type AS (a_mailbox text, a_destination_el text);
CREATE OR REPLACE FUNCTION mail_aliases_list() RETURNS SETOF 
mail_aliases_list_type AS '
DECLARE
  rec record;
  retrec record;
  low int;
  high int;
BEGIN
 FOR rec IN SELECT a_mailbox, a_destination FROM mail_aliases LOOP
  SELECT INTO low
   replace(split_part(array_dims(rec.a_destination),'':'',1),''['','''')::int;
  SELECT INTO high
   replace(split_part(array_dims(rec.a_destination),'':'',2),'']'','''')::int;

  FOR i IN low..high LOOP
   SELECT INTO retrec rec.a_mailbox, rec.a_destination[i];
   RETURN NEXT retrec;
  END LOOP;
 END LOOP;
 RETURN;
END;
' LANGUAGE 'plpgsql';

regression=# SELECT a_mailbox, a_destination_el FROM mail_aliases_list();
 a_mailbox | a_destination_el
-----------+------------------
 alias1    | dest1
 alias2    | dest2
 alias2    | dest1
 alias3    | dest3
 alias3    | dest4
 alias4    | dest3
 alias4    | dest4
 alias4    | dest5
 alias5    | dest6
 alias5    | dest7
 alias6    | dest3
 alias6    | dest7
 alias6    | dest4
 alias6    | dest5
(14 rows)


HTH,

Joe




Re: [SQL] Could someone help me fix my array_list function?

2003-01-20 Thread Michiel Lange
Would the same work for pg_user and pg_group?

It would be handy at times to easily check whether or not someone is a member 
of a group...
and since pg_group stores the user numbers, one might need to do a 
few lookups:
would it be hard to put such a thing in a view, or is that not smart 
thinking here?

I have to admit, arrays are still a bit hazy to me, in how to use them 
properly in databases...
so I stick to the older solutions...

Michiel

At 15:27 20-1-2003 -0800, Joe Conway wrote:
[snipped: Joe's full reply, quoted verbatim above]






Re: [SQL] Could someone help me fix my array_list function?

2003-01-20 Thread Joe Conway
Michiel Lange wrote:

Would the same work for pg_user and pg_group?



See:
http://www.brasileiro.net:8080/postgres/cookbook/view-one-recipe.adp?recipe_id=11378

With these groups:
regression=# select * from pg_group;
 groname | grosysid |    grolist
---------+----------+---------------
 grp1    |      100 | {100,101,102}
 grp2    |      101 | {100,102}
(2 rows)

Output looks like:
regression=# select * from groupview;
 grosysid | groname | usesysid | usename
----------+---------+----------+---------
      100 | grp1    |      100 | user1
      100 | grp1    |      101 | user2
      100 | grp1    |      102 | user3
      101 | grp2    |      100 | user1
      101 | grp2    |      102 | user3
(5 rows)

Joe






Re: [SQL] performance question

2003-01-20 Thread Vernon Wu
20/01/2003 12:38:20 PM, Tom Lane <[EMAIL PROTECTED]> wrote:

>"Moritz Lennert" <[EMAIL PROTECTED]> writes:
>> One question I asked myself is whether the use of char(2) is the best
>> option.
>
>It may not be, particularly if you are running in a non-English locale.
>Comparisons will invoke the standard library routine strcoll(), which is
>depressingly slow in some locales, at least on some platforms.
>


In the case of selection operations involving multiple tables joined by
userid, where the data can be in various languages, is that a potential
performance trouble spot? The number of tables is anywhere from ten to
twenty.



>> The column (and most others in the table) contains codes that
>> designate different characteristics (for ex. in a column 'sex' one would
>> find '1'=male, '2'=female).
>
>If you are using numeric codes you are surely better off storing them
>as integer or perhaps smallint (integer is less likely to create type
>mismatch headaches, though).  In the above example you are getting the
>worst of both worlds: there's no mnemonic value in your data entries,
>*and* you're paying for textual comparisons.
>
>   regards, tom lane



