Re: [GENERAL] Counting elements of an array

2001-03-02 Thread Renaud Thonnart

Tom Lane wrote:

 Renaud Thonnart [EMAIL PROTECTED] writes:
  I would like to know how I can get the number of elements of an array.

 There is a function that returns an array's dimensions as a text string:

 regression=# select array_dims( '{1,2,3}'::int[] );
  array_dims
 ------------
  [1:3]
 (1 row)

 regression=# select array_dims( '{{1,2,3},{4,5,6}}'::int[] );
  array_dims
 ------------
  [1:2][1:3]
 (1 row)

 regards, tom lane

Thanks very much Tom,  you have helped me a lot.
Renaud THONNART




[GENERAL] Triggered data change violation

2001-03-02 Thread Peter Vazsonyi

Hello!

I have a table with an update trigger (it changes the value of the
'last_modify' field to the current timestamp).
Sometimes when I try to delete rows in this table (before the delete, there
are some inserts after a 'begin') I get this change violation error. I don't
know why.
Can somebody help me?

Thanks
--
 nek.

(postgresql 7.0.2)




Re: [GENERAL] serial properties

2001-03-02 Thread Eric G. Miller

On Thu, Mar 01, 2001 at 04:49:25PM -0300, Martin A. Marques wrote:
 Hi, I would like to know which are the properties of the SERIAL type.
 Is a column defined SERIAL a primary key?
 
 Saludos... :-)

create table foo (
id serial primary key,
data text not null check(char_length(data) > 0)
);

Note: SERIAL isn't really a "type".  The data type of "id" is an integer
(oid I think??), and some hooks to use a SEQUENCE for the default value
of "id" are created (as is the SEQUENCE).  If you drop the table, you
also need to drop the sequence that "SERIAL" creates.
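
Roughly speaking, the "id serial primary key" column above behaves as if you
had written something like the following by hand (a sketch only; the names
PostgreSQL generates may be truncated for long table/column names):

create sequence foo_id_seq;
create table foo (
    id integer primary key default nextval('foo_id_seq'),
    data text not null check(char_length(data) > 0)
);

-- dropping the table does not drop the sequence:
drop table foo;
drop sequence foo_id_seq;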

IMHO, automatically incremented number fields used for primary keys are
both a blessing and a curse.  It is almost always better to use some
other data that *means something* for a primary key.  If there's no
possible candidate key, *then* maybe an autonumber key is appropriate.

-- 
Eric G. Miller [EMAIL PROTECTED]



Re: [GENERAL] showing rules/triggers with psql

2001-03-02 Thread Richard Huxton

From: "Metzidis, Anthony" [EMAIL PROTECTED]


 hey,
 kind of a simple question. How can you show the rules/triggers on a
database
 using psql. Even better: how can you show all
rules/triggers/constrains/etc
 that pertain to a given table?

 as always...thanks.

 --tony

The 7.1 docs have some useful info on system tables, but here's something
from my notes. The RI_Constraint... triggers are automatically generated.

You might want to make this into a view if you use it frequently.


richardh= select cl.relname,tr.tgname as triggername, tr.tgenabled,
fn.proname as func_name from pg_trigger as tr, pg_class as cl,
pg_proc as fn where tr.tgrelid=cl.oid and tr.tgfoid=fn.oid and cl.relname ~
'^';

  relname  |         triggername         | tgenabled |      func_name
-----------+-----------------------------+-----------+----------------------
 pg_shadow | pg_sync_pg_pwd              | t         | update_pg_pwd
 company   | RI_ConstraintTrigger_121307 | t         | RI_FKey_check_ins
 e_country | RI_ConstraintTrigger_121309 | t         | RI_FKey_noaction_del
 e_country | RI_ConstraintTrigger_121311 | t         | RI_FKey_noaction_upd
 person    | RI_ConstraintTrigger_121389 | t         | RI_FKey_check_ins
 company   | RI_ConstraintTrigger_121391 | t         | RI_FKey_noaction_del
 company   | RI_ConstraintTrigger_121393 | t         | RI_FKey_noaction_upd
 foo       | foo_c_lctrig                | t         | foo_c_lcupdate
(8 rows)
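
As a minimal sketch of the view mentioned above (the view name table_triggers
is made up; it is the same query, just stored so you can filter it per table):

create view table_triggers as
    select cl.relname, tr.tgname as triggername, tr.tgenabled,
           fn.proname as func_name
    from pg_trigger tr, pg_class cl, pg_proc fn
    where tr.tgrelid = cl.oid and tr.tgfoid = fn.oid;

select * from table_triggers where relname = 'company';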

- Richard Huxton




[GENERAL] how to examin the database structure with an sql command

2001-03-02 Thread Hans Jeuken


I am busy writing a table editor in Perl, using the module Pg.pm as an
interface to PostgreSQL. When opening a table in that editor I would like
to find out how the tables in the database are defined.

So far I have considered doing a pg_dump to a file and then examining the
content, but that is rather inelegant.

Then I considered the query "SELECT * FROM pga_layout". The problem here
is that the table pga_layout is simply not defined when the database has no
tuples yet.

Who is able to give me the big hint here?





Re: [GENERAL] Re: vacuum and backup

2001-03-02 Thread Ian Harding

Lincoln Yeoh wrote:

 At 01:52 PM 27-02-2001 -0500, Mitch Vincent wrote:
 
 I agree 100% that a shell script is probably the best way to go.. All you're
 doing is running a few utilities, there is hardly any processing on the part
 of the script that calls the utilities so there isn't much of a need for
 anything more than what sh (or what ever you like) has to offer..
 

 How would I automate backup if I use password authentication?

 Cheerio,
 Link.

Expect!





Re: [GENERAL] Is this a bug?

2001-03-02 Thread Ian Harding

Limin Liu wrote:

 Hi,

 Can anyone please tell me whether this is a bug or not?

 I used the following SQL to create my table and view with some data.

 ===
 create table tbl (no int, cnt int);
 create view tbl_view as
   select no, count(cnt) as count
   from tbl
   group by no
   having count(cnt) > 2;
 insert into tbl values(1, 1);
 insert into tbl values(1, 2);
 insert into tbl values(1, 3);
 insert into tbl values(1, 4);
 insert into tbl values(2, 1);
 insert into tbl values(2, 2);
 insert into tbl values(2, 3);
 ===

 Here is some SQL I issued; the first one looks fine to me, but the second
 one is weird.

 ===
 db= select * from tbl_view;
  no | count
 ----+-------
   1 |     4
   2 |     3
 (2 rows)

 db= select * from tbl_view where count > 5;
  no | count
 ----+-------
   1 |     4
   2 |     3
 (2 rows)

 db2=
 ===

 Thanx

 LM.Liu

I get ERROR: Rewrite: Aggregate of views only allowed on single tables
for now

I think it may have something to do with the reserved sql keyword
'count' being used.

Ian






[GENERAL] Re: Find out when a table was last changed ?

2001-03-02 Thread Jonathan Ellis

Better yet, define a trigger so you and/or other programmers don't have
to worry about including that field in every update.
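A minimal sketch of such a trigger in PL/pgSQL (table and column names are
hypothetical; it assumes a timestamp column last_update_date as suggested
below, and that plpgsql has been installed with createlang):

create function stamp_last_update() returns opaque as '
begin
    new.last_update_date := now();
    return new;
end;
' language 'plpgsql';

create trigger stamp_last_update before insert or update on mytable
    for each row execute procedure stamp_last_update();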

-Jonathan

- Original Message - 
From: Anand Raman [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, February 27, 2001 8:28 PM
Subject: Re: Find out when a table was last changed ?


 No, you need to maintain this information in another column, such as
 last_update_date, which your application should update every time you
 perform any operation on it.
 
 Hope this helps
 Anand
 
 On Wed, Feb 28, 2001 at 04:24:52AM +0100, Markus Fischer wrote:
 Hi,
  Is there some kind of command or internal PostgreSql
 column to find out when a table was last changed (INSERT, UPDATE
 or DELETE) on a 7.0.2 Server ?
 
 thanks,
  Markus
 
 -- 
 Markus Fischer,  http://josefine.ben.tuwien.ac.at/~mfischer/
 EMail: [EMAIL PROTECTED]
 PGP Public  Key: http://josefine.ben.tuwien.ac.at/~mfischer/C2272BD0.asc
 PGP Fingerprint: D3B0 DD4F E12B F911 3CE1  C2B5 D674 B445 C227 2BD0
 




[GENERAL] platforms

2001-03-02 Thread WG4- Verma, Ashish

Hi,

I have an urgent question to ask.

Can PostgreSQL run on Windows 95, 98, NT, 2000 platforms? If not, what is an
alternative to using PostgreSQL in Windows?

A quick response will be appreciated.

Thanks
Ash




[GENERAL] If this isn't too crassly commercial ... looking for tech editors

2001-03-02 Thread Patricia . Barnes

I'm assured that this message won't be greeted with universal condemnation from the 
Postgres community - I'm an editor at Sams Publishing who is looking for a technical 
editor for a PostgreSQL developer's handbook. Interested? Send resume to 
[EMAIL PROTECTED] Thanks. 

Patricia G. Barnes
Acquisitions Editor
Sams Publishing
201 W. 103rd Street
Indianapolis, Indiana 46290-1097
(317) 581-3627
[EMAIL PROTECTED]






Re: [GENERAL] Re: Help with pq++

2001-03-02 Thread Aristide Aragon

On Tue, Feb 27, 2001 at 10:09:49AM +0100, Jose Manuel Lorenzo Lopez wrote:
 Aristide Aragon schrieb:
  On Mon, Feb 26, 2001 at 01:53:18PM -0500, Belcher, Jim wrote:

Hello

OK, I got the problem sorted out. The error message I received was "Error
Message: FATAL 1:  Database "mydb" does not exist in the system catalog."
That makes sense: the machine running this program doesn't have a mydb
database. However, I had set the environment variable PGHOST to some other
computer (and I just tested that if I run the program there it returns OK).
While I won't be running this program over a network, I would like to know
what's going on.
How would I make my program access the DB on another host? The documentation
said (or at least that's what I understood) to set the PGHOST variable to the
address of that other machine.
I'll be running my program web-based; if I wanted it to connect to some other
host, how would I do it?

Thanks

Aristide

 Hello Aristide,
 
 I guess it's not your program, which seems okay, but your user.
 Please check if the user which is supposed to execute the program
 is added to the DB. If not do a createuser for the mentioned user.
 
 To be more sure add the following lines in the else-branch of your 
 source-code:
 
 // Get a detailed message about the error that occurred
 fprintf(stderr, "Error Message: %s", db.ErrorMessage());
 
 or check the postmaster log after executing the program.
 
 Hope this will help you and give you confidence back for programming
 apps with postgresql! :)
 
 Best Regards / Un saludo / Mit freundlichen Grüßen / Cordiali Saluti
 
 José Manuel Lorenzo López
 
 -- 
 **
 ** José Manuel Lorenzo López**
 **  **
 ** ICA Informationssysteme Consulting  Anwendungsgesellschaft mbH  **
 ** Dept. SAP Basis R/3  VBue**
 **  **
 ** fax  : (+49511) 938-2270 **
 ** e-mail to: [EMAIL PROTECTED]**
 **
 




[GENERAL] trouble with some characters

2001-03-02 Thread Steff

I'm using PostgreSQL 7.0.3 under Mandrake and have the following problem:

In columns with varchar or text in them, some characters are not recognized,
such as è or à for example. I got ) instead of è, for instance.
How can I get them recognized?

Thanks in advance

Stephane






Re: [GENERAL] Migrate from MS SQL 6.5 to postgres??

2001-03-02 Thread Gordon A. Runkle

In article [EMAIL PROTECTED], "Unknown"
[EMAIL PROTECTED] wrote:

 I have an idea that might help. I found ODBC to be very slow for
 importing data, so I wrote a program in C that reads in dump files of SQL
 text on the Linux server itself, e.g. the first line is a create table and
 the next lines are all the inserts. This is very fast: 80MB of data in
 about 15 minutes. The only problem is that the text files need to be
 formatted a bit specially. If you can write a program in, say, VB to create
 the text files (one per table) it could work. If you are interested I could
 forward my C program and the FoxPro prg that creates the text files, which
 you could convert to VB.

Why make it so difficult?  SQL Server provides a perfectly
usable bulk copy utility (bcp.exe), which will haul the data
out ready-to-go. 

H:\tmp> bcp dbname..tabname out filename.del -c -t "|" -r "\n" \
-S server -U user -P password

This will pull the data out, with '|' as the field delimiter
and a newline as a record separator.

Now you can COPY the data in using '|' as the delimiter.
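
For example (the file path is hypothetical; with plain COPY the file must be
readable by the backend on the server, or use psql's \copy to read it
client-side):

COPY tabname FROM '/tmp/filename.del' USING DELIMITERS '|';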

If you have BLOB data types, those tables will have to
be handled in another way, of course.

Gordon.
-- 
It doesn't get any easier, you just go faster.
   -- Greg LeMond




RE: [GENERAL] Slowdown problem when writing 1.7million records

2001-03-02 Thread Stephen Livesey



 No, it's not.  Do you have any triggers or rules on this table that
 you haven't shown us?  How about other tables referencing this one
 as foreign keys?  (Probably not, if you're running an identical test
 on MySQL, but I just want to be sure that I'm not missing something.)

I have no triggers or rules.  I have only created 1 table and there are no
foreign keys.


 How exactly are you writing the records?

First I read the data from a 'Powerflex' file and hold this in a record set.
 pfxstmt = pfxconn.createStatement();
 pfxrs = pfxstmt.executeQuery("SELECT * from expafh");

I then perform a loop which writes the data to my 'Postgresql' file as
follows:
  stmt = conn.createStatement();
  while (pfxrs.next()) {
      cmd = "INSERT INTO expafh VALUES ";
      cmd = cmd + "('" + pfxrs.getString(2) + "'," + pfxrs.getString(3) + ",'"
                + pfxrs.getString(4) + "','" + pfxrs.getString(5) + "')";
      stmt.executeUpdate(cmd);
  }
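
One client-independent thing worth checking is whether every INSERT is being
committed individually; batching many rows into a single transaction usually
helps a lot. As plain SQL (row values hypothetical), the stream sent to the
backend would look like:

BEGIN;
INSERT INTO expafh VALUES ('a0001',1,'x','y');
INSERT INTO expafh VALUES ('a0002',2,'x','y');
-- ... several thousand rows per batch ...
COMMIT;

In JDBC this corresponds to conn.setAutoCommit(false) plus an explicit
conn.commit() every few thousand rows.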


 I have a suspicion that the slowdown must be on the client side (perhaps
 some inefficiency in the JDBC code?) but that's only a guess at this
 point.


I have used identical code for all of my testing, the only changes being
which drivers I use to access the data.


Thanks
Stephen Livesey

Legal Disclaimer:
Internet communications are not secure and therefore Exact Abacus does 
not accept legal responsibility for the contents of this message.  Any views
or opinions presented are solely those of the author and do not necessarily
represent those of Exact Abacus unless otherwise specifically stated.




Re: [GENERAL] postgresql upgrades - where's the dump go?

2001-03-02 Thread Fabrice Scemama

Why don't you read the INSTALL file, included in
any pg distribution ?

On Tue, 27 Feb 2001, will trillich wrote:

 when upgrading from version X to version Y of postgres, the
 install appears to dump the data and schema, and then suck it
 back in with the new binaries.
 
 [i just tried going from 6.5.3 to 7.0.3potato but the plpgsql.so
 module wasn't installed and data types were conflicting, so the
 new schema is a bit scrambled -- but if i can get to the
 resurrection file, i'll be fine...]
 
 so where's the dump file that gets created? mine had some snags
 and i can fix it if i can edit the dump text. (i presume it's a
 series of SQL commands...)
 
 anybody know where that file is?
 
 -- 
 [EMAIL PROTECTED]





[GENERAL] Want to learn the Postgresql database system

2001-03-02 Thread Jonny Deng

Hi,

I want to learn the PostgreSQL database system as part of my studies. What I
need to know is whether there is documentation that explains the source code.

Can anyone help me? I am really eager to understand the underlying
PostgreSQL database system.

Any help would be appreciated.

Thanks
Jonny






[GENERAL] PG_PWD and PG_SHADOW

2001-03-02 Thread Anthony Metzidis

Hi,
When I run CREATE USER testuser WITH PASSWORD 'mypassword';
I see an entry in PG_PWD with the password 'mypassword' in plaintext.

In my pg_hba.conf I have all hosts using 'password' authentication with no
file argument.  Is there any way to keep postgres from saving the passwords
in PG_PWD?  I thought that passwords were to be saved in PG_SHADOW.  What is
PG_SHADOW for anyway?

If you have an answer, can you please send it to me via email?

Thanks.

--tony






Re: [GENERAL] store procedure in pl/pgsql

2001-03-02 Thread Raymond Chui

Juan Ramón Cortabitarte wrote:

 Hello,

 I'm trying to write a stored procedure in PL/pgSQL but the SQL server tells
 me:

 Error: ERROR:  Unrecognized language specified in a CREATE FUNCTION:
 'plpgsql'.  Recognized languages are sql, C, internal and the created
 procedural languages.

 I'm using Red Hat Linux 7.0 and PostgreSQL 7.0.2.
 I executed:

 #createlang plpgsql dbhtc

 createlang: missing required argument PGLIB directory

createlang --username=postgres --dbname=yourdb --pglib=/usr/lib/pgsql plpgsql
yourdb

If you don't see plpgsql.so in the PGLIB directory, then you had better
re-install your PostgreSQL.











Re: [GENERAL] Re: SERIAL values

2001-03-02 Thread Michael Fork

This is an incorrect statement (as pointed out in the FAQ)

4.16.3) Don't currval() and nextval() lead to a race condition with other
users?
No. This is handled by the backends.

More info:
http://www.postgresql.org/users-lounge/docs/7.0/user/sql-createsequence.htm

When calling currval, you receive the last value used by *your backend*
(regardless of what others have done).
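
In other words, a pattern like the following is safe even with concurrent
sessions (table and sequence names hypothetical; the sequence behind a SERIAL
column is typically named tablename_colname_seq):

INSERT INTO mytable (data) VALUES ('something');
SELECT currval('mytable_id_seq');  -- the id just assigned by *this* backend's INSERT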

Michael Fork - CCNA - MCP - A+
Network Support - Toledo Internet Access - Toledo Ohio

On Fri, 2 Mar 2001, Gregory Wood wrote:

 SELECT currval('Table_SerialField_SEQ');
 
 Note: This selects the last value of the sequence defined by your SERIAL
 sequence, not necessarily the value in your INSERT. For example:
 
 1. User A INSERTs.
 2. User B INSERTs.
 3. User A performs SELECT currval, which returns the value for User B's
 INSERT.
 
 If this could be a potential problem, I recommend performing a:
 
 SELECT nextval('Table_SerialField_SEQ');
 
 *before* you perform the INSERT, and explicitly using that value.
 
 Greg
 
 - Original Message -
 From: "Kostis Mentzelos" [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Thursday, March 01, 2001 5:53 PM
 Subject: SERIAL values
 
 
  Hi all,
 
  how can I get the current values from SERIAL types after an INSERT using
 
  libpq?
 
 
  Thanks,
  kostis.
 
 
 
 
 





Re: [GENERAL] Re: Want to learn the Postgresql database system

2001-03-02 Thread Bruce Momjian

 Lots of documentation here :
 
 http://www.postgresql.org/devel-corner/index.html

I have just added a section on MVCC.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026




Re: [GENERAL] Convert to upper

2001-03-02 Thread Peter Schindler

Matthias,

the easiest way is to use CHECK constraints. see example below.

Servus,
Peter

Matthias Teege wrote:
 is there any way to limit values to upper case strings?
 Something like:
 
 name char(15) DEFAULT (upper(name))
 
 or must I use triggers?

test=# create table bla(bb char(10) CHECK (bb =UPPER(bb)));
CREATE
test=# \d bla
      Table "bla"
 Attribute |     Type      | Modifier
-----------+---------------+----------
 bb        | character(10) |
Constraint: ((bb)::text = upper((bb)::text))

test=# insert into bla values ('OTTO');
INSERT 381409 1
test=# insert into bla values ('otto');
ERROR:  ExecAppend: rejected due to CHECK constraint bla_bb
ERROR:  ExecAppend: rejected due to CHECK constraint bla_bb
test=# select * from bla;
     bb
------------
 OTTO
(1 row)




Re: [GENERAL] Perl DBI/DBD::Pg confusion with finish

2001-03-02 Thread David Lynn


 What are the consequences of not calling ->finish()?  I have several apps
 using DBD::Pg, and I don't think I've used it at all...

There don't appear to be any fatal consequences - all queries appear to
work fine.  However, certain things such as the $sth->{'Active'}
attribute cannot be relied upon.

I guess we will just write this off as another "feature" to ignore! 
Thanks for the input, everybody.

--David




Re: [GENERAL] Perl DBI/DBD::Pg confusion with finish

2001-03-02 Thread David Wheeler

On Fri, 2 Mar 2001, John Madden wrote:

 What are the consequences of not calling ->finish()?  I have several apps
 using DBD::Pg, and I don't think I've used it at all...

It just means that the statement handle is marked finished, and if you use
it again, it won't issue a warning saying that it's not.

HTH,

David





[GENERAL] Re: Re: Convert to upper

2001-03-02 Thread Mitch Vincent

Oh, I misunderstood.. I read his post that he wanted anything inserted to be
converted to upper case.. If you're just looking to throw an error if it's
not upper case, the check constraint is the way to go..

My apologies, I'll try and read more carefully..

-Mitch

- Original Message -
From: "Rod Taylor" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, March 02, 2001 11:05 AM
Subject: Re: Re: Convert to upper


 check

 create table uppercase (
   name char(15) check (name ~ '[A-Z]')
 );

 --
 Rod Taylor

 There are always four sides to every story: your side, their side, the
 truth, and what really happened.
 - Original Message -
 From: "Mitch Vincent" [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Friday, March 02, 2001 10:52 AM
 Subject: [GENERAL] Re: Convert to upper


   is there any way to limit values to upper case strings?
   Somthing like:
  
   name char(15) DEFAULT (upper(name))
  
   or must I use triggers?
 
  I'd say a trigger is your best bet.
 
  -Mitch
 
 
 








RE: [GENERAL] Convert to upper

2001-03-02 Thread Trewern, Ben

It may be better to use a trigger. You don't then get the error message (from
the constraint); the record is just updated with the uppercase version of what
was inserted.
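
A minimal sketch of such a trigger in PL/pgSQL (it assumes the "uppercase"
table with a "name" column from the earlier example, and that plpgsql has been
installed with createlang):

create function force_upper() returns opaque as '
begin
    new.name := upper(new.name);
    return new;
end;
' language 'plpgsql';

create trigger uppercase_name before insert or update on uppercase
    for each row execute procedure force_upper();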

Regards


Ben 


 -Original Message-
 From: Peter Schindler [mailto:[EMAIL PROTECTED]]
 Sent: 02 March 2001 16:16
 To: Matthias Teege
 Cc: [EMAIL PROTECTED]
 Subject: Re: [GENERAL] Convert to upper
 
 
 Matthias,
 
 the easiest way is to use CHECK constraints. see example below.
 
 Servus,
 Peter
 
 Matthias Teege wrote:
  is there any way to limit values to upper case strings?
  Somthing like:
  
  name char(15) DEFAULT (upper(name))
  
  or must I use triggers?
 
 test=# create table bla(bb char(10) CHECK (bb =UPPER(bb)));
 CREATE
 test=# \d bla
 Table bla
 Attribute | Type | Modifier
 ---+---+--
 bb | character(10) |
 Constraint: ((bb)::text = upper((bb)::text))
 
 test=# insert into bla values ('OTTO');
 INSERT 381409 1
 test=# insert into bla values ('otto');
 ERROR: ExecAppend: rejected due to CHECK constraint bla_bb
 ERROR: ExecAppend: rejected due to CHECK constraint bla_bb
 test=# select * from bla;
 bb
 
 OTTO
 (1 row)
 
 





RE: [GENERAL] Does PostgreSQL support Constant Expression (Alias Name)?

2001-03-02 Thread Trewern, Ben

Try SELECT fname as "First Name", lname as "Last Name" FROM aTable;


Regards


Ben


 -Original Message-
 From: Raymond Chui [mailto:[EMAIL PROTECTED]]
 Sent: 02 March 2001 14:30
 To: [EMAIL PROTECTED]
 Subject: [GENERAL] Does PostgreSQL support Constant Expression (Alias
 Name)?


 If a table has columns fname, lname.
 I want to do query like

 SELECT fname "First Name", lname "Last Name" FROM aTable;

 This SELECT statement is fine in Oracle, Sybase and Informix.
 But I get an error message in PostgreSQL

 Because I want the output like

 First Name | Last Name
------------+-----------
 fn         | ln

 instead of the default

 fname | lname
-------+-------
 fn    | ln

 What is the correct way of doing this in PostgreSQL?
 Thank you very much in advance!



 --Raymond

 





Re: [GENERAL] Does PostgreSQL support Constant Expression (Alias Name)?

2001-03-02 Thread Oliver Elphick

Raymond Chui wrote:
  If  a table has columns fname, lname.
  I want to do query like
  
  SELECT fname "First Name", lname "Last Name" FROM aTable;

SELECT fname AS "First Name", lname AS "Last Name" FROM aTable;

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight  http://www.lfix.co.uk/oliver
PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47  6B 7E 39 CC 56 E4 C1 47
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 "I will lift up mine eyes unto the hills, from whence 
  cometh my help. My help cometh from the LORD, which 
  made heaven and earth."  
   Psalms 121:1,2 






Re: [GENERAL] Convert to upper

2001-03-02 Thread Rod Taylor

Why not just do:

INSERT INTO TABLE (uppercase_value) VALUES (upper('value'));

Leave in the check, and all problems are solved without the overhead of a
trigger. Simple checks like the one shown don't have any noticeable speed
loss. Trigger overhead does, no matter how small the operation it's doing.

--
Rod Taylor

There are always four sides to every story: your side, their side, the
truth, and what really happened.

  - Original Message -
  From: "Trewern, Ben"
  To: [EMAIL PROTECTED]
  Sent: Friday, March 02, 2001 12:35 PM
  Subject: RE: [GENERAL] Convert to upper

  It may be better to use a trigger. You don't then get the error message
  (from the constraint); the record is just updated with the uppercase
  version of what was inserted.

  Regards
  Ben


Re: [GENERAL] Re: Thought on OIDs

2001-03-02 Thread Rod Taylor

I use XID's regularly now for historical purposes (delayed reversion
of entire operations -- handled by an interface of course where
appropriate) but OID's I could certainly live without.  However, PHP
currently returns the OID from pg_getlastoid(), which I use to
select from the table the last PRIMARY KEY entry.  Getting this key
before sometimes isn't an option (triggers handle them sometimes).  If
I could have a pg_getlastprimarykey() function which returns a hash of
name / value pairs of the new key without using the OID it would be
ideal.

--
Rod Taylor

There are always four sides to every story: your side, their side, the
truth, and what really happened.
- Original Message -
From: "Peter Eisentraut" [EMAIL PROTECTED]
To: "Rod Taylor" [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Friday, March 02, 2001 11:31 AM
Subject: Re: [GENERAL] Re: Thought on OIDs


 Rod Taylor writes:

  Someones bound to hit it in a year or 2 as Postgres is getting
pretty
  good for large projects as well as the small, especially with
7.1's
  speed enhancements.  Hopefully 7.2 will create cycling OIDs and
XIDs.
  Then less problems in 'unlimited' extendability.

 The easiest approach for OIDs will probably be making them optional
in the
 first place.  For the vast majority of users, the OIDs are just
wasting
 space.

 The cycling XID idea is based on the assertion that eventually all
 transactions will be closed, at which time a record is either known
 committed or known dead so that the XID can be recycled.  For OIDs,
this
 is not practical.  And if you wanted OIDs that automatically fill in
the
 holes, that's probably not realistic.

 --
 Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/








Re: [GENERAL] Re: serial properties

2001-03-02 Thread adb

I agree that they are very handy.  They become a major pain in
the butt when you start doing replication between servers.
For instance, if you fail over to a standby server and you
forget to update its sequence first, merging data later
becomes a nightmare.  I'd like to have int8 sequences and
basically give each server its own block of numbers to work
with.
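
As a stopgap with int4 sequences, each server can at least be given its own
non-overlapping range (a sketch only; the sequence name and the ranges are
arbitrary and have to be sized to your expected volume):

-- on server A
CREATE SEQUENCE foo_id_seq MINVALUE 1 MAXVALUE 999999999 START 1;
-- on server B
CREATE SEQUENCE foo_id_seq MINVALUE 1000000000 MAXVALUE 1999999999 START 1000000000;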

Alex.

On Fri, 2 Mar 2001, Gregory Wood wrote:

  IMHO, automatically incremented number fields used for primary keys are
  both a blessing and a curse.  It is almost always better to use some
  other data that *means something* for a primary key.  If there's no
  possible candidate key, *then* maybe an autonumber key is appropriate.
 
 Just wanted to say, I disagree strongly here (also MHO). I see quite a few
 benefits and very few drawbacks to using an auto-incrementing field for a
 primary key. In fact, the only drawback I can think of would be that it
 takes up a little more space per record to add a field used solely to
 uniquely identify that record. I can think of several drawbacks to a
 non-auto-incrementing primary key though:
 
 1. Less efficient joins. Comparing integers is about as easy as it gets...
 text, char, and varchar require string comparisons, while floating point
 numbers are not good as keys because of rounding errors.
 2. Discourages value changes. A value that "means something" might need to
 be modified in some manner. Sure you can define foreign keys with CASCADEs,
 but if you are using an auto-increment, you don't need to!
 3. No value is guaranteed to be unique (well, when doing an INSERT or
 UPDATE... it only gets into the database if it *is* unique) unless all
 queries go through a critical section. To the best of my knowledge, the only
 way to do this inside the database is to use nextval either implicitly or
 explicitly.
 
 The only time I don't use auto-incrementing fields is when I have a
 many-to-many join table with two foreign keys that are both
 auto-incrementing fields, in which case the primary key is a combination of
 those two fields. Other than a bit of extra space, I don't see any reason
 not to.
 
 Greg
 
 
 





Re: [GENERAL] Re: Thought on OIDs

2001-03-02 Thread adb

I personally would like to see 8byte OIDs or at least int8 sequences,  I'm
a little worried about the pain of managing a potential rollover when I'm 
using sequences as a replication key between servers.

Alex.

On Fri, 2 Mar 2001, Peter Eisentraut wrote:

 Rod Taylor writes:
 
  Someones bound to hit it in a year or 2 as Postgres is getting pretty
  good for large projects as well as the small, especially with 7.1's
  speed enhancements.  Hopefully 7.2 will create cycling OIDs and XIDs.
  Then less problems in 'unlimited' extendability.
 
 The easiest approach for OIDs will probably be making them optional in the
 first place.  For the vast majority of users, the OIDs are just wasting
 space.
 
 The cycling XID idea is based on the assertion that eventually all
 transactions will be closed, at which time a record is either known
 committed or known dead so that the XID can be recycled.  For OIDs, this
 is not practical.  And if you wanted OIDs that automatically fill in the
 holes, that's probably not realistic.
 
 -- 
 Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/
 
 
 





RE: [GENERAL] Slowdown problem when writing 1.7million records

2001-03-02 Thread Creager, Robert S


I just joined this list, so pardon if this has been suggested.

Have you tried 'COPY expafh FROM stdin', rather than inserting each record?
I'm managing a 2.5 million record import, creating a btree index on two
columns, and then vacuuming the db in 36 minutes (on an Ultra 5 - similar to
a AMD K6-2 500).  The data is being read from a 600Mb file.

I'm also using 7.1beta5 with the -F flag on the backend (prevents db server
from flushing after each transaction - can be dangerous, but the server is
faster).  I've attached a Perl script I use - the key being the putline
command.

Note that when using COPY, default values, sequences, etc. are not used.  If
you have a SERIAL field, you have to put in the incrementing values
yourself, and then use 'setval' to get things correct again.
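
For example, after loading a table whose SERIAL column is called id (table
and sequence names hypothetical):

SELECT setval('mytable_id_seq', max(id)) FROM mytable;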

I apologize for the Perl script - it's not commented.  If you have trouble
understanding it, let me know and I'll spruce it up.

Robert Creager
Senior Software Engineer
Client Server Library
303.673.2365 V
303.661.5379 F
888.912.4458 P
StorageTek
INFORMATION made POWERFUL



 -Original Message-
 From: Stephen Livesey [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, February 28, 2001 2:20 AM
 To: Tom Lane
 Cc: [EMAIL PROTECTED]
 Subject: RE: [GENERAL] Slowdown problem when writing 
 1.7million records
 
 
 
 
  No, it's not.  Do you have any triggers or rules on this table that
  you haven't shown us?  How about other tables referencing this one
  as foreign keys?  (Probably not, if you're running an identical test
  on MySQL, but I just want to be sure that I'm not missing 
 something.)
 
 I have no triggers or rules.  I have only created 1 table and 
 their are no
 foreign keys.
 
 
  How exactly are you writing the records?
 
 First I read the data from a 'Powerflex' file and hold this 
 in a record set.
  pfxstmt = pfxconn.createStatement();
  pfxrs = pfxstmt.executeQuery("SELECT * from expafh");
 
 I then perform a loop which writes the data to my 'Postgresql' file as
 follows:
   stmt = conn.createStatement();
   while (pfxrs.next()) {
 cmd = "INSERT INTO expafh VALUES ";
 cmd = cmd +
 "('"+pfxrs.getString(2)+"',"+pfxrs.getString(3)+",'"+pfxrs.get
 String(4)+"','
 "+pfxrs.getString(5)+"')";
 stmt.executeUpdate(cmd);
 }
 
 
  I have a suspicion that the slowdown must be on the client 
 side (perhaps
  some inefficiency in the JDBC code?) but that's only a guess at this
  point.
 
 
 I have used identical code for all of my testing, the only 
 changes being
 which drivers I use to access the data.
 
 
 Thanks
 Stephen Livesey
 
 Legal Disclaimer:
 Internet communications are not secure and therefore Exact 
 Abacus does 
 not accept legal responsibility for the contents of this 
 message.  Any views
 or opinions presented are solely those of the author and do 
 not necessarily
 represent those of Exact Abacus unless otherwise specifically stated.
 
 


 tassivSeed.pl





Re: [GENERAL] Re: serial properties

2001-03-02 Thread Rod Taylor

Currently there's a method that an individual backend can cache > 1
number from a sequence.  Would it be practical to have a master
control the sequences and let the replicated backends (different
networks potentially) cache a 'slew' of numbers for use?  Standard
cache of 1, and inter-server cache of several hundred.  Rules apply as
normal from there -- of course this breaks down when the master goes
down...

--
Rod Taylor

There are always four sides to every story: your side, their side, the
truth, and what really happened.
- Original Message -
From: "adb" [EMAIL PROTECTED]
To: "Gregory Wood" [EMAIL PROTECTED]
Cc: "PostgreSQL-General" [EMAIL PROTECTED]
Sent: Friday, March 02, 2001 2:11 PM
Subject: Re: [GENERAL] Re: serial properties


 I agree that they are very handy.  They become a major pain in
 the butt when you start doing replication between servers.
 For instance if you fail over to a standby server and you
 forget to update it's sequence first, merging data later
 becomes a nightmare.  I'd like to have int8 sequences and
 basically give each server it's own block of numbers to work
 with.

 Alex.

 On Fri, 2 Mar 2001, Gregory Wood wrote:

   IMHO, automatically incremented number fields used for primary
keys are
   both a blessing and a curse.  It is almost always better to use
some
   other data that *means something* for a primary key.  If there's
no
   possible candidate key, *then* maybe an autonumber key is
appropriate.
 
  Just wanted to say, I disagree strongly here (also MHO). I see
quite a few
  benefits and very few drawbacks to using an auto-incrementing
field for a
  primary key. In fact, the only drawback I can think of would be
that it
  takes up a little more space per record to add a field used solely
to
  uniquely identify that record. I can think of several drawbacks to
a
  non-auto-incrementing primary key though:
 
  1. Less efficient joins. Comparing integers is about as easy as it
gets...
  text, char, and varchar require string comparisons, while floating
point
  numbers are not good as keys because of rounding errors.
  2. Discourages value changes. A value that "means something" might
need to
  be modified in some manner. Sure you can define foreign keys with
CASCADEs,
  but if you are using an auto-increment, you don't need to!
  3. No value is guaranteed to be unique (well, when doing an INSERT
or
  UPDATE... it only gets into the database if it *is* unique) unless
all
  queries go through a critical section. To the best of my
knowledge, the only
  way to do this inside the database is to use nextval either
implicitly or
  explicitly.
 
  The only time I don't use auto-incrementing fields is when I have
a
  many-to-many join table with two foreign keys that are both
  auto-incrementing fields, in which case the primary key is a
combination of
  those two fields. Other than a bit of extra space, I don't see any
reason
  not to.
 
  Greg
 
 
 








Re: [GENERAL] Re: pgsql for Python

2001-03-02 Thread Shaw Terwilliger

Joel Burton wrote:
 PyGreSQL is more commonly used, and has (IMHO) a simpler, more dict-like
 interface, but isn't (AFAIK) thread-safe, nor DB API compliant.

I wrote a small web application server in python (www.lloop.com) using
Python and PyGreSQL.  PyGreSQL was (mostly) thread safe, in practice, when
I started using it.  I did a little bit of concurrency testing, found a few
uninitialized variables in the PyGreSQL module that caused crashes, but
these were easily fixed.  If by thread-safe you mean a connection can be
safely shared between multiple concurrent threads, I don't know what to say
(never tried that).  Pooling PyGreSQL connections and checking them in and
out to Python threads seemed to work well.

I haven't done much Python work lately, so I haven't tried PoPy (it sounds
nice).

-- 
Shaw Terwilliger [EMAIL PROTECTED]
SourceGear Corporation
217.356.0105 x 641






Re: [GENERAL] Re: serial properties

2001-03-02 Thread adb

The caching of sequence numbers is one of the reasons you can
end up wrapping around.  A while back I was working with Sybase,
and our DBA had set the precache for some identity columns (same as
Postgres SERIAL) pretty high.  We ran into a Sybase bug that caused
the server to reboot a large number of times in a few weeks, and the next
thing we knew we had huge gaps in the sequence and were getting pretty
close to dealing with wraparound.

If the sequence could be int8, I'd just allocate a couple billion for each
server and not worry about it.  It would be handy to have some master
control of handing out blocks of sequence numbers.  That's essentially
what we're building at my current job but it would be great to not have
to worry about it.

I'm guessing that the 7.2 release if it indeed supports more than just
basic one way replication will have to do something similar.

Alex.

On Fri, 2 Mar 2001, Rod Taylor wrote:

 Currently there's a method that an individual backend can cache  1
 number from a sequence.  Would it be practical to have a master
 control the sequences and let the replicated backends (different
 networks potentially) cache a 'slew' of numbers for use?  Standard
 cache of 1, and inter-server cache of several hundred.  Rules apply as
 normal from there -- of course this breaks down when the master goes
 down...
 
 --
 Rod Taylor
 
 There are always four sides to every story: your side, their side, the
 truth, and what really happened.
 - Original Message -
 From: "adb" [EMAIL PROTECTED]
 To: "Gregory Wood" [EMAIL PROTECTED]
 Cc: "PostgreSQL-General" [EMAIL PROTECTED]
 Sent: Friday, March 02, 2001 2:11 PM
 Subject: Re: [GENERAL] Re: serial properties
 
 
  I agree that they are very handy.  They become a major pain in
  the butt when you start doing replication between servers.
  For instance if you fail over to a standby server and you
  forget to update it's sequence first, merging data later
  becomes a nightmare.  I'd like to have int8 sequences and
  basically give each server it's own block of numbers to work
  with.
 
  Alex.
 
  On Fri, 2 Mar 2001, Gregory Wood wrote:
 
IMHO, automatically incremented number fields used for primary
 keys are
both a blessing and a curse.  It is almost always better to use
 some
other data that *means something* for a primary key.  If there's
 no
possible candidate key, *then* maybe an autonumber key is
 appropriate.
  
   Just wanted to say, I disagree strongly here (also MHO). I see
 quite a few
   benefits and very few drawbacks to using an auto-incrementing
 field for a
   primary key. In fact, the only drawback I can think of would be
 that it
   takes up a little more space per record to add a field used solely
 to
   uniquely identify that record. I can think of several drawbacks to
 a
   non-auto-incrementing primary key though:
  
   1. Less efficient joins. Comparing integers is about as easy as it
 gets...
   text, char, and varchar require string comparisons, while floating
 point
   numbers are not good as keys because of rounding errors.
   2. Discourages value changes. A value that "means something" might
 need to
   be modified in some manner. Sure you can define foreign keys with
 CASCADEs,
   but if you are using an auto-increment, you don't need to!
   3. No value is guaranteed to be unique (well, when doing an INSERT
 or
   UPDATE... it only gets into the database if it *is* unique) unless
 all
   queries go through a critical section. To the best of my
 knowledge, the only
   way to do this inside the database is to use nextval either
 implicitly or
   explicitly.
  
   The only time I don't use auto-incrementing fields is when I have
 a
   many-to-many join table with two foreign keys that are both
   auto-incrementing fields, in which case the primary key is a
 combination of
   those two fields. Other than a bit of extra space, I don't see any
 reason
   not to.
  
   Greg
  
  
  
 
 
 
 
 
 





[GENERAL] What are your settings for SHMMAX and max files?

2001-03-02 Thread adb

Hi, I'm trying to figure out what some reasonable settings would
be for kernel parameters like shared memory and max open files.
I've read the section of the manual but it doesn't seem to give
any rule of thumb based on number of users or frequency of queries.

With my load testing I definitely bump up against the max open files
problem but I'm not sure about the shared memory.

I'm running linux (redhat 6.1)

If you have a high volume production system, can you post your
parameter settings and anything you tweak at startup?

Thanks,

Alex.





Re: [GENERAL] how to examin the database structure with an sql command

2001-03-02 Thread will trillich

On Tue, Feb 27, 2001 at 05:11:07PM +0100, Hans Jeuken wrote:
 
 I am busy writing a table editor in Perl, using the module Pg.pm as 
 interface to Postgresql. When opening a table in that editor I would like 
 to find out how the tables in the database are defined.
 
 So far I have considered doing a pg_dump to a file and then examine the 
 content, but it is so unelegant.
 
 then I have considered the query "SELECT  * FROM pga_layout". Problem here 
 is that the table pga_layout only not defined when the database has no 
 tuples yet.
 
 Who is able to give me the big hint here?

those pga_* tables are created by PGACCESS, which is a
nearly-ready-for-prime-time xwindows postgresql interface.

two options: peruse the tcl code of pgaccess to see how they do
it, or wait for some guru here to reveal the inner secrets you
seek...
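
if you'd rather go straight to the system catalogs, something along these
lines (the table name 'mytable' is just a placeholder) returns one row per
column with its type and not-null constraint:

SELECT a.attname, t.typname, a.atttypmod, a.attnotnull
FROM pg_class c, pg_attribute a, pg_type t
WHERE c.relname = 'mytable'
  AND a.attrelid = c.oid
  AND a.attnum > 0
  AND a.atttypid = t.oid
ORDER BY a.attnum;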

-- 
It is always hazardous to ask "Why?" in science, but it is often
interesting to do so just the same.
-- Isaac Asimov, 'The Genetic Code'

[EMAIL PROTECTED]
http://groups.yahoo.com/group/newbieDoc -- we need your brain!
http://www.dontUthink.com/ -- your brain needs us!
