Re: [GENERAL] Using rowtype parameter

2006-01-20 Thread Peter Zeltins

ANYELEMENT only matches scalar types.  I don't think we have any support
for accepting an arbitrary row type as a function argument.  There's
been some speculation about allowing RECORD to do that, but it's not
done.


OK, that clears it up. RECORD would actually do nicely. Right now I'll be 
forced to pass varchar/text arrays which can potentially lead to type 
conversion issues.
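For illustration, the array workaround Peter describes looks roughly like this (a sketch with hypothetical function, column, and table names, not code from the thread); every field is flattened to text and must be cast back, which is where the conversion issues come from:

```sql
-- Hypothetical sketch of the text[] workaround.
CREATE OR REPLACE FUNCTION process_row(fields text[]) RETURNS void AS $$
BEGIN
    -- Each element arrives as text and must be cast back explicitly;
    -- a malformed value only fails here, at runtime.
    RAISE NOTICE 'id=%, created=%',
        fields[1]::integer, fields[2]::timestamp;
END;
$$ LANGUAGE plpgsql;

-- The caller flattens the row into an array:
SELECT process_row(ARRAY[t.id::text, t.created::text]) FROM some_table t;
```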


Peter 



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Create User

2006-01-20 Thread DB Subscriptions

Hi,

I have this table:

CREATE TABLE users
(
 userid varchar(100) NOT NULL,
 nama varchar(50) NOT NULL,
 pword varchar(255) NOT NULL,
 groupe varchar(7) NOT NULL,
 rolle int2 NOT NULL DEFAULT 2,
 statux varchar(9) NOT NULL DEFAULT 'Active'::character varying,
 CONSTRAINT users_pkey PRIMARY KEY (userid)
)
WITHOUT OIDS;

I created a trigger to create a user based on the new insert into the 
table as follows:


CREATE OR REPLACE FUNCTION users_insert()
 RETURNS trigger AS
$BODY$
BEGIN
   CREATE USER NEW.userid WITH PASSWORD NEW.pword IN GROUP NEW.groupe;
 
   RETURN new;

END;

$BODY$
 LANGUAGE 'plpgsql' VOLATILE;

Surprisingly, I get this error message:

ERROR:  syntax error at or near $1 at character 14
QUERY:  CREATE USER  $1  WITH PASSWORD  $2  IN GROUP  $3
CONTEXT:  SQL statement in PL/PgSQL function users_insert near line 10

I would appreciate your guidance.
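For context: replies to questions like this generally point out that utility commands inside PL/pgSQL cannot take parameters, which is why the NEW.* references come out as $1, $2, $3 in the error. The usual workaround (a sketch, not taken from this thread) is to build the command as a string and run it with EXECUTE:

```sql
CREATE OR REPLACE FUNCTION users_insert()
  RETURNS trigger AS
$BODY$
BEGIN
   -- CREATE USER is a utility statement, so NEW.* cannot be spliced in
   -- directly; build the command text and EXECUTE it, quoting each piece.
   EXECUTE 'CREATE USER ' || quote_ident(NEW.userid)
        || ' WITH PASSWORD ' || quote_literal(NEW.pword)
        || ' IN GROUP ' || quote_ident(NEW.groupe);
   RETURN NEW;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE;
```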

Cheers.

Chris.



___ 
Yahoo! Photos – NEW, now offering a quality print service from just 8p a photo http://uk.photos.yahoo.com


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[GENERAL] psql: FATAL: password authentication failed for user xxxx

2006-01-20 Thread Steve Taylor
Hi.

I installed PostgreSQL 8.1.2 on Windows XP Pro. I started psql (via the start
menu) and created a user. Then I exited psql using \q. Then I tried to start it
again and got:
psql: FATAL:  password authentication failed for user postgres
and the command window closed.

I don't understand what is going on. Is this some kind of bug? I have searched
the mailing list archives and found dozens upon dozens of people reporting this
error message, but it's either for a different reason or, where it's the same
situation as mine, the message goes unreplied.

The really weird thing is that I did a full uninstall (removed it in Add/Remove
programs AND deleted the directory \Program Files\PostgreSQL) and now psql
won't work AT ALL. I'm just trying to get started. I just want to create ONE
table so I can play around with jdbc.

Why the hell doesn't psql just start up and ask for a username and password? That
makes so much sense that it will never happen.

- Steve

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Bizgres 0.8 vs. 8.1.2

2006-01-20 Thread Hannes Dorbath
Is my impression correct that 8.1.x has almost all features previously 
only available in Bizgres? Besides the Java Loader.. any arguments to go 
with the Bizgres distribution instead?



--
Regards,
Hannes Dorbath

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] create plperlu langage fails

2006-01-20 Thread FERREIRA, William (VALTECH)

I have 2 versions of Perl installed: 5.00 and 5.8.7.
I added the Perl 5.8.7 executable to the PATH environment variable.
During the PostgreSQL configure, the right version of Perl is found.

Do I need to add some libraries (like libperl.so) to the path, or does it find
them using the perl executable?

I'm working on Sun Solaris 2.8.

Regards

-----Original Message-----
From: Michael Fuhr [mailto:[EMAIL PROTECTED]
Sent: Thursday, 19 January 2006 18:30
To: FERREIRA, William (VALTECH)
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] create plperlu langage fails



On Thu, Jan 19, 2006 at 05:22:23PM +0100, FERREIRA, William (VALTECH) wrote:
 ERROR:  could not load library 
 /home/adocv3/postgresql-8.1.2/src/pl/plperl/libplperl.so: ld.so.1: 
 /opt/pgsql/bin/postgres: fatal: relocation error: file 
 /home/adocv3/postgresql-8.1.2/src/pl/plperl/libplperl.so: symbol 
 perl_call_pv: referenced symbol not found

Was your libplperl.so built against an older version of Perl than
your libperl.so was?  In recent versions of Perl perl_call_pv is a
macro that ultimately expands to Perl_call_pv, but in older versions
perl_call_pv was a symbol itself.

How did you obtain libplperl.so -- did you build it yourself or
install it from a package?  Do you have multiple versions of Perl
installed?  What platform are you using?

--
Michael Fuhr


This mail has originated outside your organization,
either from an external partner or the Global Internet.
Keep this in mind if you answer this message.


This e-mail is intended only for the above addressee. It may contain
privileged information. If you are not the addressee you must not copy,
distribute, disclose or use any of the information in it. If you have
received it in error please delete it and immediately notify the sender.
Security Notice: all e-mail, sent to or from this address, may be
accessed by someone other than the recipient, for system management and
security reasons. This access is controlled under Regulation of
Investigatory Powers Act 2000, Lawful Business Practises.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Create User

2006-01-20 Thread Pandurangan R S
I assume this error arises when you do an insert.
Can you post the insert statement that caused this error?

On 1/20/06, DB Subscriptions [EMAIL PROTECTED] wrote:
 Hi,

 I have this table:

 CREATE TABLE users
 (
   userid varchar(100) NOT NULL,
   nama varchar(50) NOT NULL,
   pword varchar(255) NOT NULL,
   groupe varchar(7) NOT NULL,
   rolle int2 NOT NULL DEFAULT 2,
   statux varchar(9) NOT NULL DEFAULT 'Active'::character varying,
   CONSTRAINT users_pkey PRIMARY KEY (userid)
 )
 WITHOUT OIDS;

 I created a trigger to create a user based on the new insert into the
 table as follows:

 CREATE OR REPLACE FUNCTION users_insert()
   RETURNS trigger AS
 $BODY$
 BEGIN
 CREATE USER NEW.userid WITH PASSWORD NEW.pword IN GROUP NEW.groupe;

 RETURN new;
 END;

 $BODY$
   LANGUAGE 'plpgsql' VOLATILE;

 Surprisingly, I get this error message:

 ERROR:  syntax error at or near $1 at character 14
 QUERY:  CREATE USER  $1  WITH PASSWORD  $2  IN GROUP  $3
 CONTEXT:  SQL statement in PL/PgSQL function users_insert near line 10

 I would appreciate your guidance.

 Cheers.

 Chris.





---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Initdb panic: invalid record offset at 0/0 creating template1]

2006-01-20 Thread DANTE ALEXANDRA

Hello,

I work with Agnès Bocchino, who has posted a message on the NOVICE
mailing list about an initdb error.

Maybe we should post this message to the GENERAL mailing list, so I am
trying here, hoping someone knows this error.

Regards,
Alexandra DANTE

-------- Original Message --------
Subject:  [NOVICE] Initdb panic: invalid record offset at 0/0 creating
template1

Date:   Fri, 20 Jan 2006 07:50:32 +0100
From:   Agnes Bocchino [EMAIL PROTECTED]
To:     [EMAIL PROTECTED]



Hi,

we recompiled and built an RPM on IA64, PostgreSQL release 8.1.1,
on RHEL4 update 2.

The installation of the RPM seems to be good:
we install with --nodeps, and we indicate the path for the libraries,
/opt/intel_cc_80/lib.

But when trying to run initdb with the user account pg_811,
it falls into a panic,
and we don't know what could be wrong.
Is there a link with the shared memory of our system?
Thanks for the help.


Here is the error:

[EMAIL PROTECTED] PGS]$ initdb -D /home/PGS/V811
The files belonging to this database system will be owned by user pg_811.
This user must also own the server process.

The database cluster will be initialized with locale en_US.UTF-8.
The default database encoding has accordingly been set to UTF8.

fixing permissions on existing directory /home/PGS/V811 ... ok
creating directory /home/PGS/V811/global ... ok
creating directory /home/PGS/V811/pg_xlog ... ok
creating directory /home/PGS/V811/pg_xlog/archive_status ... ok
creating directory /home/PGS/V811/pg_clog ... ok
creating directory /home/PGS/V811/pg_subtrans ... ok
creating directory /home/PGS/V811/pg_twophase ... ok
creating directory /home/PGS/V811/pg_multixact/members ... ok
creating directory /home/PGS/V811/pg_multixact/offsets ... ok
creating directory /home/PGS/V811/base ... ok
creating directory /home/PGS/V811/base/1 ... ok
creating directory /home/PGS/V811/pg_tblspc ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 1000
creating configuration files ... ok
creating template1 database in /home/PGS/V811/base/1 ... PANIC:  invalid 
record offset at 0/0

child process was terminated by signal 6
initdb: removing contents of data directory /home/PGS/V811


And in debug mode, the last debug messages are:


DEBUG:  TZ Etc/GMT+3 scores 0: at 1074121200 2004-01-14 20:00:00 std 
versus 2004-01-15 00:00:00 std
DEBUG:  TZ Etc/UCT scores 0: at 1074121200 2004-01-14 23:00:00 std 
versus 2004-01-15 00:00:00 std
DEBUG:  TZ Etc/UTC scores 0: at 1074121200 2004-01-14 23:00:00 std 
versus 2004-01-15 00:00:00 std
DEBUG:  TZ Etc/GMT-12 scores 0: at 1074121200 2004-01-15 11:00:00 std 
versus 2004-01-15 00:00:00 std
DEBUG:  TZ Etc/GMT-4 scores 0: at 1074121200 2004-01-15 03:00:00 std 
versus 2004-01-15 00:00:00 std

DEBUG:  invoking IpcMemoryCreate(size=11083776)
LOG:  database system was shut down at 2006-01-20 07:13:57 CET
LOG:  invalid primary checkpoint link in control file
PANIC:  invalid record offset at 0/0
child process was terminated by signal 6
initdb: removing contents of data directory /home/PGS/V811
[EMAIL PROTECTED] PGS]$


---(end of broadcast)---
TIP 6: explain analyze is your friend



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] create plperlu langage fails

2006-01-20 Thread FERREIRA, William (VALTECH)

Yes!
I found my problem:
I added libperl.so (from Perl 5.5.7) to LD_LIBRARY_PATH and now it works.
Thanks.

Will

-----Original Message-----
From: Michael Fuhr [mailto:[EMAIL PROTECTED]
Sent: Thursday, 19 January 2006 18:30
To: FERREIRA, William (VALTECH)
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] create plperlu langage fails



On Thu, Jan 19, 2006 at 05:22:23PM +0100, FERREIRA, William (VALTECH) wrote:
 ERROR:  could not load library 
 /home/adocv3/postgresql-8.1.2/src/pl/plperl/libplperl.so: ld.so.1: 
 /opt/pgsql/bin/postgres: fatal: relocation error: file 
 /home/adocv3/postgresql-8.1.2/src/pl/plperl/libplperl.so: symbol 
 perl_call_pv: referenced symbol not found

Was your libplperl.so built against an older version of Perl than
your libperl.so was?  In recent versions of Perl perl_call_pv is a
macro that ultimately expands to Perl_call_pv, but in older versions
perl_call_pv was a symbol itself.

How did you obtain libplperl.so -- did you build it yourself or
install it from a package?  Do you have multiple versions of Perl
installed?  What platform are you using?

--
Michael Fuhr



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] mount -o async - is it safe?

2006-01-20 Thread Shane Wright
Hi Tom,

  If we turn sync off, surely PostgreSQL keeps the data consistent, ext3
  journalling  keeps the filesystem clean [assuming other mount options
  left at defaults], and then everything should be ok with either a server
  crash, power failure, storage failure, whatever.  right?

 I checked around with some of Red Hat's kernel folk, and the bottom line
 seems to be that it's OK as long as you trust the hardware:

fabulous, thanks :)

 : Question is, can fsync(2) be trusted to behave properly, ie, not return
 : until all writes are down to disk, if the SAN is mounted -o async ?
 :
 : async is the default, which is the whole point of having things like
 : fsync, fdatasync, O_DIRECT, etc.  You can trust fsync as far as you can
 : trust the hardware.  The call will not return until the SAN says the
 : data has been written.
 :
 : In reality, the SAN is probably buffering these writes (possibly into
 : SRAM or battery-backed RAM), and the disks are probably buffering them
 : again, but you've got redundant power supplies and UPSs, right?

that sounds true (and we have) - but presumably this is the case whether we 
mount -o sync or not?  I.e. if it's going to buffer, then it's going to do so 
whether it's postgres or the kernel sync'ing the writes?

(specifically that the SAN likely buffers anyway - IMO having to trust the 
hardware to some degree is a given ;)

Cheers

Shane


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] SELECT Rules or stored procedure

2006-01-20 Thread Andrew - Supernews
On 2006-01-19, Jim C. Nasby [EMAIL PROTECTED] wrote:
 On Tue, Jan 17, 2006 at 09:55:42PM -0700, Assad Jarrahian wrote:
 q2) How does one get access to the rows just selected in the CREATE
 RULE computation?

 Via NEW and OLD. Read Chapter 34 of the documentation.

He's talking about a SELECT rule. A SELECT rule is no more and no less than
a view - you can do nothing with select rules that you can't do with a
normal view.

(You can only have one SELECT rule, it must be named _RETURN, and it must
be a DO INSTEAD SELECT rule. If you put such a rule on a plain table, you
will find that the table changes into a view.)
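That last point can be seen directly (a sketch with a throwaway table name):

```sql
CREATE TABLE t (x integer);

-- The single allowed SELECT rule: named _RETURN, DO INSTEAD SELECT.
CREATE RULE "_RETURN" AS ON SELECT TO t
    DO INSTEAD SELECT 1 AS x;

-- After this, t is reported as a view rather than a table.
```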

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] psql: FATAL: password authentication failed for user xxxx

2006-01-20 Thread Steve Taylor
I solved this, but it's still weird. I uninstalled again, this time deleting
the Windows user account that PostgreSQL created. So when I reinstalled,
everything was OK.

Now I've realised that PostgreSQL is being very stupid about not allowing
logins via psql from a Windows administrator account. So I'm just running it
as the guest user. But I still don't understand why psql works when run in a
Windows administrator account, but stops working as such when I add another
user. This has me really baffled. Can someone please enlighten me (and the
rest of us)?
- Steve

Quoting Steve Taylor [EMAIL PROTECTED]:

 Hi.
 
 I installed PostgreSQL 8.1.2 on Windows XP Pro. I started psql (via the
 start
 menu) and created a user. Then I exited psql using \q. Then I tried to start
 it
 again and got:
 psql: FATAL:  password authentication failed for user postgres
 and the command window closed.
 
 I don't understand what is going on. Is this some kind of bug? I have
 searched
 the mailing list archives and found dozens upon dozens of people reporting
 this
 error message, but it's either for a different reason or where it's the same
 situation as mine, the message goes unreplied.
 
 The really weird thing is that I did a full uninstall (removed it in
 Add/Remove
 programs AND deleted the directory \Program Files\PostgreSQL) and now psql
 won't work AT ALL. I'm just trying to get started. I just want to create ONE
 table so I can play around with jdbc.
 
 Why the hell doesn't psql just startup and ask for a username and password?
 That
 makes so much sense that it will never happen.
 
 - Steve
 






---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


R: [GENERAL] JDBC query creates a suspended Linux process

2006-01-20 Thread P . Rizzi Ag . Mobilità Ambiente
I discovered myself what was causing this, but still I don't know
exactly why it was happening.
The table I was talking about has some 30.000 records, not that much,
but each insert triggers a pl/pgSQL function that does a few things
before actually inserting and it contains a few raise info statements
for debugging purposes.
Running the query from pgAdmin the output of those raise info are
displayed in the pgAdmin message window and the query runs smoothly.
Running the query from JDBC the output of those raise infos is not
displayed but is stored somewhere inside the JDBC driver.
Removing the raise info, the query runs smoothly even from JDBC!!!
Problem solved and a lesson learned...

Bye
Paolo Rizzi


 -----Original Message-----
 From: P.Rizzi Ag.Mobilità Ambiente [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, 18 January 2006 18:01
 To: PostgreSQL pgsql-general (E-mail)
 Subject: [GENERAL] JDBC query creates a suspended Linux process
 
 
 Hi everybody, this is my first post on this list.
 I searched the archives but couldn't find a solution for this.
 
 I have a PostgreSQL+PostGIS server installed on Linux with this
 configuration:
PostgreSQL 8.0.1 on i686-pc-linux-gnu, compiled by GCC gcc 
 (GCC) 3.3.1
 (SuSE Linux)
 Linux is a SuSE 9.0 and the box is a two Xeon processor with 
 2GB RAM and two
 SATA HD
 configured in RAID 1 through an hardware adapter using ReiserFS.
 
 Now if I issue the query detailed below to this server using pgAdmin running
 on Windows, the query takes a reasonable time to complete (around 12
 seconds).
 If I issue the same query from a Java application using JDBC, the same query
 takes a very long time, and this happens both if I run the Java program from
 the same Windows machine or from the same Linux box where the PostgreSQL
 server is running.
 If I issue a ps command I can see that the Linux process running my query is
 suspended most of the time:
   postgres 26992  1.9  0.9 36784 20040 ?   S18:13   0:13
   postgres: postgres prova_sv3 192.1.1.36(1273) INSERT
 
 For example, in this case I ran the query more than 10 minutes ago, but it
 actually ran for only 13 seconds!!!
 
 The query is like this:
 
 begin;
 SELECT sv_r.sv_ver_remarks('commit');
 INSERT INTO sv_r.Corsie
   SELECT tipoCorsie_codice, posizione_codice, segm_progInizio,
          segm_progFine, senso_elementoStradale_id, senso_sensoMarcia,
          ord, corsieDisegnate, numeroCorsie, larghezzaMediaCorsie,
          geom, sv_ver
   FROM sv_d.Corsie;
 commit;
 
 SELECT sv_r.sv_ver_remarks('commit'); invokes a simple pl/pgSQL function.
 
 INSERT INTO sv_r.Corsie instead invokes a pl/pgSQL trigger function on each
 row that does a certain amount of work.
 
 SELECT ... FROM sv_d.Corsie; reads from a complex view.
 
 So the query is complex, but why does it take 12 seconds from pgAdmin and
 forever from JDBC???
 I'm pretty sure I'm correctly using a single transaction from JDBC.
 I'm not an expert with PostgreSQL, nor am I a Linux guru, so maybe it's
 something really stupid...
 Thanks a lot in advance to anybody who'll help me!!!
 
 Bye
 Paolo Rizzi
 
 
 
 
 NOTICE PURSUANT TO D. LGS. 196/2003
 The information contained in this e-mail message and/or in the attached
 file(s) is to be considered strictly confidential. Its use is permitted
 exclusively to the addressee of the message, for the purposes indicated in
 the message itself. Should you receive this message without being its
 addressee, we kindly ask you to notify us by e-mail and to destroy the
 message, deleting it from your system; retaining the message, disclosing it
 even in part, distributing it to other parties, copying it, or using it for
 other purposes constitutes behaviour contrary to the principles of
 D. Lgs. 196/2003.
 





---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] logging connections

2006-01-20 Thread surabhi.ahuja
I want to know how I can log connections to Postgres.

A sample log file is:
LOG: shutting down
LOG: database system is shut down
LOG: database system was shut down at 2006-01-17 18:18:24 CST
LOG: checkpoint record is at 0/B035D0
LOG: redo record is at 0/B035D0; undo record is at 0/0; shutdown TRUE
LOG: next transaction ID: 1267; next OID: 17728
LOG: database system is ready
LOG: unexpected EOF on client connection
LOG: unexpected EOF on client connection
LOG: unexpected EOF on client connection
LOG: received fast shutdown request
LOG: aborting any active transactions
FATAL: terminating connection due to administrator command
LOG: shutting down
LOG: database system is shut down
LOG: database system was shut down at 2006-01-20 11:00:00 CST
LOG: checkpoint record is at 0/3C339CB4
LOG: redo record is at 0/3C339CB4; undo record is at 0/0; shutdown TRUE
LOG: next transaction ID: 2283381; next OID: 1159413
LOG: database system is ready
LOG: unexpected EOF on client connection
LOG: unexpected EOF on client connection
LOG: unexpected EOF on client connection
LOG: unexpected EOF on client connection
LOG: unexpected EOF on client connection



However, I am still not able to tell from the above file who has connected,
etc.

I would like to log something like:  LOG: connection received: host=client1
 port=3775

Also, if I shut down the postmaster using pg_ctl stop, no logging takes place
to the above log file.

How do I enable the above?

thanks,
regards
Surabhi
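For reference, connection logging in the 8.x series is controlled from postgresql.conf; a sketch of the relevant settings (names as of 8.0/8.1 -- check the documentation for your version):

```
# Log a line for each received and authorized connection:
log_connections = on
# Log session end as well:
log_disconnections = on

# To keep server output in a log file when starting via pg_ctl:
redirect_stderr = on                 # 8.0/8.1 setting name
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
```

Alternatively, pg_ctl can be started with -l logfile to capture stderr output.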



-Original Message-
From: [EMAIL PROTECTED] on behalf of Tom Lane
Sent: Fri 1/20/2006 3:13 AM
To: Nik
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Connections not closing


Nik [EMAIL PROTECTED] writes:
 Ok, I simplified the problem. I tried just running psql from the
 command line, and I noticed that it opens two connection on two
 different ports, and it closes only one.
 For example I do the following in the command prompt:

 C:\ psql -h host_name -p 5432 -d db_name -U user_name
 Password:

 2006-01-19 09:50:29 [unknown] LOG: connection received: host=client1
 port=3775

 2006-01-19 09:50:31 [unknown] LOG: connection received: host=client1
 port=3778

 2006-01-19 09:50:31 test LOG: connection authorized: user=user_name
 database=db_name

It tries to connect, gets told it needs a password (the log verbosity
level is not high enough to record the rejection), asks you for the
password, and connects again. I don't see anything funny here.

   regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly









Re: [GENERAL] Change owner of all database objects

2006-01-20 Thread Andrus
 You can find some plpgsql functions here:

 http://pgedit.com/tip/postgresql/access_control_functions


 John, thank you.

I found that acl_admin.grant_on_all() does not grant privileges on schema:
it does not execute

GRANT ALL ON SCHEMA myschema TO myrole;

statements.

So I must execute GRANT ALL ON SCHEMA  commands for each schema manually,
right ?

Andrus.




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] autovacuum and temporary tables

2006-01-20 Thread Csaba Nagy
Hi all,

I have a postgres 8.0 installation, and I'm running autovacuum against
it. I have noticed that it processes temporary tables too, which is in
itself a bit curious, but the problem is that autovacuum might even
crash if a temporary table is suddenly gone while it tries to vacuum
it... that's what happened once here. I'm not sure what the
necessary conditions to trigger this are; it only happened once.

I searched the release notes and googled a bit, but could not
(quickly) find the status of autovacuum vs. temporary tables in
8.1. I have seen that this was one of Tom's outstanding issues with
including autovacuum in the core, but was this resolved?

Thanks,
Csaba.





---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] out parameters and SETOF

2006-01-20 Thread Jaime Casanova
On 1/19/06, Michael Fuhr [EMAIL PROTECTED] wrote:
 On Thu, Jan 19, 2006 at 04:03:41PM -0500, Jaime Casanova wrote:
  there is a way to use OUT parameters in conjunction with SETOF?

 Do you want to return a set of the OUT parameters or a set of
 something else?  I don't think you can do the latter; for the former
 use SETOF record:

 CREATE FUNCTION foo(OUT x integer, OUT y integer)
 RETURNS SETOF record AS $$
 BEGIN
x := 1; y := 2; RETURN NEXT;
x := 3; y := 4; RETURN NEXT;
x := 4; y := 5; RETURN NEXT;
 END;
 $$ LANGUAGE plpgsql;

 SELECT * FROM foo();
  x | y
 ---+---
  1 | 2
  3 | 4
  4 | 5
 (3 rows)

 --
 Michael Fuhr


ok, thanks

--
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] How to fetch rows with multiple values

2006-01-20 Thread Sebastjan Trepca
Hi,

I have a table like this:

CREATE TABLE customer_mapping
(
  Name varchar(128) NOT NULL,
  ID int8 NOT NULL
)

Data looks something like this:

john   1
peter  1
test   2
george 3

What I would like is to write a query where I can specify multiple names and
get the IDs which have them. For now it seems the most efficient way is to
use an INTERSECT statement:

SELECT ID from customer_mapping WHERE Name='john'
INTERSECT
SELECT ID from customer_mapping WHERE Name='peter'

Although, I don't know how exactly to use ORDER, OFFSET and LIMIT in this
case...

Anyway, is there any better way of doing this? (I can't change the table
structure.)

Thanks, Sebastjan


Re: [GENERAL] How to fetch rows with multiple values

2006-01-20 Thread Michael Glaesemann


On Jan 20, 2006, at 22:19 , Sebastjan Trepca wrote:

What I would like is to write a query where I can specify multiple  
names and get the IDs which have them.


For now it seems the most efficient way is to use INTERSECT statement:

SELECT ID from customer_mapping WHERE Name='john'
INTERSECT
SELECT ID from customer_mapping WHERE Name='peter'


My first thought is to use a join. Does this do what you want?

select id
from customer_mapping cm1
join customer_mapping cm2 using (ID)
where cm1.Name = 'john'
and cm2.Name = 'peter';

Although, I don't know how exactly to use ORDER, OFFSET and LIMIT  
in this case...


ORDER, OFFSET and LIMIT should work just fine with the JOIN query.  
You could also use your intersect in a subquery and then use ORDER,  
OFFSET and LIMIT on the outer query, e.g.,


select *
from (
select ID...
intersect
select ID ...
) as common_names
...

Michael Glaesemann
grzm myrealbox com




---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] How to fetch rows with multiple values

2006-01-20 Thread Sebastjan Trepca
Wow, this joined query is much faster than the intersect (10x), thanks a lot!!

Since I have to make a join for every term, I would have thought it would be
more expensive. Is there any limit on the number of joins, or anything
similar, that I should be aware of?

Sebastjan

On 1/20/06, Michael Glaesemann [EMAIL PROTECTED] wrote:
 My first thought is to use a join. Does this do what you want?

 select id
 from customer_mapping cm1
 join customer_mapping cm2 using (ID)
 where cm1.Name = 'john'
 and cm2.Name = 'peter';
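For what it's worth, a common alternative (not suggested in the thread) avoids one self-join per name by aggregating instead; it generalizes to any number of names without extra joins:

```sql
-- Keep only IDs that are matched by every listed name.
SELECT ID
FROM customer_mapping
WHERE Name IN ('john', 'peter')
GROUP BY ID
HAVING count(DISTINCT Name) = 2;  -- 2 = number of names in the list
```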



Re: [GENERAL] How to convert Big5 to UTF8

2006-01-20 Thread Martijn van Oosterhout
On Fri, Jan 20, 2006 at 09:56:40AM +0800, Alex CS Huang wrote:
 Hi, All,
   I created a database whose encoding is UTF8, and created a table test:
   create table test (name varchar);
   I insert data '\244\350' whose encoding is Big5, and I wish to translate
 it to UTF8, so I use:
   insert into test values (convert('\244\350' using big5_to_utf_8));
   I got an error:
   ERROR:  invalid byte sequence for encoding UNICODE: 0xe8;
   
   What can I do about this problem? Any ideas?

How about something like:

\encoding big5
insert into test values ('\244\350');

or maybe: set client_encoding=big5;

within an app.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




[GENERAL] Creation of tsearch2 index is very slow

2006-01-20 Thread Stephan Vollmer
Hello!

I noticed that the creation of a GIST index for tsearch2 takes very
long - about 20 minutes. CPU utilization is 100 %, the resulting
index file size is ~25 MB. Is this behaviour normal?

Full text columns:   title  author_list
tsearch2 word lists: fti_title  fti_author_list
tsearch2 indexes:idx_fti_title  idx_fti_author_list

The table has 700,000 records. When I create a normal B-Tree index
on the same column for testing purposes, it works quite fast
(approx. 30 seconds).

The columns that should be indexed are small, only about 10 words on
average.

System specs:
Athlon64 X2 3800+, 2 GB RAM
PostgreSQL 8.1.2, Windows XP SP2

I've never noticed this problem before, so could it probably be
related to v8.1.2? Or is something in my configuration or table
definition that causes this sluggishness?

Thanks very much in advance for your help!

- Stephan



This is the table definition:
-
CREATE TABLE publications
(
  id int4 NOT NULL DEFAULT nextval('publications_id_seq'::regclass),
  publication_type_id int4 NOT NULL DEFAULT 0,
  keyword text NOT NULL,
  mdate date,
  year date,
  title text,
  fti_title tsvector,
  author_list text,
  fti_author_list tsvector,
  overview_timestamp timestamp,
  overview_xml text,
  CONSTRAINT publications_pkey PRIMARY KEY (keyword) USING INDEX
  TABLESPACE dblp_index,
  CONSTRAINT publications_publication_type_id_fkey FOREIGN KEY
 (publication_type_id)
  REFERENCES publication_types (id) MATCH SIMPLE
  ON UPDATE RESTRICT ON DELETE RESTRICT,
  CONSTRAINT publications_year_check CHECK (date_part('month'::text,
year) = 1::double precision AND date_part('day'::text, year) =
1::double precision)
)
WITHOUT OIDS;

CREATE INDEX fki_publications_publication_type_id
  ON publications
  USING btree
  (publication_type_id)
  TABLESPACE dblp_index;

CREATE INDEX idx_fti_author_list
  ON publications
  USING gist
  (fti_author_list)
  TABLESPACE dblp_index;

CREATE INDEX idx_fti_title
  ON publications
  USING gist
  (fti_title)
  TABLESPACE dblp_index;

CREATE INDEX idx_publications_year
  ON publications
  USING btree
  (year)
  TABLESPACE dblp_index;

CREATE INDEX idx_publications_year_part
  ON publications
  USING btree
  (date_part('year'::text, year))
  TABLESPACE dblp_index;


CREATE TRIGGER tsvectorupdate_all
  BEFORE INSERT OR UPDATE
  ON publications
  FOR EACH ROW
  EXECUTE PROCEDURE multi_tsearch2();




signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] Creation of tsearch2 index is very slow

2006-01-20 Thread Stephan Vollmer
PS:

What I forgot to mention was that inserting records into the table
is also about 2-3 times slower than before (most likely due to the
slow index update operations).

I dropped the whole database and restored the dumpfile, but the
result is the same. When the index is recreated after COPYing the
data, it takes more than 20 minutes for _each_ of both tsearch2
indexes. So the total time to restore this table is more than 45
minutes!

- Stephan



signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] Create User

2006-01-20 Thread DB Subscriptions

Thanks Pandurangan.

The function could not even be saved or created. The error occurs not at the
insert level but at the creation of the trigger function.


Regards.


Pandurangan R S wrote:


I assume this error arises when you do an insert.
Can you post the insert statement that caused this error?

On 1/20/06, DB Subscriptions [EMAIL PROTECTED] wrote:
 


Hi,

I have this table:

CREATE TABLE users
(
 userid varchar(100) NOT NULL,
 nama varchar(50) NOT NULL,
 pword varchar(255) NOT NULL,
 groupe varchar(7) NOT NULL,
 rolle int2 NOT NULL DEFAULT 2,
 statux varchar(9) NOT NULL DEFAULT 'Active'::character varying,
 CONSTRAINT users_pkey PRIMARY KEY (userid)
)
WITHOUT OIDS;

I created a trigger to create a user based on the new insert into the
table as follows:

CREATE OR REPLACE FUNCTION users_insert()
 RETURNS trigger AS
$BODY$
BEGIN
   CREATE USER NEW.userid WITH PASSWORD NEW.pword IN GROUP NEW.groupe;

   RETURN new;
END;

$BODY$
 LANGUAGE 'plpgsql' VOLATILE;

Surprisingly, I get this error message:

ERROR:  syntax error at or near $1 at character 14
QUERY:  CREATE USER  $1  WITH PASSWORD  $2  IN GROUP  $3
CONTEXT:  SQL statement in PL/PgSQL function users_insert near line 10

I would appreciate your guidance.

Cheers.

Chris.




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org

   







---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] mac os x compile failure

2006-01-20 Thread Neil Brandt
Thanks for the help, Tom, and others who made suggestions.

I compiled and installed 8.0.6 with no problems on OS X 10.2.8

My little old imac's a happy postgres host now.

Neil

--- Tom Lane [EMAIL PROTECTED] wrote:

 Neil Brandt [EMAIL PROTECTED] writes:
  fd.c: In function `pg_fsync_writethrough':
  fd.c:271: `F_FULLFSYNC' undeclared (first use in this function)
  fd.c:271: (Each undeclared identifier is reported only once
  fd.c:271: for each function it appears in.)
 
 Hmm.  This is our bug: the code mistakenly supposes that every version
 of OS X has that symbol, whereas evidently it was introduced in 10.3.
 
 I'll try to see that this gets fixed for PG 8.1.3, but in the short run
 you might be best off to update your OS X installation, or revert to
 PG 8.0.* which doesn't try to use FULLFSYNC at all.
 
 If you'd really like to stay on OS X 10.2, please consider joining the
 buildfarm
 http://www.pgbuildfarm.org/index.html
 so that any other such problems will be caught promptly.  We have
 buildfarm members running 10.3 and 10.4, but nobody covering 10.2.
 
   regards, tom lane
 
 ---(end of broadcast)---
 TIP 3: Have you checked our extensive FAQ?
 
http://www.postgresql.org/docs/faq
 








---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Initdb panic: invalid record offset at 0/0 creating template1]

2006-01-20 Thread Tom Lane
DANTE ALEXANDRA [EMAIL PROTECTED] writes:
 we recompiled and built an RPM on IA64, release of postgresql: 8.1.1,
 on RHEL4 update 2,
 the installation of the rpm seems to be good,
 we install with --nodeps, and we indicate the path for the libraries
 /opt/intel_cc_80/lib
 but when trying to init with the user account pg_811,
 it falls into a panic,

Whose RPM did you use, and did you use any special options?  Why did
you feel it necessary to use --nodeps?

 DEBUG:  invoking IpcMemoryCreate(size=11083776)
 LOG:  database system was shut down at 2006-01-20 07:13:57 CET
 LOG:  invalid primary checkpoint link in control file
 PANIC:  invalid record offset at 0/0
 child process was terminated by signal 6

Hm, I wonder what's getting written into the files ... would you run
initdb with --noclean and then post the results of
* pg_controldata $PGDATA
* od -x $PGDATA/pg_xlog/0001
(I'm assuming that the file is there and od won't produce much output
... if it comes to megabytes don't post it ...)

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] autovacuum and temporary tables

2006-01-20 Thread Matthew T. O'Connor
Exactly which version of 8.0.x?  There was a bug fixed around 8.0.5 or
so: "Prevent core dump in contrib version of autovacuum when a table has
been dropped.  Per report from daveg (not his patch, though)."


The version of autovacuum in 8.1 is a fairly different beast than the 
contrib version, and since it was integrated, I think all of Tom's 
concerns were dealt with.  I highly recommend the 8.1 version over the 
contrib version for many reasons.


Matt


Csaba Nagy wrote:

Hi all,

I have a postgres 8.0 installation, and I'm running autovacuum against
it. I have noticed that it processes temporary tables too, which is in
itself a bit curious, but the problem is that autovacuum might even
crash if a temporary table is suddenly gone while it tries to vacuum
it... that's what happened once here. I'm not sure what are the
necessary conditions to trigger this, it only happened once.

I searched in the release notes and googled a bit, but could not
(quickly) find what's the status of autovacuum vs. temporary tables in
8.1. I have seen that that was one of Tom's outstanding issues with
autovacuum to be included in the core, but was this resolved ?

Thanks,
Csaba.





---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Create User

2006-01-20 Thread Neil Conway
On Fri, 2006-01-20 at 09:16 +0100, DB Subscriptions wrote:
 BEGIN
 CREATE USER NEW.userid WITH PASSWORD NEW.pword IN GROUP NEW.groupe;
   
 RETURN new;
 END;

You can't use PL/PgSQL variables in DDL commands. Try using EXECUTE:

EXECUTE 'CREATE USER ' || NEW.userid || '...';

-Neil
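A fuller sketch of this approach, assuming the `users` table from the original post (hedged: `quote_ident` and `quote_literal` are added here to guard against odd characters in the inserted values):

```sql
CREATE OR REPLACE FUNCTION users_insert() RETURNS trigger AS $BODY$
BEGIN
    -- Utility commands cannot take PL/pgSQL variables as parameters,
    -- so build the statement as a string and run it with EXECUTE.
    EXECUTE 'CREATE USER ' || quote_ident(NEW.userid)
         || ' WITH PASSWORD ' || quote_literal(NEW.pword)
         || ' IN GROUP ' || quote_ident(NEW.groupe);
    RETURN NEW;
END;
$BODY$ LANGUAGE plpgsql VOLATILE;
```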



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] autovacuum and temporary tables

2006-01-20 Thread Csaba Nagy
It's version 8.0.almost3, meaning that I used the 8.0 stable CVS branch
just before 8.0.3 was released. I will upgrade this database to 8.1.x
(the latest released version at the time of upgrade) soon, so if the 8.1
version has the temporary table thing fixed, that would be very nice :-)

I also have an instance running 8.0.almost5, so if that has a fix for
the core dump, that would also be nice, because that instance will not be
migrated to 8.1 very soon.

BTW, did you actually mean to use the 8.1 autovacuum with an 8.0 database?
I can't imagine how that would work :-)

Thanks,
Csaba.

On Fri, 2006-01-20 at 16:05, Matthew T. O'Connor wrote:
 Exactly which version of 8.0.x?  There was a bug fixed around 8.0.5 or 
 so Prevent core dump in contrib version of autovacuum when a table has 
 been dropped.  Per report from daveg (not his patch, though).
 
 The version of autovacuum in 8.1 is a fairly different beast than the 
 contrib version, and since it was integrated, I think all of Tom's 
 concerns were dealt with.  I highly recommend the 8.1 version over the 
 contrib version for many reason.
 
 Matt
 
 
 Csaba Nagy wrote:
  Hi all,
  
  I have a postgres 8.0 installation, and I'm running autovacuum against
  it. I have noticed that it processes temporary tables too, which is in
  itself a bit curious, but the problem is that autovacuum might even
  crash if a temporary table is suddenly gone while it tries to vacuum
  it... that's what happened once here. I'm not sure what are the
  necessary conditions to trigger this, it only happened once.
  
  I searched in the release notes and googled a bit, but could not
  (quickly) find what's the status of autovacuum vs. temporary tables in
  8.1. I have seen that that was one of Tom's outstanding issues with
  autovacuum to be included in the core, but was this resolved ?
  
  Thanks,
  Csaba.
  
  
  
  
  
  ---(end of broadcast)---
  TIP 2: Don't 'kill -9' the postmaster
  


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] autovacuum and temporary tables

2006-01-20 Thread Tom Lane
Csaba Nagy [EMAIL PROTECTED] writes:
 I have a postgres 8.0 installation, and I'm running autovacuum against
 it. I have noticed that it processes temporary tables too, which is in
 itself a bit curious, but the problem is that autovacuum might even
 crash if a temporary table is suddenly gone while it tries to vacuum
 it... that's what happened once here.

8.0.what?  I'm wondering if you are missing this 8.0.5 fix:

2005-10-20 12:14  tgl

* contrib/pg_autovacuum/pg_autovacuum.c (REL8_0_STABLE): Prevent
core dump in contrib version of autovacuum when a table has been
dropped.  Per report from daveg (not his patch, though).

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] How to fetch rows with multiple values

2006-01-20 Thread Keary Suska
on 1/20/06 6:19 AM, [EMAIL PROTECTED] purportedly said:

 I have a table like this:
 
 CREATE TABLE customer_mapping
 (
 Name varchar(128) NOT NULL,
 ID int8 NOT NULL
 ) 
 
 Data looks something like this:
 
 john     1
 peter    1
 test     2
 george   3
 
 What I would like is to write a query where I can specify multiple names and
 get the IDs which have them.
 
 For now it seems the most efficient way is to use INTERSECT statement:
 
 SELECT ID from customer_mapping WHERE Name='john'
 INTERSECT 
 SELECT ID from customer_mapping WHERE Name='peter'
 
 Although, I don't know how exactly to use ORDER, OFFSET and LIMIT in this
 case... 
 
 Anyway, is there any better way of doing this? (I can't change the table
 structure.)

Maybe I'm a little thick this morning but can't you just do:

SELECT ID from customer_mapping WHERE Name='john' OR Name='peter' OR
Name='george' ORDER BY ID DESC

Result:
3
2
1

?

Keary Suska
Esoteritech, Inc.
Demystifying technology for your home or business


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Creation of tsearch2 index is very slow

2006-01-20 Thread Tom Lane
Stephan Vollmer [EMAIL PROTECTED] writes:
 I noticed that the creation of a GIST index for tsearch2 takes very
 long - about 20 minutes. CPU utilization is 100 %, the resulting
 index file size is ~25 MB. Is this behaviour normal?

This has been complained of before.  GIST is always going to be slower
at index-building than btree; in the btree case there's a simple optimal
strategy for making an index (ie, sort the keys) but for GIST we don't
know anything better to do than insert the keys one at a time.

However, I'm not sure that anyone's tried to do any performance
optimization on the GIST insert code ... there might be some low-hanging
fruit there.  It'd be interesting to look at a gprof profile of what the
backend is doing during the index build.  Do you have the ability to do
that, or would you be willing to give your data to someone else to
investigate with?  (The behavior is very possibly data-dependent, which
is why I want to see a profile with your specific data and not just some
random dataset or other.)

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Create Produre for DROP row

2006-01-20 Thread Marcos

 BEGIN
 
  DELETE FROM .
 
 EXCEPTION
WHEN others THEN
   ...
 END;
 
 documentation:
 http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
 
 a list of errors:
 http://www.postgresql.org/docs/current/static/errcodes-appendix.html
 
 --
 regards,

Thanks.

When I use the EXCEPTION block and return a numeric or text value,
PostgreSQL shows an encoding error :(... Invalid Encoding UTF-8 at...

This error only occurs when an exception that I handle is raised.

My database is in UTF8 encoding.

What is happening?

E.g:


EXCEPTION
  WHEN others THEN RETURN 'Error';

Marcos.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Initdb panic: invalid record offset at 0/0 creating

2006-01-20 Thread DANTE ALEXANDRA

Hello Tom,

We have generated a new rpm as we haven't found an 8.1.0 rpm for IA-64 on 
Red Hat Enterprise Linux 4 on the PostGreSQL web site. We have compiled 
PostGreSQL v8.1.0 and generated the rpm with the intel compiler icc.

In the spec file, we have used these options for ./configure :
./configure CC=/opt/intel_cc_80/bin/icc CFLAGS="-no-gcc -O2 -w
-ansi_alias -D__ICC".


When we tried to install the generated rpm, we got an error on 
the shared library libimf.so.6 of the intel compiler. Consequently, we 
launched the command: rpm -ivh --nodeps file.rpm

The error was

error: Failed dependencies:

   libimf.so.6()(64bit) is needed by postgresql-8.1.1-1.ia64

and has occurred because the intel compiler wasn't installed from an rpm 
but from a tar.gz file.



Once PostGreSQL was installed, we tried to launch initdb and got 
this error:

DEBUG:  invoking IpcMemoryCreate(size=11083776)
LOG:  database system was shut down at 2006-01-20 07:13:57 CET
LOG:  invalid primary checkpoint link in control file
PANIC:  invalid record offset at 0/0
child process was terminated by signal 6
initdb: removing contents of data directory /home/PGS/V811


In the .bash_profile file of the user used to launch initdb, we 
set the following variables:

PGDIR=/opt/pg_811/PGHOME
PGDATA=/home/PGS/V811
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/X11R6/bin:.
PATH=$PGDIR/bin:$PATH
LD_LIBRARY_PATH=$PGDIR/lib:/opt/intel_cc_80/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
export  PGDIR PGDATA PATH


I have run the commands you asked for, and you will find the 
results below:


[EMAIL PROTECTED] ~]$ initdb --noclean
Running in noclean mode.  Mistakes will not be cleaned up.
The files belonging to this database system will be owned by user pg_811.
This user must also own the server process.

The database cluster will be initialized with locale en_US.UTF-8.
The default database encoding has accordingly been set to UTF8.

fixing permissions on existing directory /home/PGS/V811 ... ok
creating directory /home/PGS/V811/global ... ok
creating directory /home/PGS/V811/pg_xlog ... ok
creating directory /home/PGS/V811/pg_xlog/archive_status ... ok
creating directory /home/PGS/V811/pg_clog ... ok
creating directory /home/PGS/V811/pg_subtrans ... ok
creating directory /home/PGS/V811/pg_twophase ... ok
creating directory /home/PGS/V811/pg_multixact/members ... ok
creating directory /home/PGS/V811/pg_multixact/offsets ... ok
creating directory /home/PGS/V811/base ... ok
creating directory /home/PGS/V811/base/1 ... ok
creating directory /home/PGS/V811/pg_tblspc ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 1000
creating configuration files ... ok
creating template1 database in /home/PGS/V811/base/1 ... PANIC:  invalid 
record offset at 0/0

child process was terminated by signal 6
initdb: data directory /home/PGS/V811 not removed at user's request
[EMAIL PROTECTED] ~]$

---

[EMAIL PROTECTED] V811]$ pg_controldata $PGDATA
pg_control version number:812
Catalog version number:   200510211
Database system identifier:   4886687050337353727
Database cluster state:   shut down
pg_control last modified: Fri 20 Jan 2006 04:21:31 PM CET
Current log file ID:  0
Next log file segment:1
Latest checkpoint location:   0/20
Prior checkpoint location:0/0
Latest checkpoint's REDO location:0/20
Latest checkpoint's UNDO location:0/20
Latest checkpoint's TimeLineID:   1
Latest checkpoint's NextXID:  3
Latest checkpoint's NextOID:  1
Latest checkpoint's NextMultiXactId:  1
Latest checkpoint's NextMultiOffset:  0
Time of latest checkpoint:Fri 20 Jan 2006 04:21:31 PM CET
Maximum data alignment:   8
Database block size:  8192
Blocks per segment of large relation: 131072
Bytes per WAL segment:16777216
Maximum length of identifiers:64
Maximum columns in an index:  32
Date/time type storage:   floating-point numbers
Maximum length of locale name:128
LC_COLLATE:   en_US.UTF-8
LC_CTYPE: en_US.UTF-8
[EMAIL PROTECTED] V811]$

---

[EMAIL PROTECTED] V811]$ od -x $PGDATA/pg_xlog/0001*
000 d05d 0002 0001     
020  43db fffb 43d0  0100  
040 68f5 5b77      
060 0050  0030     
100   0020    0020 
120 0001  0003  2710  0001 
140     fffb 43d0  
160        
*
1

The resulting file is about 16 MB, so I won't post it.

Re: [GENERAL] How to fetch rows with multiple values

2006-01-20 Thread Sebastjan Trepca
No, because I need AND operator between the terms. Thanks anyway :)

Sebastjan

On 1/20/06, Keary Suska [EMAIL PROTECTED] wrote:
 on 1/20/06 6:19 AM, [EMAIL PROTECTED] purportedly said:

  I have a table like this:

  CREATE TABLE customer_mapping
  (
  Name varchar(128) NOT NULL,
  ID int8 NOT NULL
  )

  Data looks something like this:

  john     1
  peter    1
  test     2
  george   3

  What I would like is to write a query where I can specify multiple names and
  get the IDs which have them.

  For now it seems the most efficient way is to use INTERSECT statement:

  SELECT ID from customer_mapping WHERE Name='john'
  INTERSECT
  SELECT ID from customer_mapping WHERE Name='peter'

  Although, I don't know how exactly to use ORDER, OFFSET and LIMIT in this
  case...

  Anyway, is there any better way of doing this? (I can't change the table
  structure.)

 Maybe I'm a little thick this morning but can't you just do:

 SELECT ID from customer_mapping WHERE Name='john' OR Name='peter' OR
 Name='george' ORDER BY ID DESC

 Result:
 3
 2
 1

 ?

 Keary Suska
 Esoteritech, Inc.
 Demystifying technology for your home or business

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] autovacuum and temporary tables

2006-01-20 Thread Matthew T. O'Connor

Csaba Nagy wrote:

It's version 8.0.almost3, meaning that I used the 8.0 stable CVS branch
just before 8.0.3 was released. I will upgrade this data base to 8.1.x
(the latest released version at the time of upgrade) soon, so if the 8.1
version has the temporary table thing fixed that would be very nice :-)

I also have an instance running 8.0.almost5, so if that has a fix for
the core dump, that would be also nice, cause that instance will not be
migrated to 8.1 very soon.

BTW, did you actually mean to use the 8.1 autovacuum with 8.0 data base
? I can't imagine how that would work :-)


No I didn't mean to use the 8.1 autovac with 8.0, that can't work.  
Please upgrade your 8.0.x to at least 8.0.5 and see if that fixes the 
problem. 


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] How to fetch rows with multiple values

2006-01-20 Thread Richard Huxton

Keary Suska wrote:

Data looks something like this:

john     1
peter    1




Maybe I'm a little thick this morning but can't you just do:

SELECT ID from customer_mapping WHERE Name='john' OR Name='peter' OR
Name='george' ORDER BY ID DESC


Not quite. He's after ID that have *both* names, so ID=1 above because 
it has john AND peter.
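One way to express the "all of these names" requirement in a single statement (a sketch against the `customer_mapping` table from the original post; ORDER BY, LIMIT and OFFSET then apply in the usual way):

```sql
-- IDs that have BOTH names: count the distinct matching names per ID
-- and keep only IDs that matched every name in the list.
SELECT ID
FROM customer_mapping
WHERE Name IN ('john', 'peter')
GROUP BY ID
HAVING count(DISTINCT Name) = 2
ORDER BY ID
LIMIT 10 OFFSET 0;
```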


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] autovacuum and temporary tables

2006-01-20 Thread Csaba Nagy
OK, I have an 8.0.almost5 installation which has not had any such
problems yet. The 8.0.~3 instance will soon be migrated to 8.1.latest,
so I will skip the 8.0.5 step, even if it only means install/restart/no
dump - after all I had a single crash in a few months of operation.

I take it for granted that the crash situation on table drop is fixed. I
wonder whether temporary tables are still added to the autovacuum list
in 8.1 or not?

Thanks,
Csaba.

On Fri, 2006-01-20 at 17:07, Matthew T. O'Connor wrote:
 No I didn't mean to use the 8.1 autovac with 8.0, that can't work.  
 Please upgrade your 8.0.x to at least 8.0.5 and see if that fixes the 
 problem. 


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Initdb panic: invalid record offset at 0/0 creating template1]

2006-01-20 Thread Tom Lane
DANTE ALEXANDRA [EMAIL PROTECTED] writes:
 We have generated a new rpm as we haven't found a 8.1.0 rpm for IA-64 on 
 Red Hat Enterprise Linux 4 on the PostGreSQL web site. We have compiled 
 PostGreSQL v8.1.0 and generated the rpm with the intel compiler icc.
 In the spec file, we have used these options for ./configure :
 ./configure CC=/opt/intel_cc_80/bin/icc CFLAGS=-no-gcc -O2 -w 
 -ansi_alias -D__ICC.

Do you know that this compiler generates trustworthy code with those
options?  The contents of the pg_control file are clearly good according
to the dump from pg_controldata, and yet we have

 LOG:  invalid primary checkpoint link in control file
 PANIC:  invalid record offset at 0/0

The easiest explanation I can see for this is that the compiler has
gotten the XRecOffIsValid test at the top of ReadCheckpointRecord
(in src/backend/access/transam/xlog.c, line 4854 as of 8.1.2) backwards.
The first time through, with the perfectly valid primary checkpoint
location (0/20) it mistakenly decides the value is not valid and prints
the LOG message.  This leads to a second call with the invalid prior
checkpoint location (0/0), when it mistakenly falls through and calls
ReadRecord, which properly PANICs.  Given that ReadRecord is using the
exact same macro to decide the offset is invalid (line 2668), it's hard
to conclude anything except a compiler bug.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Creation of tsearch2 index is very slow

2006-01-20 Thread Stephan Vollmer
Tom Lane wrote:
 Stephan Vollmer [EMAIL PROTECTED] writes:
 I noticed that the creation of a GIST index for tsearch2 takes very
 long - about 20 minutes. CPU utilization is 100 %, the resulting
 index file size is ~25 MB. Is this behaviour normal?
 
 This has been complained of before.  GIST is always going to be slower
 at index-building than btree; in the btree case there's a simple optimal
 strategy for making an index (ie, sort the keys) but for GIST we don't
 know anything better to do than insert the keys one at a time.

Ah, ok. That explains a lot, although I wonder why it is so much slower.


 However, I'm not sure that anyone's tried to do any performance
 optimization on the GIST insert code ... there might be some low-hanging
 fruit there.  It'd be interesting to look at a gprof profile of what the
 backend is doing during the index build.  Do you have the ability to do
 that, or would you be willing to give your data to someone else to
 investigate with?

Unfortunately, I'm not able to investigate it further myself as I'm
quite a Postgres newbie. But I could provide someone else with the
example table. Maybe someone else could find out why it is so slow.

I dropped all unnecessary columns and trimmed the table down to
235,000 rows. The dumped table (compressed with RAR) is 7.1 MB. I
don't have a website to upload it to, but I could send it to someone via
e-mail.

With this 235,000 row table, index creation times are:
- GIST:   347063 ms
- B-Tree:   2515 ms


Thanks for your help!

- Stephan




signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] Creation of tsearch2 index is very slow

2006-01-20 Thread Martijn van Oosterhout
On Fri, Jan 20, 2006 at 10:35:21AM -0500, Tom Lane wrote:
 However, I'm not sure that anyone's tried to do any performance
 optimization on the GIST insert code ... there might be some low-hanging
 fruit there.  It'd be interesting to look at a gprof profile of what the
 backend is doing during the index build.  Do you have the ability to do
 that, or would you be willing to give your data to someone else to
 investigate with?  (The behavior is very possibly data-dependent, which
 is why I want to see a profile with your specific data and not just some
 random dataset or other.)

The cost of inserting would generally go to either penalty or
picksplit. Certainly if you're inserting lots of values in a short
interval, I can imagine picksplit being nasty, since the algorithms for
a lot of datatypes are not really renowned for their speed.

I'm wondering if you could possibly improve the process by grouping
into larger blocks. For example, pull out enough tuples to cover 4
pages and then call picksplit three times to split it into the four
pages. This gives you 4 entries for the level above the leaves. Keep
reading tuples and splitting until you get enough for the next level
and call picksplit on those. etc etc.

The thing is, you never call penalty here so it's questionable whether
the tree will be as efficient as just inserting. For example, if you have a
data type representing ranges (a,b), straight inserting can produce the
perfect tree order like a b-tree (assuming non-overlapping entries).
The above process will produce something close, but not quite...

Should probably get out a pen and paper to model this. After all, if
the speed of picksplit increases superlinearly with the number of
elements, calling it with larger sets may prove to be a loss overall...

Perhaps the easiest would be to allow datatypes to provide a bulkinsert
function, like b-tree does? The question is, what should be its input
and output?

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.


signature.asc
Description: Digital signature


Re: [GENERAL] Creation of tsearch2 index is very slow

2006-01-20 Thread Tom Lane
Stephan Vollmer [EMAIL PROTECTED] writes:
 Unfortunately, I'm not able to investigate it further myself as I'm
 quite a Postgres newbie. But I could provide someone else with the
 example table. Maybe someone else could find out why it is so slow.

I'd be willing to take a look, if you'll send me the dump file off-list.

 I dropped all unnecessary columns and trimmed the table down to
 235,000 rows. The dumped table (compressed with RAR) is 7,1 MB. I
 don't have a website to upload it but I could send it to someone via
 e-mail.

Don't have RAR --- gzip or bzip2 is fine ...

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] what am I doing wrong with this query?

2006-01-20 Thread Tony Caduto
select  array_to_string(conkey,',') from pg_constraint where contype = 
'p' and conrelid = 17059


returns a value of 1,2,18 for the array to string function,

when I do this it does not return true:

select case when 18 in (array_to_string(conkey,',')) then true else 
false end from pg_constraint where contype = 'p' and conrelid = 17059


but this one does return true

select case when 18 in (1,2,18) then true else false end from 
pg_constraint where contype = 'p' and conrelid = 17059


How come the function does not work in the IN statement?  I tried 
casting it to a varchar, but that did not work either.



Thanks,

Tony

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Creation of tsearch2 index is very slow

2006-01-20 Thread Tom Lane
Martijn van Oosterhout kleptog@svana.org writes:
 The cost on inserting would generally go to either penalty, or
 picksplit. Certainly if you're inserting lots of values in a short
 interval, I can imagine picksplit being nasty, since the algorithms for
 a lot of datatypes are not really reknown for their speed.

Tut tut ... in the words of the sage, it is a capital mistake to
theorize in advance of the data.  You may well be right, but on the
other hand it could easily be something dumb like an O(N^2) loop over
a list structure.

I'll post some gprof numbers after Stephan sends me the dump.  We
should probably move the thread to someplace like pgsql-perform, too.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] what am I doing wrong with this query?

2006-01-20 Thread Tino Wildenhain
Tony Caduto schrieb:
 select  array_to_string(conkey,',') from pg_constraint where contype =
 'p' and conrelid = 17059
 
 returns a value of 1,2,18 for the array to string function,
 
 when I do this it does not return true:
 
 select case when 18 in (array_to_string(conkey,',')) then true else
 false end from pg_constraint where contype = 'p' and conrelid = 17059

When you try:

SELECT (array_to_string(conkey,',')); you will see
why. It basically produces:

('1,2,18'), which isn't at all equivalent to (1,2,18)


 but this one does return true
 
 select case when 18 in (1,2,18) then true else false end from
 pg_constraint where contype = 'p' and conrelid = 17059
 
 How come the function does not work in the IN statement?  I tried
 casting it to a varchar, but that did not work either.

It's all horribly wrong ;)
1.) Try to avoid arrays in favour of real tables - queries are
easy and fast

2.) If you can't avoid them, try WHERE 18 = ANY (conkey); or the like.
Look up the documentation for the exact syntax. Don't mix
text/char/varchar with integer values.
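Tino's second point written out (a sketch; it tests array membership directly instead of going through a string):

```sql
-- TRUE when attribute number 18 appears in the primary key's column list.
SELECT 18 = ANY (conkey) AS in_pkey
FROM pg_constraint
WHERE contype = 'p' AND conrelid = 17059;
```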

HTH
Tino

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] sequences not restoring properly

2006-01-20 Thread Brian Dimeler
I'm trying to transition a database from one server to another, the old one running Postgres 7.4.1 
and the new, 8.1.1. When I try to restore using a pg_dump plaintext sql file from a nightly backup 
via the usual


psql thedb < backup.sql

It works for the most part, but encounters several errors near the end when trying to create 
sequences. Also, it produces warnings about creating implicit sequences for tables with SERIAL 
keys, which strikes me as a bit odd because there's no need for implicit sequences when they're 
already explicitly defined in the database!


Looking back at the dump file though, I notice some discrepancies between what I see reported for 
the original database in phpPgAdmin and the sequences that are actually created. Specifically, it 
appears that any sequence that doesn't follow the naming convention postgres uses when 
auto-generating sequences, doesn't get created at all. Example:


I have a table 'bands' with a SERIAL primary key 'dbbandcode'. The sequence for this is defined in 
the original database as 'bands_dbcode_seq' and the default value for the key is:


 nextval('public.bands_dbcode_seq'::text)

In the database dump however, this default is omitted (and consequently, when restoring, the new 
server creates an implicit sequence named 'bands_dbbandcode_seq' instead of the correct 
bands_dbcode_seq, and it is never set to the correct value).


This happens for a few other tables too; basically anything that had its serial columns or tables 
renamed at some point doesn't get its sequences re-created.


So, why is this happening, and how do I fix it without having to manually modify the dump file 
before restoring? Is this just a bug in 7.4.1?


Thanks,
Brian


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] What is the maximum length of an IN(a,b,c....d) list in PostgreSQL

2006-01-20 Thread Michael Fuhr
On Fri, Jan 20, 2006 at 02:41:04PM +, frank church wrote:
 Is the value the actual length of the IN string, or is it the maximum number
 of comma-separated expressions?

The number of expressions.  If you set max_expr_depth to 10 then
ten 1-character values cause an error but nine 1-character
values should be fine (tested on my 7.4.11 system).  I think the
same applies to 8.0 and later with max_stack_depth: the limit depends
on the number of expressions, not on the lengths of the elements.
At least that's what my tests seem to show.

-- 
Michael Fuhr
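
A quick way to see the limit in action, assuming a 7.4 server where `max_expr_depth` still exists (on 8.0 and later the analogous knob is `max_stack_depth`); this is a sketch of the kind of test Michael describes, not a verbatim transcript of it:

```sql
SET max_expr_depth = 10;
-- Per the observation above, nine values should parse fine...
SELECT 1 WHERE 'a' IN ('a','b','c','d','e','f','g','h','i');
-- ...while ten should exceed the expression depth limit and error out:
SELECT 1 WHERE 'a' IN ('a','b','c','d','e','f','g','h','i','j');
```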

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] sequences not restoring properly

2006-01-20 Thread Doug McNaught
Brian Dimeler [EMAIL PROTECTED] writes:

 I'm trying to transition a database from one server to another, the
 old one running Postgres 7.4.1 and the new, 8.1.1. When I try to
 restore using a pg_dump plaintext sql file from a nightly backup via
 the usual

 psql thedb < backup.sql

The best way to do this is to run the 8.1.1 pg_dump against the 7.4.1
server.  The new pg_dump will know better how to create a backup that
8.1.1 will like.

-Doug
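
Concretely, assuming the 8.1.1 binaries can reach the old server over the network (the hostnames here are placeholders):

```shell
# Dump with the NEW (8.1.1) pg_dump, pointed at the OLD 7.4.1 server:
pg_dump -h oldhost thedb > backup.sql

# Then restore into the 8.1.1 server as before:
psql -h newhost thedb < backup.sql
```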

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] sequences not restoring properly

2006-01-20 Thread Brian Dimeler

Doing it that way doesn't produce any errors, but it still produces incorrect 
sequence names and values.

Doug McNaught wrote:


Brian Dimeler [EMAIL PROTECTED] writes:



I'm trying to transition a database from one server to another, the
old one running Postgres 7.4.1 and the new, 8.1.1. When I try to
restore using a pg_dump plaintext sql file from a nightly backup via
the usual

psql thedb < backup.sql



The best way to do this is to run the 8.1.1 pg_dump against the 7.4.1
server.  The new pg_dump will know better how to create a backup that
8.1.1 will like.

-Doug



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] sequences not restoring properly

2006-01-20 Thread Tom Lane
Brian Dimeler [EMAIL PROTECTED] writes:
 In the database dump however, this default is omitted (and consequently, when 
 restoring, the new 
 server creates an implicit sequence named 'bands_dbbandcode_seq' instead of 
 the correct 
 bands_dbcode_seq, and it is never set to the correct value).

 This happens for a few other tables too; basically anything that had its 
 serial columns or tables 
 renamed at some point doesn't get its sequences re-created.

Oh, they're created all right.  But they're created according to what
the name ought to be now given the new column name, and the setval()
commands in the old dump are wrong for that.

Per Doug's response, use the 8.1 pg_dump if you can, as it knows how to
generate setval() calls that can deal with this effect.  There's no
very good solution for it in 7.4 unfortunately --- if you want to use
the old pg_dump, you have to do the setvals by hand after loading the
dump.

Note that you'd have the same problem trying to reload that dump into
7.4 ...

regards, tom lane
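
For anyone stuck with the 7.4 pg_dump, the by-hand fix Tom mentions looks roughly like this (names taken from Brian's example; adjust to your own schema):

```sql
-- Re-point the column default at the hand-named sequence...
ALTER TABLE bands ALTER COLUMN dbbandcode
  SET DEFAULT nextval('public.bands_dbcode_seq');

-- ...and resynchronize its value with the data just loaded:
SELECT setval('public.bands_dbcode_seq',
              (SELECT max(dbbandcode) FROM bands));
```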

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] sequences not restoring properly

2006-01-20 Thread Joshua D. Drake

Brian Dimeler wrote:
Doing it that way doesn't produce any errors, but it still produces 
incorrect sequence names and values.
Are these sequences that you created by hand and then associated with a 
column? Versus using serial/bigserial types?


Sincerely,

Joshua D. Drake



Doug McNaught wrote:


Brian Dimeler [EMAIL PROTECTED] writes:



I'm trying to transition a database from one server to another, the
old one running Postgres 7.4.1 and the new, 8.1.1. When I try to
restore using a pg_dump plaintext sql file from a nightly backup via
the usual

psql thedb < backup.sql



The best way to do this is to run the 8.1.1 pg_dump against the 7.4.1
server.  The new pg_dump will know better how to create a backup that
8.1.1 will like.

-Doug



---(end of broadcast)---
TIP 6: explain analyze is your friend



--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PLphp, PLperl - http://www.commandprompt.com/


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] What is the maximum length of an IN(a,b,c....d) list in PostgreSQL

2006-01-20 Thread Tom Lane
Michael Fuhr [EMAIL PROTECTED] writes:
 The number of expressions.  If you set max_expr_depth to 10 then
 ten 1-character values cause an error but nine 1-character
 values should be fine (tested on my 7.4.11 system).  I think the
 same applies to 8.0 and later with max_stack_depth: the limit depends
 on the number of expressions, not on the lengths of the elements.
 At least that's what my tests seem to show.

Yeah, because the limit is associated with recursion depth in expression
processing.  The actual values of the strings are never on the stack,
only in the heap, so they're not going to affect it.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] what am I doing wrong with this query?

2006-01-20 Thread Tony Caduto
Never mind, I figured it out, I had to use ANY instead of IN, works fine 
now.


Thanks,

Tony

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] sequences not restoring properly

2006-01-20 Thread Brian Dimeler
I need to amend what I said before; apparently, the 8.1.1 dump is creating correct sequence values 
(except in one instance), but it's changing the names to match the tables and columns they go 
with. Which is nice, I suppose, except that I use an object-relational mapping API that requires 
hard-coding of sequence names. Oh well... will the new version of Postgres now prevent me from creating 
sequences by hand and associating them with tables, or renaming them, or renaming sequence columns? 
I sure hope so! Because if not, dumps should reflect any changes I've been able to make.


As for how the changes were made at first, to be honest, I don't remember. The vast majority of my 
tables were created with SERIAL columns initially and they retain the automatically-generated 
sequences that went with them.


I think what may have happened is that for a few tables, I decided to change the name of the serial 
column in question shortly after creating it; that's probably the case with the 'dbbandcode' example 
I posted.


In another case, I believe I had created an entirely new table ('items' and 'itemid'), but kept the 
original sequence from a previous table ('garments_garmentid_seq'), thinking I was going to use them 
in tandem, generating numbers for each that would not overlap. Unfortunately I had inadvertently 
left an auto-generated, but unused, items_itemid_seq in there too, so when 8.1.1 saw that it must 
have chucked garments_garmentid_seq and its value in favor of the one that appeared to match the 
table and column. The other tables had their values restored correctly, it's just that their *names* 
are now a little *too* 'correct'.


Brian

Joshua D. Drake wrote:


Brian Dimeler wrote:

Doing it that way doesn't produce any errors, but it still produces 
incorrect sequence names and values.


Are these sequences that you created by hand and then associated with a 
column? Versus using serial/bigserial types?


Sincerely,

Joshua D. Drake



Doug McNaught wrote:


Brian Dimeler [EMAIL PROTECTED] writes:



I'm trying to transition a database from one server to another, the
old one running Postgres 7.4.1 and the new, 8.1.1. When I try to
restore using a pg_dump plaintext sql file from a nightly backup via
the usual

psql thedb < backup.sql




The best way to do this is to run the 8.1.1 pg_dump against the 7.4.1
server.  The new pg_dump will know better how to create a backup that
8.1.1 will like.

-Doug



---(end of broadcast)---
TIP 6: explain analyze is your friend






---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Page-Level Encryption

2006-01-20 Thread David Blewett
I'm not sure if this is the right list for this message; if it's not, 
let me know and I'll take it up elsewhere. I found this thread today:

http://groups.google.com/group/comp.databases.postgresql.hackers/browse_thread/thread/4587283b3b3a5aec

I would be very interested if it's possible to encrypt data in 
Postgres, at a lower level than individual columns but not as low as 
the filesystem. I.e., either be able to encrypt a single database or a 
single table but still be able to use normal SQL against it.


I'm designing an IMAP server that will be using Peter Gutmann's 
Cryptlib to encrypt the message bodies using different keys for each 
user, and storing it as a binary large object in Postgres. However, I 
still would like to do full-text indexing of the mail. I would index 
the message, then encrypt it and store it in the database. This leaves 
the fulltext index open to attack, however. While the complete message 
would probably not be reproducible (someone correct me?), a significant 
portion of it probably could.


Having the table containing the index, or the database object, 
encrypted would protect against system admins, or admins of the 
postgres installation snooping through the table. Ideally, you would 
specify a passphrase on startup of the daemon to allow it to initialize 
that database. This would protect the data from access while the 
database was shutdown, but the system is still running. Or, it could be 
tied to the user accounts in Postgres.


For example, in my server I'm going to implement it so that when the 
user is created, a public/private key pair is generated with their 
passphrase. Then when a message is received for them, encrypt it with 
their public key. When they log in, their passphrase unlocks their 
private key enabling the server to decrypt their messages and send them 
along. Maybe Postgres users could be modified to act similarly: any 
objects the user creates get encrypted with their public key, and only 
when they log in can they be decrypted.


Anyway, I would like some discussion about the possibilites of adding 
this to Postgres.


David
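
As an aside on feasibility: the per-user key scheme described above maps fairly directly onto the PGP functions in 8.1's contrib/pgcrypto module. Everything below is a hypothetical sketch — the `mail` and `userkeys` tables and the externally generated key pairs are assumptions, not anything from the thread:

```sql
-- Store: encrypt the message body with the user's public key
-- (key pairs generated outside the database, e.g. with GnuPG,
-- and kept ASCII-armored in a hypothetical userkeys table):
INSERT INTO mail (userid, body)
VALUES ('alice',
        pgp_pub_encrypt('message text',
            dearmor((SELECT pubkey FROM userkeys WHERE userid = 'alice'))));

-- Fetch: the user's passphrase unlocks the private key at login:
SELECT pgp_pub_decrypt(body,
           dearmor((SELECT privkey FROM userkeys WHERE userid = 'alice')),
           'alice passphrase')
  FROM mail
 WHERE userid = 'alice';
```

Storing the armored private key in the same database is only illustrative; per the rest of the thread, the key material would be better kept client-side.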



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread Scott Marlowe
On Fri, 2006-01-20 at 14:24, David Blewett wrote:
 I'm not sure if this is the right list for this message; if it's not, 
 let me know and I'll take it up elsewhere. I found this thread today:
 http://groups.google.com/group/comp.databases.postgresql.hackers/browse_thread/thread/4587283b3b3a5aec
 
 I would be very interested if it's possible to encrypt data in 
 Postgres, at a lower level than individual columns but not as low as 
 the filesystem. I.e., either be able to encrypt a single database or a 
 single table but still be able to use normal SQL against it.
 
 I'm designing an IMAP server that will be using Peter Gutmann's 
 Cryptlib to encrypt the message bodies using different keys for each 
 user, and storing it as a binary large object in Postgres. However, I 
 still would like to do full-text indexing of the mail. I would index 
 the message, then encrypt it and store it in the database. This leaves 
 the fulltext index open to attack, however. While the complete message 
 would probably not be reproducible (someone correct me?), a significant 
 portion of it probably could.
 
 Having the table containing the index, or the database object, 
 encrypted would protect against system admins,

IF they've got root, and the unencrypted data or the password / key is
on the machine or in memory on it, you've lost.  It may make it harder
for them to get it, but they can.  

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread Joshua D. Drake



IF they've got root, and the unencrypted data or the password / key is
on the machine or in memory on it, you've lost.  It may make it harder
for them to get it, but they can.  
  

This is true but in answer to your question you can use something like
cryptfs. Note that you will lose performance.

Joshua D. Drake


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match
  



--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PLphp, PLperl - http://www.commandprompt.com/


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread David Blewett

Quoting Scott Marlowe [EMAIL PROTECTED]:


Having the table containing the index, or the database object,
encrypted would protect against system admins,


IF they've got root, and the unencrypted data or the password / key is
on the machine or in memory on it, you've lost.  It may make it harder
for them to get it, but they can.



The password shouldn't be saved anywhere, it should be entered manually 
when the application starts. Or, only store it on secure removable 
media. But it would be better than the options that exist today. You're 
right; there is no perfect security, especially when the box has been 
rooted. They would have to get root while the machine is powered on, 
the database engine is running, and the user was authenticated and 
logged in. It might be possible to implement a kill switch, where 
upon receipt of a signal the user would be logged out and the memory 
scrubbed of the private key data.


David



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread David Blewett

Quoting Joshua D. Drake [EMAIL PROTECTED]:




IF they've got root, and the unencrypted data or the password / key is
on the machine or in memory on it, you've lost.  It may make it harder
for them to get it, but they can.

This is true but in answer to your question you can use something like
cryptfs. Note that you will lose performance.

Joshua D. Drake



I'm looking for something that runs *inside* of Postgres, at a higher 
level than a loop-back encrypted volume. This way, it would only be 
available when the database engine was running, and ideally only 
accessible to an authenticated/logged in user.


David



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread Joshua D. Drake



This is true but in answer to your question you can use something like
cryptfs. Note that you will lose performance.

Joshua D. Drake



I'm looking for something that runs *inside* of Postgres, at a higher 
level than a loop-back encrypted volume. This way, it would only be 
available when the database engine was running, and ideally only 
accessible to an authenticated/logged in user.
Nothing that I know of that would work without custom development. There 
are of course plenty of libraries.


Sincerely,

Joshua D. Drake



David



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq



--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PLphp, PLperl - http://www.commandprompt.com/


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread David Blewett

Quoting Joshua D. Drake [EMAIL PROTECTED]:




This is true but in answer to your question you can use something like
cryptfs. Note that you will lose performance.

Joshua D. Drake



I'm looking for something that runs *inside* of Postgres, at a 
higher level than a loop-back encrypted volume. This way, it would 
only be available when the database engine was running, and ideally 
only accessible to an authenticated/logged in user.
Nothing that I know of that would work without custom development. 
There are of course plenty of libraries.


Sincerely,

Joshua D. Drake



Just as an aside, this paper is rather interesting:
http://people.csail.mit.edu/akiezun/encrypted-search-report.pdf

Instead of needing the table to be decrypted for ordinary access via 
SQL, they suggest a new method of indexing encrypted material. In it, 
they talk about a system (Mofdet) they've developed, but I was unable 
to locate anything online regarding it.


David




---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread Scott Marlowe
On Fri, 2006-01-20 at 14:47, David Blewett wrote:
 Quoting Scott Marlowe [EMAIL PROTECTED]:
 
  Having the table containing the index, or the database object,
  encrypted would protect against system admins,
 
  IF they've got root, and the unencrypted data or the password / key is
  on the machine or in memory on it, you've lost.  It may make it harder
  for them to get it, but they can.
 
 
 The password shouldn't be saved anywhere, it should be entered manually 
 when the application starts. Or, only store it on secure removable 
 media.

But it will be in memory, and root can get anything out of memory that
they want to.

The only way to keep root on this box from getting it is to put the key
on another box that the administrator doesn't have access to.  I.e. on
the client side of postgresql.  If you want to keep root out of your
data on your server, you encrypt it before you give it to the server.

Then, you can set up a different box with the keys on it that encrypts /
decrypts the data there.  

Now, if you can trust root, then you can do the encryption / decryption
on the same box as postgresql.  

Now, having to enter a password by hand onto that box will keep the data
secure should the box itself be stolen.

I'd suggest defining the threat model well first, then defining the
encryption you'll need based on that.

 But it would be better than the options that exist today.

There is secure, and there are levels of insecure.  Changing which level
of insecure you have doesn't give you secure.

 You're 
 right; there is no perfect security, especially when the box has been 
 rooted. They would have to get root while the machine is powered on, 
 the database engine is running, and the user was authenticated and 
 logged in.

True.  If you trust the REAL sysadmins, but are afraid of the box being
stolen, then your methodology is secure.  Because you can trust the real
admins.

If you don't trust your real sysadmins on that box, then you need to do
your encryption elsewhere.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread Scott Marlowe
On Fri, 2006-01-20 at 14:58, David Blewett wrote:
 Quoting Joshua D. Drake [EMAIL PROTECTED]:
 
 
  This is true but in answer to your question you can use something like
  cryptfs. Note that you will lose performance.
 
  Joshua D. Drake
 
 
  I'm looking for something that runs *inside* of Postgres, at a 
  higher level than a loop-back encrypted volume. This way, it would 
  only be available when the database engine was running, and ideally 
  only accessible to an authenticated/logged in user.
  Nothing that I know of that would work without custom development. 
  There are of course plenty of libraries.
 
  Sincerely,
 
  Joshua D. Drake
 
 
 Just as an aside, this paper is rather interesting:
 http://people.csail.mit.edu/akiezun/encrypted-search-report.pdf
 
 Instead of needing the table to be decrypted for ordinary access via 
 SQL, they suggest a new method of indexing encrypted material. In it, 
 they talk about a system (Mofdet) they've developed, but I was unable 
 to locate anything online regarding it.

Please note that there's an awful lot of snake oil for sale in the
encryption market.  Not saying that's what this is, I haven't read it. 
And MIT tends to be pretty spot on, so I'm not talking about this
particular encryption program.  Just be careful about unsubstantiated
claims, as there are plenty of systems that are little more than
three-card monte games out there.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread David Blewett

Quoting Scott Marlowe [EMAIL PROTECTED]:


On Fri, 2006-01-20 at 14:47, David Blewett wrote:

Quoting Scott Marlowe [EMAIL PROTECTED]:

 Having the table containing the index, or the database object,
 encrypted would protect against system admins,

 IF they've got root, and the unencrypted data or the password / key is
 on the machine or in memory on it, you've lost.  It may make it harder
 for them to get it, but they can.


The password shouldn't be saved anywhere, it should be entered manually
when the application starts. Or, only store it on secure removable
media.


But it will be in memory, and root can get anything out of memory that
they want to.

The only way to keep root on this box from getting it is to put the key
on another box that the administrator doesn't have access to.  I.e. on
the client side of postgresql.  If you want to keep root out of your
data on your server, you encrypt it before you give it to the server.

Then, you can set up a different box with the keys on it that encrypts /
decrypts the data there.



In reading the documentation of Peter Gutmann's Cryptlib, I came across 
this section:
"The use of crypto devices can also complicate key management, since
keys generated or loaded into the device usually can't be extracted
again afterwards. This is a security feature that makes external access
to the key impossible, and works in the same way as cryptlib's own
storing of keys inside its security perimeter. This means that if you
have a crypto device that supports (say) DES and RSA encryption, then
to export an encrypted DES key from a context stored in the device, you
need to use an RSA context also stored inside the device, since a
context located outside the device won't have access to the DES
context's key."


I'm not familiar with how his library protects keys, but this suggests 
that it would be possible to use it as a basis for transparent 
encryption. He later writes, "There can be a significant difference
between theoretical and effective security. In theory, we should all be
using smart cards and PKI for authentication. However, these measures
are so painful to deploy and use that they're almost never employed,
making them far less effectively secure than basic usernames and
passwords. Security experts tend to focus exclusively on the measures
that provide the best (theoretical) security, however sometimes these
measures provide very little effective security because they end up
being misused, or turned off, or bypassed."


David



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread Doug McNaught
David Blewett [EMAIL PROTECTED] writes:

 In reading the documentation of Peter Gutmann's Cryptlib, I came
 across this section:
 The use of crypto devices can also complicate key management, since
 keys generated or loaded into the device usually can't be extracted
 again afterwards. This is a security feature that makes external
 access to the key impossible, and works in the same way as cryptlib's
 own storing of keys inside it's security perimeter. This means that if
 you have a crypto device that supports (say) DES and RSA encryption,
 then to export an encrypted DES key from a context stored in the
 device, you need to use an RSA context also stored inside the device,
 since a context located outside the device won't have access to the
 DES context's key.

 I'm not familiar with how his library protects keys, but this suggests
 that it would be possible to use it as a basis for transparent
 encryption.

He's talking about hardware crypto devices, which most systems don't
have (though they're certainly available).  If you don't have one of
those, then the key has to be stored in system memory.

-Doug

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread Bruce Momjian
Doug McNaught wrote:
 David Blewett [EMAIL PROTECTED] writes:
 
  In reading the documentation of Peter Gutmann's Cryptlib, I came
  across this section:
  The use of crypto devices can also complicate key management, since
  keys generated or loaded into the device usually can't be extracted
  again afterwards. This is a security feature that makes external
  access to the key impossible, and works in the same way as cryptlib's
  own storing of keys inside it's security perimeter. This means that if
  you have a crypto device that supports (say) DES and RSA encryption,
  then to export an encrypted DES key from a context stored in the
  device, you need to use an RSA context also stored inside the device,
  since a context located outside the device won't have access to the
  DES context's key.
 
  I'm not familiar with how his library protects keys, but this suggests
  that it would be possible to use it as a basis for transparent
  encryption.
 
 He's talking about hardware crypto devices, which most systems don't
 have (though they're certainly available).  If you don't have one of
 those, then the key has to be stored in system memory.

FYI, we do have a general encryption documentation section:

http://www.postgresql.org/docs/8.1/static/encryption-options.html


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073
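
From that documentation section, the contrib/pgcrypto route for column-level encryption can be sketched like this (a minimal example; Scott's caveat still applies, since the passphrase transits to, and lives briefly in, server memory):

```sql
-- Symmetric encryption of a single column via contrib/pgcrypto:
CREATE TABLE secret_mail (id serial PRIMARY KEY, body bytea);

INSERT INTO secret_mail (body)
VALUES (pgp_sym_encrypt('the message text', 'a passphrase'));

SELECT pgp_sym_decrypt(body, 'a passphrase') FROM secret_mail;
```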

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq




[GENERAL] panic on 7.3

2006-01-20 Thread Rick Gigger

I got this message:

2006-01-20 11:50:51 PANIC:  creation of file /var/lib/pgsql/data/ 
pg_clog/0292 failed: File exists


In 7.3.  It caused the server to restart.

Can anyone tell me what it means?


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread Jim C. Nasby
I would highly recommend taking a look at how Oracle is handling
encryption in the database in 10.2 (or whatever they're calling it).
They've done a good job of thinking out how to handle things like
managing the keys.

I know that Oracle magazine did an article on it recently; you should be
able to find that online somewhere.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread Bricklen Anderson

Jim C. Nasby wrote:

I would highly recommend taking a look at how Oracle is handling
encryption in the database in 10.2 (or whatever they're calling it).
They've done a good job of thinking out how to handle things like
managing the keys.

I know that Oracle magazine did an article on it recently; you should be
able to find that online somewhere.


This link?
http://www.oracle.com/technology/oramag/oracle/05-sep/o55security.html

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Strange errors after some DB problems

2006-01-20 Thread Dominic J. Eidson

Earlier today we experienced some problems with one of our PG
installations - running 8.0.3.

It started with the DB's write performance being fairly slow (this is how
we noticed it), and after some research, I was seeing several of the
backend processes growing in their memory usage, to someplace around 4-6GB
RSS. (Machine has 8GB +1GB swap). So then they would swap-thrash until the
kernel killed off a process, at which point I'd be able to issue a pg_ctl
shutdown.

Looking in the logs after we got the machine back to where it's
responsive, I saw the following errors in the log (these are all from
today):

ERROR:  relation with OID 97737136 does not exist
CONTEXT:  SQL statement INSERT INTO _netadmin.sl_log_1 (log_origin,
log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) VALUES (1,
$1, $2, nextval('_netadmin.sl_action_seq'), $3, $4);
ERROR:  xlog flush request 33/553D66E0 is not satisfied --- flushed only
to 32/FDECF4D8
CONTEXT:  writing block 4945 of relation 1663/17230/96228095
ERROR:  xlog flush request 33/553D66E0 is not satisfied --- flushed only
to 32/FDECF4D8
CONTEXT:  writing block 4945 of relation 1663/17230/96228095
WARNING:  could not write block 4945 of 1663/17230/96228095
DETAIL:  Multiple failures --- write error may be permanent.

.. these occur several times - the first one seems to occur ever since we
enabled slony-1 on some replication sets on the server. (_netadmin.sl* is
slony stuff). The latter error, I'm not sure what would cause it.

At one point the following errors show up:

ERROR:  could not open segment 1 of relation 1663/17230/96242110 (target
block 61997056): No such file or directory
ERROR:  could not open segment 1 of relation 1663/17230/96242110 (target
block 61997056): No such file or directory
ERROR:  could not open segment 1 of relation 1663/17230/96242110 (target
block 775304242): No such file or directory
ERROR:  could not open segment 1 of relation 1663/17230/96242110 (target
block 1680881205): No such file or directory
ERROR:  could not open segment 1 of relation 1663/17230/96242110 (target
block 1680881205): No such file or directory

.. several more lines, with different target block numbers

At one point, when trying to run a vacuum on one of the tables, we got the
following errors:

2006-01-20 13:06:01 CST [local] WARNING:  relation inv_node page 4947 is
uninitialized --- fixing
2006-01-20 13:06:01 CST [local] WARNING:  relation inv_node page 4948 is
uninitialized --- fixing
2006-01-20 13:06:01 CST [local] WARNING:  relation inv_node page 4949 is
uninitialized --- fixing
2006-01-20 13:06:01 CST [local] WARNING:  relation inv_node page 4951 is
uninitialized --- fixing
2006-01-20 13:06:01 CST [local] WARNING:  relation inv_node page 4952 is
uninitialized --- fixing

... keeps going 

2006-01-20 13:06:01 CST [local] WARNING:  relation inv_node page 11959
is uninitialized --- fixing
2006-01-20 13:06:01 CST [local] WARNING:  relation inv_node page 11992
is uninitialized --- fixing
2006-01-20 13:06:01 CST [local] WARNING:  relation inv_node page 12118
is uninitialized --- fixing
2006-01-20 13:06:04 CST [local] ERROR:  failed to re-find parent key in
inv_node_node_mac_key

(inv_node_node_mac_key is the primary index on the inv_node table.)

When looking closer at the table (and some other tables), we found that
despite having UNIQUE indices on the tables, several of them had duplicate
keys for the index field.

We are currently in the process of cleaning up after the mess, but since
this is a production system, we want to try to find out what happened.

Several people online had mentioned either being out of disk space, or
drive problems - the DB is on a 300GB partition, using barely 10GB of disk
space - and the server doesn't show any indications of there being
hardware problems...

I can provide you with the full log (616K, ~13k lines) upon request.


 - d.

-- 
Dominic J. Eidson
Baruk Khazad! Khazad ai-menu! - Gimli
---
   http://www.the-infinite.org/





Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread Marko Kreen
On 1/21/06, Bricklen Anderson [EMAIL PROTECTED] wrote:
 Jim C. Nasby wrote:
  I would highly recommend taking a look at how Oracle is handling
  encryption in the database in 10.2 (or whatever they're calling it).
  They've done a good job of thinking out how to handle things like
  managing the keys.
 
  I know that Oracle magazine did an article on it recently; you should be
  able to find that online somewhere.

 This link?
 http://www.oracle.com/technology/oramag/oracle/05-sep/o55security.html

Two points about it:

1) Their threat model is very clear - someone gets the backup.
2) They have focused on usability from inside the database.

That's all good, but IMHO such a threat is more profitably solved
by simply feeding pg_dump output to GnuPG.  This has one important
advantage over the Oracle solution - no secret key is needed for
regular operation.  It is only needed for the restore operation.
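In shell terms this is just `pg_dump mydb | gpg -e -r backup-key > mydb.sql.gpg`
(database name and GnuPG recipient are placeholders for your own setup).  The
same two-stage pipeline can be sketched in Python, e.g. for a cron script;
this is only an illustration of the producer|consumer plumbing, not a complete
backup tool:

```python
import subprocess

def pipe_to_file(producer, consumer, outfile):
    """Run `producer | consumer > outfile`, e.g.
    producer=["pg_dump", "mydb"] and
    consumer=["gpg", "--encrypt", "-r", "backup-key"]."""
    p1 = subprocess.Popen(producer, stdout=subprocess.PIPE)
    with open(outfile, "wb") as f:
        p2 = subprocess.Popen(consumer, stdin=p1.stdout, stdout=f)
        p1.stdout.close()  # so the producer sees SIGPIPE if the consumer dies
        if p2.wait() != 0 or p1.wait() != 0:
            raise RuntimeError("backup pipeline failed")

# pipe_to_file(["pg_dump", "mydb"],
#              ["gpg", "--encrypt", "-r", "backup-key"],
#              "mydb.sql.gpg")
```

Only the public key is needed here; the private key stays offline until a
restore is actually required.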

--
marko



[GENERAL] standard normal cumulative distribution function

2006-01-20 Thread SunWuKung

Does somebody have/know of a function for pg returning the standard 
normal cumulative distribution for a Z score?

Thanks.
Balazs



Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread Chris Browne
[EMAIL PROTECTED] (Joshua D. Drake) writes:
 IF they've got root, and the unencrypted data or the password / key is
 on the machine or in memory on it, you've lost.  It may make it harder
 for them to get it, but they can.

 This is true but in answer to your question you can use something like
cryptfs. Note that you will lose performance.

cryptfs doesn't forcibly help, because Someone Nefarious who has root
can connect to the box, and get access to the unencrypted mount point
that the postmaster is connected to.
-- 
let name=cbbrowne and tld=acm.org in String.concat @ [name;tld];;
http://cbbrowne.com/info/spreadsheets.html
When you awake, you will remember nothing of what I have told you.



Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread Marko Kreen
On 1/20/06, David Blewett [EMAIL PROTECTED] wrote:
 I'm not sure if this is the right list for this message; if it's not,
 let me know and I'll take it up elsewhere. I found this thread today:
 http://groups.google.com/group/comp.databases.postgresql.hackers/browse_thread/thread/4587283b3b3a5aec

 I would be very interested if it's possible to encrypt data in
 Postgres, at a lower level than individual columns but not as low as
 the filesystem. I.e., either be able to encrypt a single database or a
 single table but still be able to use normal SQL against it.

 I'm designing an IMAP server that will be using Peter Gutmann's
 Cryptlib to encrypt the message bodies using different keys for each
 user, and storing it as a binary large object in Postgres. However, I
 still would like to do full-text indexing of the mail. I would index
 the message, then encrypt it and store it in the database. This leaves
 the fulltext index open to attack, however. While the complete message
 would probably not be reproducible (someone correct me?), a significant
 portion of it probably could.

First two general points:
- If your threat model includes database superusers and machine root,
  forget server-side encryption.  You need to encrypt at the client
  side or get a trusted box.
- If your solution goes in the direction of using one key over a whole
  table, use cryptoloop or similar.

Now your concrete proposal:
- Why does granting restrictive permissions and using views, where a user
  can see only their own data, not work for you?
- A full-text index is going to be a pain - you need to restrict users
  from seeing the full table.

Ah, one more:
- Page-level and per-user encryption do not mix; you need to make up your mind.

 Having the table containing the index, or the database object,
 encrypted would protect against system admins, or admins of the
 postgres installation snooping through the table. Ideally, you would
 specify a passphrase on startup of the daemon to allow it to initialize
 that database. This would protect the data from access while the
 database was shutdown, but the system is still running. Or, it could be
 tied to the user accounts in Postgres.

Don't give admin rights to untrusted people.

 For example, in my server I'm going to implement it so that when the
 user is created, a public/private key pair is generated with their
 passphrase. Then when a message is received for them, encrypt it with
 their public key. When they log in, their passphrase unlocks their
 private key enabling the server to decrypt their messages and send them
 along. Maybe Postgres users could be modified to act similarly: any
 objects the user creates get encrypted with their public key, and only
 when they log in can they be decrypted.

 Anyway, I would like some discussion about the possibilites of adding
 this to Postgres.

Well, starting from 8.1, contrib/pgcrypto does public-private key
encryption, including password-protected private keys (OpenPGP).
No keygen though, so you need to create keys externally.

You could build something on it.

--
marko



Re: [GENERAL] Creation of tsearch2 index is very slow

2006-01-20 Thread Tom Lane
Stephan Vollmer [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 However, I'm not sure that anyone's tried to do any performance
 optimization on the GIST insert code ... there might be some low-hanging
 fruit there.

 Unfortunately, I'm not able to investigate it further myself as I'm
 quite a Postgres newbie. But I could provide someone else with the
 example table. Maybe someone else could find out why it is so slow.

The problem seems to be mostly tsearch2's fault rather than the general
GIST code.  I've applied a partial fix to 8.1 and HEAD branches, which
you can find here if you're in a hurry for it:
http://archives.postgresql.org/pgsql-committers/2006-01/msg00283.php
(the gistidx.c change is all you need for tsearch2)

There is some followup discussion in the pgsql-performance list.  It
seems possible that we can get another factor of 10 or better with a
smarter picksplit algorithm --- but that patch will probably be too
large to be considered for back-patching into the stable branches.

regards, tom lane



Re: [GENERAL] Page-Level Encryption

2006-01-20 Thread Jim C. Nasby
On Fri, Jan 20, 2006 at 02:06:18PM -0800, Bricklen Anderson wrote:
 Jim C. Nasby wrote:
 I would highly recommend taking a look at how Oracle is handling
 encryption in the database in 10.2 (or whatever they're calling it).
 They've done a good job of thinking out how to handle things like
 managing the keys.
 
 I know that Oracle magazine did an article on it recently; you should be
 able to find that online somewhere.
 
 This link?
 http://www.oracle.com/technology/oramag/oracle/05-sep/o55security.html

Yup.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [GENERAL] standard normal cumulative distribution function

2006-01-20 Thread Michael Fuhr
On Sat, Jan 21, 2006 at 12:00:49AM +0100, SunWuKung wrote:
 Does somebody have/know of a function for pg returning the standard 
 normal cumulative distribution for a Z score?

Are you looking for something like this?

test=> SELECT z, cdf_ugaussian_p(z) FROM generate_series(-3, 3) AS g(z);
 z  |   cdf_ugaussian_p   
+-
 -3 | 0.00134989803163009
 -2 |  0.0227501319481792
 -1 |   0.158655253931457
  0 | 0.5
  1 |   0.841344746068543
  2 |   0.977249868051821
  3 |0.99865010196837
(7 rows)

cdf_ugaussian_p() is just a little wrapper I put around the GNU
Scientific Library's gsl_cdf_ugaussian_P() function.  I can post
an example of how to do that if it's what you're looking for.

-- 
Michael Fuhr



Re: [GENERAL] standard normal cumulative distribution function

2006-01-20 Thread SunWuKung
This is exactly what I was looking for, could you post that please.
thx
B.

 On Sat, Jan 21, 2006 at 12:00:49AM +0100, SunWuKung wrote:
  Does somebody have/know of a function for pg returning the standard 
  normal cumulative distribution for a Z score?
 
 Are you looking for something like this?
 
 test=> SELECT z, cdf_ugaussian_p(z) FROM generate_series(-3, 3) AS g(z);
  z  |   cdf_ugaussian_p   
 +-
  -3 | 0.00134989803163009
  -2 |  0.0227501319481792
  -1 |   0.158655253931457
   0 | 0.5
   1 |   0.841344746068543
   2 |   0.977249868051821
   3 |0.99865010196837
 (7 rows)
 
 cdf_ugaussian_p() is just a little wrapper I put around the GNU
 Scientific Library's gsl_cdf_ugaussian_P() function.  I can post
 an example of how to do that if it's what you're looking for.
 
 



Re: [GENERAL] How to fetch rows with multiple values

2006-01-20 Thread Keary Suska
on 1/20/06 9:08 AM, [EMAIL PROTECTED] purportedly said:

 No, because I need AND operator between the terms.
 
 Thanks anyway :)

Got it. Being thick. Just so I can save face, it may be more efficient to
do:

SELECT (min(ID) = avg(ID)) AS result, min(ID) as ID FROM
customer_mapping WHERE Name='john' or Name='peter'

This only works for one set, but you can chain them in various ways to get
multiple results. Your app would have to check the result though.
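For an arbitrary number of names, the "ID must match all names" requirement
can also be expressed with GROUP BY/HAVING, which composes naturally with
ORDER BY, OFFSET and LIMIT.  A quick sketch - run here against an in-memory
SQLite copy of the sample data purely so it is self-contained; the SQL itself
should be plain enough to run on Postgres as well:

```python
import sqlite3

# In-memory stand-in for the customer_mapping table from the thread.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customer_mapping (Name varchar(128) NOT NULL,"
           " ID int NOT NULL)")
db.executemany("INSERT INTO customer_mapping VALUES (?, ?)",
               [("john", 1), ("peter", 1), ("test", 2), ("george", 3)])

def ids_with_all_names(names):
    # An ID qualifies only if every requested name maps to it:
    # count the distinct matching names per ID and require all of them.
    placeholders = ",".join("?" * len(names))
    sql = ("SELECT ID FROM customer_mapping"
           " WHERE Name IN (%s)"
           " GROUP BY ID"
           " HAVING COUNT(DISTINCT Name) = ?"
           " ORDER BY ID" % placeholders)
    return [row[0] for row in db.execute(sql, (*names, len(names)))]

print(ids_with_all_names(["john", "peter"]))  # -> [1]
```

Unlike the chained-INTERSECT form, OFFSET/LIMIT can simply be appended to the
single statement.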

 On 1/20/06, Keary Suska [EMAIL PROTECTED] wrote:
 
 on 1/20/06 6:19 AM, [EMAIL PROTECTED] purportedly said:
 
 I have a table like this:
 
 CREATE TABLE customer_mapping
 (
 Name varchar(128) NOT NULL,
 ID int8 NOT NULL
 )
 
 Data looks something like this:
 
 john    1
 peter   1
 test    2
 george  3
 
 What I would like is to write a query where I can specify multiple names
 and
 get the IDs which have them.
 
 For now it seems the most efficient way is to use INTERSECT statement:
 
 SELECT ID from customer_mapping WHERE Name='john'
 INTERSECT
 SELECT ID from customer_mapping WHERE Name='peter'
 
 Although, I don't know how exactly to use ORDER, OFFSET and LIMIT in
 this
 case...
 
 Anyway, is there any better way of doing this? (I can't change the table
 structure.)
 
 Maybe I'm a little thick this morning but can't you just do:
 
 SELECT ID from customer_mapping WHERE Name='john' OR Name='peter' OR
 Name='george' ORDER BY ID DESC


Keary Suska
Esoteritech, Inc.
Demystifying technology for your home or business




Re: [GENERAL] standard normal cumulative distribution function

2006-01-20 Thread Michael Fuhr
On Sat, Jan 21, 2006 at 01:01:28AM +0100, SunWuKung wrote:
 This is exactly what I was looking for, could you post that please.

The instructions that follow are for building a function written
in C on Unix-like systems; if you're on another platform like Windows
then I don't know what steps you'll have to follow.  The instructions
also rely on the PGXS build infrastructure available in PostgreSQL
8.0 and later, although the Makefile should be trivial to modify
to work with 7.4 and earlier.  If you have any trouble building or
installing the code then you might wish to read "C-Language Functions"
and especially "Compiling and Linking Dynamically-Loaded Functions"
and "Extension Building Infrastructure" in the documentation.

http://www.postgresql.org/docs/8.1/interactive/xfunc-c.html
http://www.postgresql.org/docs/8.1/interactive/xfunc-c.html#DFUNC
http://www.postgresql.org/docs/8.1/interactive/xfunc-c.html#XFUNC-C-PGXS

1. Install the GNU Scientific Library (GSL), either from a package
or by building it from source.

http://www.gnu.org/software/gsl/

We'll wrap a single GSL function, gsl_cdf_ugaussian_P(), and call
it cdf_ugaussian_p().  If you want to call it something else then
substitute your own name where I've written cdf_ugaussian_p.  This
example should also be useful as a template if you want to create
a PostgreSQL interface to other functions.

2. Create a directory for the code we're going to build; let's call
it pg_gsl.

mkdir pg_gsl
cd pg_gsl

3. Put the following C code in a file named pg_gsl.c:

#include "postgres.h"
#include "fmgr.h"
#include <gsl/gsl_cdf.h>

Datum  cdf_ugaussian_p(PG_FUNCTION_ARGS);

PG_FUNCTION_INFO_V1(cdf_ugaussian_p);

Datum
cdf_ugaussian_p(PG_FUNCTION_ARGS)
{
PG_RETURN_FLOAT8(gsl_cdf_ugaussian_P(PG_GETARG_FLOAT8(0)));
}

4. Put the following SQL in a file named pg_gsl.sql.in (the build
process will create a file named pg_gsl.sql, substituting MODULE_PATHNAME
with an appropriate value):

CREATE OR REPLACE FUNCTION cdf_ugaussian_p(double precision)
RETURNS double precision
AS 'MODULE_PATHNAME' LANGUAGE C IMMUTABLE STRICT;

5. Put the following in a file named Makefile:

MODULE_big  = pg_gsl
OBJS= pg_gsl.o
DATA_built  = pg_gsl.sql

PG_CPPFLAGS = `gsl-config --cflags`
SHLIB_LINK  = `gsl-config --libs`

PGXS := $(shell pg_config --pgxs)
include $(PGXS)

6. Build the code by running make.  This needs to be GNU Make,
which on some systems is gmake.  If you have any trouble with
this step then read the documentation mentioned above; if you still
can't get it to work then please post the exact output from this
command.

gmake

The result should be a shared object with a name like libpg_gsl.so
(possibly with a different suffix, and there might be another file
with a similar name like libpg_gsl.so.0).

7. Install the code so PostgreSQL can find it.  You might need to
become root or another user to do this if you don't have permission
to write to the PostgreSQL directories.

gmake install

8. Create the function in your database.  Since this is a C function
you'll need to do this as a database superuser.

psql -U postgres -d dbname -f pg_gsl.sql

9. Test the function:

psql dbname

dbname=> select cdf_ugaussian_p(0);
 cdf_ugaussian_p 
-
 0.5
(1 row)

dbname=> select cdf_ugaussian_p(1);
  cdf_ugaussian_p  
---
 0.841344746068543
(1 row)

dbname=> select cdf_ugaussian_p(-1);
  cdf_ugaussian_p  
---
 0.158655253931457
(1 row)

That's it.  If you have trouble then please post what step you were
at, exactly what command you ran, the exact output you got, what
platform you're on, and what version of PostgreSQL you're using.

Now that I've written all this I suppose I could turn it into a
PgFoundry project.  Would there be any interest in a PostgreSQL
interface to the GNU Scientific Library?  Or has somebody already
done that and I simply overlooked it?
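As an aside, the wrapper's output is easy to cross-check without GSL, since
the standard normal CDF reduces to the error function:
Phi(z) = (1 + erf(z / sqrt(2))) / 2.  For example, in Python:

```python
import math

def ugaussian_cdf(z):
    """Standard normal CDF via the error function:
    Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# ugaussian_cdf(0)  ~ 0.5
# ugaussian_cdf(1)  ~ 0.841344746...  (agrees with gsl_cdf_ugaussian_P)
for z in range(-3, 4):
    print(z, ugaussian_cdf(z))
```

The values should agree with the gsl_cdf_ugaussian_P() results above to
within floating-point rounding.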

-- 
Michael Fuhr



Re: [GENERAL] Strange error

2006-01-20 Thread Tom Lane
Csaba Nagy [EMAIL PROTECTED] writes:
 ERROR:  could not access status of transaction 0
 DETAIL:  could not create file /postgresdata/pg_subtrans/04E7: Die
 Datei existiert bereits

The German text means "the file already exists".

I think we've finally identified the reason for this:
http://archives.postgresql.org/pgsql-committers/2006-01/msg00287.php

regards, tom lane
