Re: [GENERAL] When to encrypt

2004-12-06 Thread gnari
From: Derek Fountain [EMAIL PROTECTED]

 [snip discussion about encrypting data]

 Indeed, but I'm still interested in the general answer. The server I have
 been looking at was hopelessly insecure and SQL injection is only one of its
 problems. There were several other ways in! Assume, for example, an attacker
 can write his own script directly into the website document tree. In this
 case prepared queries don't help protect what's in the database. The attacker
 can use them himself if he likes!

For encrypted data to be usable by the website, the keys must be available
to it, either in the database or in the scripts themselves. If the attacker
can write his own scripts into the document tree, these keys will be
available to him as well.

gnari




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Older Windows versions

2004-12-06 Thread Richard Huxton
Leonardo Mateo wrote:
Hi, I need an older version of PostgreSQL for Windows since I,
unfortunately, have to develop a Win32 application with Qt Non-Commercial,
and the available drivers for Postgres are for versions 6.x and 7.x.
I went to the downloads section on postgresql.org but I couldn't find
anything but the 8.x.x versions for Windows.
Is there any way to get the older versions for Window$?
Only via cygwin - there are no native versions before 8.0
Have you checked whether you can use the 7.x drivers against 8.x?
--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Index bloat in 7.2

2004-12-06 Thread Julian Scarfe
From: Christopher Browne [EMAIL PROTECTED]

 The empty pages not reclaimed problem is something that did indeed
 get fixed in the post-7.2 days.  I _think_ it was 7.4, but it might
 have been 7.3.

 In short, 7.4.x is indeed a good resolution to your issue.

From: Tom Lane [EMAIL PROTECTED]

 That's exactly what I'd expect ...

Thanks both.

So it sounds like:

a) the issue is controllable with a regular (and in our case, just
occasional) REINDEX without any long-term negative consequences

b) only a dump-and-restore major version upgrade (which we'll do next time we
can take the system out for long enough) will avoid the issue.

Julian



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] select single entry and its neighbours using direct-access to index?

2004-12-06 Thread peter pilsl
Is there an easy solution for this?
I'd like to select a single entry from a table and the entries that
would be previous and next given a certain order.

Like:
select id from mytable where id=45 order by name,name2;
and then I'd like to select the two entries that would come before and
after according to the order name,name2.
id is not ordered, but there is an index on (name,name2), so the needed
information about previous/next should be stored somewhere in this index.

My current solution is to read all the data without the WHERE clause and
then fetch the needed ones, which is quite time-consuming.

thnx,
peter

--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fax +43 699 4 3574035
[EMAIL PROTECTED]
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] When to encrypt

2004-12-06 Thread Dominic Mitchell
On Sun, Dec 05, 2004 at 11:31:34PM -0500, Greg Stark wrote:
 Derek Fountain [EMAIL PROTECTED] writes:
  If another SQL Injection vulnerability turns up (which it might, given the 
  state of the website code), 
 
 You will never see another SQL injection vulnerability if you simply switch to
 always using prepared queries and placeholders. Make it a rule that you
 _never_ interpolate variables into the query string. period. No manual quoting
 to get right, no subtle security audit necessary: If the SQL query isn't a
 constant string you reject it.
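The rule quoted above, a constant SQL string plus bound placeholders, can be sketched in a few lines. This is a hedged illustration using Python's stdlib sqlite3 module with a hypothetical users table; the same pattern applies to any DB-API driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def lookup(name):
    # The SQL string is a constant; the value travels separately as a
    # placeholder, so an injection payload is treated as inert data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

print(lookup("alice"))        # finds the real row
print(lookup("' OR '1'='1"))  # classic payload: matches nothing
```

No manual quoting is involved anywhere, which is exactly what makes the rule auditable: any query built by string interpolation simply fails review.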

Another good piece of defense is mod_security (assuming that your web
server is Apache).  You can teach it about SQL injection attacks with a
little work.

http://www.modsecurity.org

-Dom

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] select single entry and its neighbours using direct-access to index?

2004-12-06 Thread Pierre-Frédéric Caillaud

select id from mytable where id=45 order by name,name2;
Why do you want to select id if you already know it?
Do you not want to specify a starting value for name and name2?
	I'll presume you want to select a row by its 'id' and then get the  
previous and next ones in the name, name2 order. I'll guess the id is  
UNIQUE so these two other rows won't have the same id.

	If I guessed right I have the solution; if not, please explain what
you want to do more precisely ;)

and then I'd like to select the two entries that would come before and  
after according to the order name,name2;
id is not ordered, but there is an index on (name,name2) so the needed
information about previous, next should be stored somewhere in this index.

My current solution is to read all the data without the WHERE clause and
then fetch the needed ones, which is quite time-consuming.

thnx,
peter


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] select single entry and its neighbours using direct-access

2004-12-06 Thread peter pilsl
Pierre-Frédéric Caillaud wrote:

select id from mytable where id=45 order by name,name2;

Why do you want to select id if you already know it ?
Do you not want to specify a starting value for name and name2 ?
I'll presume you want to select a row by its 'id' and then get the  
previous and next ones in the name, name2 order. I'll guess the id is  
UNIQUE so these two other rows won't have the same id.

If I guessed right I have the solution; if not, please explain 
what you want to do more precisely ;)

sorry for being unclear.
but you guessed right. ID is UNIQUE and I want to select a row by 
its ID and also get the previous and next ones in the name, name2 order.

For the selected row I need all data fields and for the next and previous 
I need only the ID (to have it referenced on the data output page for a 
certain row).

I'm very much looking forward to your solution.
thanks a lot,
peter



and then I'd like to select the two entries that would come before 
and  after according to the order name,name2;
id is not ordered, but there is an index on (name,name2) so the 
needed information about previous, next should be stored somewhere in 
this index.

My current solution is to read all the data without the WHERE clause 
and then fetch the needed ones, which is quite time-consuming.

thnx,
peter



--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fax +43 699 4 3574035
[EMAIL PROTECTED]
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] SSL confirmation - No trusted certificate found

2004-12-06 Thread Andrew M
Ok,
spoke too quickly! I am following the example here:

http://archives.postgresql.org/pgsql-jdbc/2003-08/msg00110.php

for ssl support in postgreSQL jdbc (DriverVersion:PostgreSQL 8.0devel JDBC3 with SSL (build 308)). When I get to the final stage, copying the certificate into the java keystore:

keytool -keystore [your java home here]/lib/security/cacerts -alias [any name for the cert you like (i.e. postgres)] -import -file server.crt.der

I get the following message:

Trust this certificate? [no]:

I enter yes and get:

Certificate was added to keystore

I add ssl to my connection string:

jdbc:postgresql://localhost:5432/mydatabase?ssl

When I launch Jboss, which handles the connection to postgresql, I get the following error:

javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: No trusted certificate found

What am I doing wrong here?

many thanks


Andrew


On 6 Dec 2004, at 01:48, Andrew M wrote:

Hi,
seems like I may have located the solution to my earlier problem:

http://archives.postgresql.org/pgsql-jdbc/2003-08/msg00110.php

many thanks

Andrew
On 5 Dec 2004, at 23:51, Doug McNaught wrote:

Michael Fuhr [EMAIL PROTECTED] writes:

On Sun, Dec 05, 2004 at 09:10:42PM +, Andrew M wrote:

The map I make reference to is a JBoss map used to make a JNDI
connection to postgreSQL. What document do I need to access to get hold
of the environment variables, namely PGSSLMODE?

I don't know if the J-stuff wraps libpq or if it implements the
communications protocol on its own.

The latter.  AFAIK it doesn't use environment variables.  See the JDBC
driver docs for how to set options when connecting.

-Doug

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings



Re: [GENERAL] When to encrypt

2004-12-06 Thread Geoffrey
Greg Stark wrote:
Derek Fountain [EMAIL PROTECTED] writes:

If another SQL Injection vulnerability turns up (which it might,
given the state of the website code),

You will never see another SQL injection vulnerability if
Never say never..
--
Until later, Geoffrey
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] [HACKERS] DBD::PgSPI 0.02

2004-12-06 Thread Mike Rylander
On Mon, 6 Dec 2004 00:27:18 -0500 (EST), [EMAIL PROTECTED] wrote:
 Hello,
 
 A short note that I've uploaded DBD::PgSPI version 0.02 to CPAN.
 
 There are no new features - but the code now expects (and works with)
 reasonably decent versions of perl (5.8.x) and pgsql (8.x).

Just so that you have some info, I've been using DBD::PgSPI with Pg
8.0 since beta 1.  The only restriction I've run into with the old
code is that it doesn't like the DBD 'do' method.  I have to use
execute/fetchX or selectX, but other than that it seems to work.  I'll
be grabbing the update to test soon.

 
 No warranty is given, this code compiles and 'scratches my itch'. If it
 happens to scratch yours, more the merrier.
 

Thanks for scratching your itch!  I'm sure you're merrier than you know. :)

 -alex
 
 ---(end of broadcast)---
 TIP 3: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly
 


-- 
Mike Rylander
[EMAIL PROTECTED]
GPLS -- PINES Development
Database Developer
http://open-ils.org

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL]

2004-12-06 Thread Konstantin Danilov
Hello List!

PostgreSQL 8 does not correctly sort words written in cyrillic. It is a bug I 
suppose :)

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] SSL confirmation - No trusted certificate found

2004-12-06 Thread Kris Jurka


On Mon, 6 Dec 2004, Andrew M wrote:

 jdbc:postgresql://localhost:5432/mydatabase?ssl
 
 When I launch Jboss, which handles the connection to postgresql, I get 
 the following error:
 
 javax.net.ssl.SSLHandshakeException: 
 sun.security.validator.ValidatorException: No trusted certificate found
 

Difficult to say.  Perhaps JBoss is deciding to use an alternate 
truststore?  Perhaps it doesn't like your cert setup.  I would suggest 
first running a simple client program to test that it's working.  
Also, adding -Djavax.net.debug=ssl to the java command will help debugging 
ssl problems.

Further, the 8.0 JDBC driver can create SSL connections without doing 
authentication by using an additional url parameter:

sslfactory=org.postgresql.ssl.NonValidatingFactory

You should try to get it to authenticate correctly, but this is another 
useful test point.

Kris Jurka

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL]

2004-12-06 Thread Tino Wildenhain
Hi,

Am Montag, den 06.12.2004, 15:07 +0300 schrieb Konstantin Danilov:
 Hello List!
 
 PostgreSQL 8 does not correctly sort words written in cyrillic. It is a bug I 
 suppose :)

No, it's not a bug ;) You have to make sure you used the correct locale
when you set up your database cluster (initdb) and also the correct 
charset. Then sorting will work.

Regards
Tino


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL]

2004-12-06 Thread Oleg Bartunov
On Mon, 6 Dec 2004, Konstantin Danilov wrote:
Hello List!
PostgreSQL 8 does not correctly sort words written in cyrillic. It is a bug I suppose :)
Probably, it's a problem of your setup.
Here is what I have:
regression=# select version();
                                  version
---------------------------------------------------------------------------
 PostgreSQL 8.0.0rc1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.3
(1 row)

regression=# select upper(''),''  '';
 upper | ?column? 
---+--
   | t
(1 row)


---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org
Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] SSL confirmation - No trusted certificate found

2004-12-06 Thread Andrew M
Kris,
I have implemented your earlier suggestion:
sslfactory=org.postgresql.ssl.NonValidatingFactory
and no errors are generated, so I presume that the connection to the 
database is now SSL'd. So why bother going through the headache of 
creating a certificate if I can do it as mentioned above?

regards
Andrew
On 6 Dec 2004, at 12:28, Kris Jurka wrote:

On Mon, 6 Dec 2004, Andrew M wrote:
jdbc:postgresql://localhost:5432/mydatabase?ssl
When I launch Jboss, which handles the connection to postgresql, I get
the following error:
javax.net.ssl.SSLHandshakeException:
sun.security.validator.ValidatorException: No trusted certificate 
found

Difficult to say.  Perhaps JBoss is deciding to use an alternate
truststore?  Perhaps it doesn't like your cert setup.  I would suggest
first running a simple client program to test that it's working.
Also adding -Djavax.net.debug=ssl to the java command will help
debugging ssl problems.

Further, the 8.0 JDBC driver can create SSL connections without doing
authentication by using an additional url parameter:
sslfactory=org.postgresql.ssl.NonValidatingFactory
You should try to get it to authenticate correctly, but this is another
useful test point.
Kris Jurka


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] SSL confirmation - No trusted certificate found

2004-12-06 Thread Kris Jurka


On Mon, 6 Dec 2004, Andrew M wrote:

 I have implemented your earlier suggestion:
 
 sslfactory=org.postgresql.ssl.NonValidatingFactory
 
 and no errors are generated, so I presume that the connection to the 
 database is now ssl'd. So why bother going through the headache of 
 creating a certificate if I can do it as mentioned above?

It leaves you open to man-in-the-middle attacks.  You are no longer 
verifying that the server is who it says it is.

Kris Jurka


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Delete function

2004-12-06 Thread Secrétariat



Hello !

I create a delete function:

CREATE FUNCTION delalpha(varchar, integer) RETURNS boolean AS
'DELETE FROM public.params WHERE soc = $1 AND numpar = $2 ;
 SELECT TRUE ;' LANGUAGE sql ;

Is there a way to return the number of deleted rows?
Thanks.

Luc


Re: [GENERAL] SSL confirmation - No trusted certificate found

2004-12-06 Thread Andrew M
Ok,
I have just looked at my postgreSQL terminal window and seen the 
following message:

sslv3 alert certificate unknown

Could this mean that the certificate is of the wrong type?
regards
Andrew
On 6 Dec 2004, at 12:50, Andrew M wrote:
Kriss,
I have implemented your earlier suggestion:
sslfactory=org.postgresql.ssl.NonValidatingFactory
and no errors are generated, so I presume that the connection to the 
database is now ssl'd. So why bother going through the headache of 
creating a certificate if I can do it as mentioned above?

regards
Andrew
On 6 Dec 2004, at 12:28, Kris Jurka wrote:

On Mon, 6 Dec 2004, Andrew M wrote:
jdbc:postgresql://localhost:5432/mydatabase?ssl
When I launch Jboss, which handles the connection to postgresql, I 
get
the following error:

javax.net.ssl.SSLHandshakeException:
sun.security.validator.ValidatorException: No trusted certificate 
found

Difficult to say.  Perhaps JBoss is deciding to use an alternate
truststore?  Perhaps it doesn't like your cert setup.  I would suggest
first running a simple client program to test that it's working.
Also adding -Djavax.net.debug=ssl to the java command will help
debugging ssl problems.

Further, the 8.0 JDBC driver can create SSL connections without doing
authentication by using an additional url parameter:
sslfactory=org.postgresql.ssl.NonValidatingFactory
You should try to get it to authenticate correctly, but this is 
another
useful test point.

Kris Jurka




---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] select single entry and its neighbours using direct-access to index?

2004-12-06 Thread Pierre-Frédéric Caillaud

sorry for being unclear.
but you guessed right. ID is UNIQUE and I want to select a row by  
its ID and also get the previous and next ones in the name, name2 order.

For the selected row I need all data fields and for the next and previous  
I need only the ID (to have it referenced on the data output page for a  
certain row).

	OK, this is a lot clearer now.
	I suppose you have a UNIQUE(name,name2) or else, if you have several rows  
with the same (name,name2) you'll get one of them, but you won't know  
which one.

For example:

select * from test;
 id | name | name2
----+------+-------
  1 | a    | a
  2 | a    | b
  3 | a    | c
  4 | b    | a
  5 | b    | b
  6 | b    | c
  7 | c    | a
  8 | c    | b
  9 | c    | c
(9 rows)
Solution #1:
- In your application:

SELECT * FROM test WHERE id=4;
 id | name | name2
----+------+-------
  4 | b    | a

You then fetch name and name2 and issue the two following SELECTs,
replacing 'b' and 'a' with the fetched name and name2:

SELECT * FROM test WHERE (name='b' AND name2>'a') OR (name>'b') ORDER BY
name,name2 ASC LIMIT 1;
 id | name | name2
----+------+-------
  5 | b    | b

SELECT * FROM test WHERE (name='b' AND name2<'a') OR (name<'b') ORDER BY
name,name2 DESC LIMIT 1;
 id | name | name2
----+------+-------
  3 | a    | c

These should use an index on (name,name2).
Solution #2:
You could do the same in a pl/pgsql function, which will be a lot faster,
and return three rows.

It is a pity you cannot use (name,name2) > ('b','a').
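The two SELECTs in Solution #1 can be checked end to end. Below is a hedged, self-contained sketch in Python with sqlite3 standing in for PostgreSQL (the table and values mirror the example above; the per-column DESC in the "previous" query follows the correction made later in this thread):

```python
import sqlite3

# Re-create the 3x3 example table: ids 1..9 over (a..c, a..c).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INTEGER PRIMARY KEY, name TEXT, name2 TEXT)")
rows = [(i + 1, n, n2)
        for i, (n, n2) in enumerate((a, b) for a in "abc" for b in "abc")]
conn.executemany("INSERT INTO test VALUES (?, ?, ?)", rows)

def neighbours(row_id):
    """Return (previous_id, next_id) of row_id in (name, name2) order."""
    name, name2 = conn.execute(
        "SELECT name, name2 FROM test WHERE id = ?", (row_id,)).fetchone()
    nxt = conn.execute(
        "SELECT id FROM test WHERE (name = ? AND name2 > ?) OR name > ? "
        "ORDER BY name ASC, name2 ASC LIMIT 1",
        (name, name2, name)).fetchone()
    prv = conn.execute(
        # DESC must be given per column, not once at the end.
        "SELECT id FROM test WHERE (name = ? AND name2 < ?) OR name < ? "
        "ORDER BY name DESC, name2 DESC LIMIT 1",
        (name, name2, name)).fetchone()
    return (prv[0] if prv else None, nxt[0] if nxt else None)

print(neighbours(4))  # row (b, a): previous is id 3 = (a, c), next is id 5 = (b, b)
```

The same two keyset queries work unchanged against the (name,name2) index on the PostgreSQL side.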


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Delete function

2004-12-06 Thread Richard Huxton
Secrétariat wrote:
Hello !
I create a delete function :
CREATE FUNCTION delalpha(varchar, integer) RETURNS boolean
AS 'DELETE FROM public.params WHERE soc = $1 AND numpar = $2 ;
  SELECT  TRUE ;'
LANGUAGE sql ;
Is there a way to return the number of deleted row ?
Thanks.
If you rewrote the function in plpgsql, you could use GET DIAGNOSTICS 
- see the manual plpgsql/Obtaining the result status.

--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
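Richard's pointer covers the server side (GET DIAGNOSTICS in plpgsql). For completeness, most client APIs also expose the same count directly; a hedged sketch with Python's sqlite3 and a hypothetical params table shaped like Luc's:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE params (soc TEXT, numpar INTEGER)")
conn.executemany("INSERT INTO params VALUES (?, ?)",
                 [("s1", 1), ("s1", 1), ("s2", 2)])

# DB-API cursors report how many rows a DML statement affected.
cur = conn.execute("DELETE FROM params WHERE soc = ? AND numpar = ?", ("s1", 1))
print(cur.rowcount)  # number of rows the DELETE removed
```

So if the count is only needed by the calling application, the function can stay a plain DELETE and the driver's rowcount does the rest.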


[GENERAL] Performance tuning on RedHat Enterprise Linux 3

2004-12-06 Thread David Esposito
Executive summary: We just did a cutover from a RedHat 8.0 box to a RedHat
Enterprise Linux 3 box and we're seeing a lot more swapping on the new box
than we ever did on the old box ... this is killing performance ...

Background:

Old Box:
RedHat 8.0
2GB Memory
Dual PIII 600MHz
Postgres 7.3.4
SHMMAX = 1073741824 (1 GB)
shared_buffers = 65536 (roughly 0.5 GB)   
max_fsm_relations = 1000  
max_fsm_pages = 100   
vacuum_mem = 131072
Roughly 25 - 30 connections open (mostly idle) at any given time
(connection pools)   

New Box:
RedHat Enterprise Linux ES 3
2GB Memory
Dual P4 Xeon 2.7 GHz
Postgres 7.3.4
SHMMAX = 1610612736 (1.5 GB)
shared_buffers = 131072 (roughly 1GB)
max_fsm_relations = 1  
max_fsm_pages = 1000   
sort_mem = 4096
vacuum_mem = 262144
Roughly 25 - 30 connections open (mostly idle) at any given time
(connection pools)   

Both boxes are dedicated DB servers ... With the new configuration, we were
seeing swap rates of 1000-5000 KB/s (according to vmstat) ... with the old
configuration, we never saw any swapping ... I turned the shared_buffers and
sort_mem down on the new box to match the settings of the old box and found
that it reduced the swapping significantly (roughly 0-2000 KB/s) but didn't
eliminate it completely ... when looking at 'top' on the new box, the list
of postgres processes all seem to be indicating a different amount of memory
usage ... under the periods of heavy swapping, one or more of the postgres
processes would be way up there (between 500MB and 1000MB (which would
easily explain the swapping)) ... the question is: why aren't all of the
processes sharing the same pool of shared memory since I thought that's what
I'm doing when adjusting the shared_buffers property?

Here's an example of my 'top' (not under heavy load) demonstrating the
different memory usage by each postgres process ... I unfortunately don't
have the same snapshot data from the old configuration, but I seem to recall
that all of the postgres processes had 

  PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
16966 postgres  15   0  107M 107M  105M S 0.0  5.3   0:39   1 postmaster
20198 postgres  15   0 40448  39M 37752 S 0.2  1.9   0:07   0 postmaster
18801 postgres  15   0 21932  21M 19616 S 0.0  1.0   0:01   0 postmaster
19210 postgres  16   0 21276  20M 19008 S 0.0  1.0   0:01   0 postmaster
19507 postgres  15   0 15504  14M 13580 S 0.0  0.7   0:00   3 postmaster
20308 postgres  15   0 12832  12M 11248 S 0.0  0.6   0:00   3 postmaster
20456 postgres  15   0 12500  12M 10920 S 0.0  0.6   0:00   1 postmaster
20403 postgres  15   0 11572  11M  9928 S 0.2  0.5   0:00   1 postmaster
20251 postgres  15   0 10796  10M  9260 S 0.0  0.5   0:00   0 postmaster
20398 postgres  15   0 10792  10M  9256 S 0.0  0.5   0:00   2 postmaster
20306 postgres  21   0  9100 8808  7796 S 0.0  0.4   0:00   1 postmaster
20425 postgres  16   0  9100 8808  7796 S 0.0  0.4   0:00   0 postmaster
20360 postgres  15   0  9096 8804  7792 S 0.0  0.4   0:00   3 postmaster
20383 postgres  21   0  9096 8804  7792 S 0.0  0.4   0:00   0 postmaster
20434 postgres  21   0  9096 8804  7792 S 0.0  0.4   0:00   1 postmaster
20305 postgres  15   0  9108 8796  7804 S 0.0  0.4   0:00   2 postmaster

Can anyone think of a reason as to why I'm seeing such heavy swapping?
According to Bruce Momjian's performance tuning guide, he recommends roughly
half the amount of physical RAM for the shared_buffers ... I tried turning
UP the shared_buffers even higher (to 180,000 I believe; roughly 1.5GB) and
that seemed to make the problem even worse ... 

Thanks in advance,
Dave


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] When to encrypt

2004-12-06 Thread Daniel Martini
Hi,

Citing Derek Fountain [EMAIL PROTECTED]:
 Indeed, but I'm still interested in the general answer.

There is no general answer. It depends on how much trouble you are in if
the data is compromised.

 The server I have been looking at was hopelessly insecure and SQL
 injection is only one of its problems. There were several other
 ways in! Assume, for example, an attacker
 can write his own script directly into the website document tree. In
 this case prepared queries don't help protect what's in the database.
 The attacker can use them himself if he likes!

A chain of security measures is only as strong as its weakest link.
Whether cryptography will help you in this case depends very much on the
level of system access an attacker can gain and on the encryption scheme
you use.
If an attacker can gain root, it is quite probable that your cryptographic
keys will be compromised (because he will very probably be able to read
physical memory), so cryptography will not help you at all.
If an attacker cannot gain root, it depends on whether you use encryption
at the file system level or at record level in the db. File system
level encryption does not help much against attacks from the network
on a running system, because the file system will very probably
be mounted, and thus readable. Record-level encryption might help,
depending on how it is implemented (when you implement it, ask yourself:
are keys/passwords which are floating around between database server/
web server/client app unreadable by eavesdroppers at all stages of
processing?)
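One concrete record-level measure for the credentials case discussed here is to store salted password digests rather than anything decryptable, so there is no key for an attacker's script to find. A hedged sketch using only Python's standard library (the scheme and parameters are illustrative, not prescribed by the thread):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # slow the digest down to resist brute force

def store(password):
    # Salted, iterated digest: what lands in the DB reveals nothing directly,
    # and there is no decryption key to steal from scripts or memory.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = store("s3cret")
print(verify("s3cret", salt, digest))  # True
print(verify("wrong", salt, digest))   # False
```

This only covers data that never needs to be read back (passwords); credit card numbers that must be recovered still require real encryption and the key-handling questions raised above.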

 Given this type of mess, having logins, passwords, credit card info and the 
 like encrypted in the DB will add another layer of protection. The question 
 is, do people normally add this layer, just in case.

In general, I would, if there was medical/payment/other personal data in 
the db. The country I live in has quite strict regulations concerning
protection of people's private data...
(which is a good thing IMHO. Anyways just to make the point for you, that
this is more than just a technical matter ;-) Legal matters and economics
play a role here, too.)
But the discussion above and the conclusion below should show you that
there's a bunch of problems elsewhere which cannot be solved just by
using cryptography.

 or do they assume that 
 all the previous layers will do the job?

Key thing is to find the weakest layer and strengthen it. The strongest
security measure does no good if an attacker can easily bypass it
by gaining higher-level system access by breaking another (weaker)
security layer.
From your description of the problem, I would conclude, that your
client's app needs fixing elsewhere first.

Regards,
Daniel

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] select single entry and its neighbours using direct-access to index?

2004-12-06 Thread Andrew - Supernews
On 2004-12-06, Pierre-Frédéric Caillaud [EMAIL PROTECTED]
wrote:
 SELECT * FROM test WHERE (name='b' and name2>'a') OR (name>'b') ORDER BY  
 name,name2 ASC LIMIT 1;

Write that WHERE clause instead as:

 WHERE name>='b' AND (name>'b' OR (name='b' AND name2>'a'))

This is logically equivalent, but it gives the planner a better handle on
how to use an index scan to satisfy the query.

 SELECT * FROM test WHERE (name='b' and name2<'a') OR (name<'b') ORDER BY  
 name,name2 DESC LIMIT 1;

That needs to be ORDER BY name DESC, name2 DESC (the direction indicator
applies per-column and not to the output ordering). Same goes for the
WHERE clause in this query as the previous one, too.

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] select single entry and its neighbours using direct-access to index?

2004-12-06 Thread Pierre-Frédéric Caillaud

SELECT * FROM test WHERE (name='b' and name2>'a') OR (name>'b') ORDER BY
name,name2 ASC LIMIT 1;
Write that WHERE clause instead as:
 WHERE name>='b' AND (name>'b' OR (name='b' AND name2>'a'))
This is logically equivalent, but it gives the planner a better handle on
how to use an index scan to satisfy the query.
	I thought the planner had an automatic rewriter for these situations.  
It'd be interesting to see an EXPLAIN ANALYZE output to see if it's indeed  
rewritten.

SELECT * FROM test WHERE (name='b' and name2<'a') OR (name<'b') ORDER BY
name,name2 DESC LIMIT 1;
That needs to be ORDER BY name DESC, name2 DESC (the direction indicator
applies per-column and not to the output ordering). Same goes for the
WHERE clause in this query as the previous one, too.
You're right, I screwed up!
Sorry ;)

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Performance tuning on RedHat Enterprise Linux 3

2004-12-06 Thread Martijn van Oosterhout
On Mon, Dec 06, 2004 at 09:08:02AM -0500, David Esposito wrote:
   shared_buffers = 131072 (roughly 1GB)
   max_fsm_relations = 1  
   max_fsm_pages = 1000   
   sort_mem = 4096
   vacuum_mem = 262144
   Roughly 25 - 30 connections open (mostly idle) at any given time
 (connection pools)   

I'd suggest reducing shared_buffers to maybe a few thousand; there's
really no point reserving so much memory that way, it's just a waste.

Secondly, up your sort_mem a bit to reflect how big your sorts are
likely to be.

How's your effective_cache_size?
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] select single entry and its neighbours using direct-access to index?

2004-12-06 Thread Tom Lane
Pierre-Frédéric Caillaud [EMAIL PROTECTED] writes:
 SELECT * FROM test WHERE (name='b' and name2>'a') OR (name>'b') ORDER BY
 name,name2 ASC LIMIT 1;
 Write that WHERE clause instead as:
 WHERE name>='b' AND (name>'b' OR (name='b' AND name2>'a'))
 This is logically equivalent, but it gives the planner a better handle on
 how to use an index scan to satisfy the query.

   I thought the planner had an automatic rewriter for these situations.  

No.  There was a prior discussion of this, saying that we really ought
to support the SQL-spec row comparison syntax:
... WHERE (name, name2) < ('b', 'a');
which would map directly onto the semantics of a 2-column index.  We
don't have that functionality yet, though (we take the syntax but the
semantics are not SQL-compliant) let alone any ability to pass it
through to a 2-column index.

regards, tom lane
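The row-comparison semantics Tom describes are plain lexicographic ordering, which can be illustrated with Python's built-in tuple comparison (an aside for intuition only, not PostgreSQL code):

```python
# SQL-spec (name, name2) < ('b', 'a') compares column by column:
# first column decides unless equal, then the second column decides.
# Python tuples order the same way.
rows = [("a", "c"), ("b", "a"), ("b", "b"), ("c", "a")]

before = [r for r in rows if r < ("b", "a")]
print(before)  # [('a', 'c')] -- the only row strictly earlier in the order
```

A two-column btree index on (name, name2) stores entries in exactly this order, which is why the syntax would map directly onto an index scan.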

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] triggers, transactions and locks

2004-12-06 Thread C G
Dear All,
Is there a way to use locks within a trigger? My example below gives the 
error:
ERROR:  unexpected error -8 in EXECUTE of query BEGIN
CONTEXT:  PL/pgSQL function insert_into_t1 line 6 at execute statement

Thanks
Colin
CREATE FUNCTION insert_into_t1() RETURNS trigger AS'
   DECLARE
   set_qry text;
   BEGIN
   set_qry:=''BEGIN WORK'';
   execute set_qry;
   LOCK t1;
   INSERT INTO t2 VALUES (11);
   COMMIT;
   RETURN NEW;
   END;
' LANGUAGE 'plpgsql' SECURITY DEFINER;
CREATE TRIGGER insert_into_t1 BEFORE INSERT
   ON t1 FOR EACH ROW EXECUTE
   PROCEDURE insert_into_t1();

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] select single entry and its neighbours using direct-acess to index?

2004-12-06 Thread Pierre-Frédéric Caillaud

	I thought the planner had an automatic rewriter for these situations.
No.  There was a prior discussion of this, saying that we really ought
to support the SQL-spec row comparison syntax:
What I meant was that I thought the planner could rewrite :
(A and C) or (A AND B) as A and (B or C)
which is more index-friendly.
... WHERE (name, name2) > ('b', 'a');
which would map directly onto the semantics of a 2-column index.  We
don't have that functionality yet, though (we take the syntax but the
semantics are not SQL-compliant) let alone any ability to pass it
through to a 2-column index.
One could always use ARRAY[name,name2] > ARRAY['b','a']
But it is NOT index-friendly...
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Performance tuning on RedHat Enterprise Linux 3

2004-12-06 Thread Tom Lane
David Esposito [EMAIL PROTECTED] writes:
 New Box:
   shared_buffers = 131072 (roughly 1GB)

This setting is an order of magnitude too large.  There is hardly any
evidence that it's worth setting shared_buffers much above 10000.

regards, tom lane
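For reference, shared_buffers is counted in buffer pages, so the figures in this thread convert as follows. A quick check, assuming the default 8 kB block size; the helper name is invented:

```python
PAGE_SIZE_KB = 8  # default PostgreSQL block size (assumption: not rebuilt)

def buffers_to_mb(n_buffers):
    # shared_buffers is a count of 8 kB pages, not bytes
    return n_buffers * PAGE_SIZE_KB / 1024

print(buffers_to_mb(131072))  # David's setting: 1024.0 MB, i.e. "roughly 1GB"
print(buffers_to_mb(10000))   # the ceiling Tom suggests: about 78 MB
```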

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] triggers, transactions and locks

2004-12-06 Thread Stephan Szabo

On Mon, 6 Dec 2004, C G wrote:

 Is there a way to use locks within a trigger? My example below gives the
 error:

It's not the lock that's the problem I think. The begin work is failing
because you can't start a transaction inside the function. Were you trying
to release the lock at the commit in the function?

 CREATE FUNCTION insert_into_t1() RETURNS trigger AS'
 DECLARE
 set_qry text;
 BEGIN

 set_qry:=''BEGIN WORK'';
 execute set_qry;
 LOCK t1;
 INSERT INTO t2 VALUES (11);
 COMMIT;
 RETURN NEW;
 END;
 ' LANGUAGE 'plpgsql' SECURITY DEFINER;

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
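Stephan's point, that the trigger body already executes inside the transaction of the statement that fired it, can be illustrated with a quick sketch. SQLite is used here only because it ships with Python; PostgreSQL behaves the same way on this particular point: the trigger issues no BEGIN/COMMIT of its own, and rolling back the outer transaction undoes the trigger's work too (just as a lock taken in a trigger is held until the outer commit).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
conn.execute("CREATE TABLE t1 (a INTEGER)")
conn.execute("CREATE TABLE t2 (b INTEGER)")

# The trigger body contains no BEGIN/COMMIT: it runs inside the
# transaction of the INSERT that fired it.
conn.execute("""
    CREATE TRIGGER insert_into_t1 AFTER INSERT ON t1
    BEGIN
        INSERT INTO t2 VALUES (11);
    END
""")

conn.execute("BEGIN")
conn.execute("INSERT INTO t1 VALUES (1)")
conn.execute("ROLLBACK")  # undoes the trigger's INSERT into t2 as well

count = conn.execute("SELECT count(*) FROM t2").fetchone()[0]
print(count)  # 0 -- the trigger's work was part of the outer transaction
```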


Re: [GENERAL] triggers, transactions and locks

2004-12-06 Thread C G

 Is there a way to use locks within a trigger? My example below gives the
 error:
It's not the lock that's the problem I think. The begin work is failing
because you can't start a transaction inside the function. Were you trying
to release the lock at the commit in the function?
Yes I was using the commit to release the lock.

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Index bloat in 7.2

2004-12-06 Thread Alvaro Herrera
On Mon, Dec 06, 2004 at 08:48:04AM -, Julian Scarfe wrote:

 b) Only a dump-restore major version upgrade (which we'll do next time we
 can take the system out for long enough) will avoid the issue.

"Long enough" could be a minutes or seconds issue if you use Slony-I,
I've heard ...  (Of course you'd still need to fix your apps, which may
take somewhat longer than that.)

-- 
Alvaro Herrera ([EMAIL PROTECTED])
And a voice from the chaos spoke to me and said
"Smile and be happy, it could be worse."
And I smiled. And I was happy.
And it was worse.

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Performance tuning on RedHat Enterprise Linux 3

2004-12-06 Thread Alvaro Herrera
On Mon, Dec 06, 2004 at 09:08:02AM -0500, David Esposito wrote:

 According to Bruce Momjian's performance tuning guide, he recommends roughly
 half the amount of physical RAM for the shared_buffers ...

Does he?  The guide I've seen from him AFAIR states that you should
allocate around 10% of physical RAM to shared_buffers.  And this advice
goes against common Postgres folklore.  Maybe it's a document that needs
to be updated.

-- 
Alvaro Herrera ([EMAIL PROTECTED])
There is evil in the world. There are dark, awful things. Occasionally, we get
a glimpse of them. But there are dark corners; horrors almost impossible to
imagine... even in our worst nightmares. (Van Helsing, Dracula A.D. 1972)

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] unsubscribe

2004-12-06 Thread mjmayfield

unsubscribe



Re: [GENERAL] pgFoundary?

2004-12-06 Thread Robert Treat
Probably just an oversight, but will be linked when we roll the new
website layout with 8.0 (crosses fingers)

Robert Treat

On Fri, 2004-12-03 at 02:07, Jim C. Nasby wrote:
 Then I guess my next question is: why isn't it linked to from
 http://postgresql.org ?
 
 On Fri, Dec 03, 2004 at 12:35:23AM -0600, Thomas F.O'Connell wrote:
  http://pgfoundry.org/
  
  You had an extra 'a'. :)
  
  I'm not sure what the status of gborg is, at the moment.
  
  -tfo
  
  --
  Thomas F. O'Connell
  Co-Founder, Information Architect
  Sitening, LLC
  http://www.sitening.com/
  110 30th Avenue North, Suite 6
  Nashville, TN 37203-6320
  615-260-0005
  
  On Dec 3, 2004, at 12:28 AM, Jim C. Nasby wrote:
  
  I've seen references to pgFoundary on the mailling lists, but I can't
  seem to find it anywhere. Does it actually exist? Is it the predecessor
  to gborg?
  
  Speaking of gborg, how come it isn't accepting projects right now?
  -- 
  Jim C. Nasby, Database Consultant   [EMAIL PROTECTED]
  Give your computer some brain candy! www.distributed.net Team #1828
  
  Windows: Where do you want to go today?
  Linux: Where do you want to go tomorrow?
  FreeBSD: Are you guys coming, or what?
  
  ---(end of 
  broadcast)---
  TIP 6: Have you searched our list archives?
  
 http://archives.postgresql.org
  
  
  ---(end of broadcast)---
  TIP 9: the planner will ignore your desire to choose an index scan if your
   joining column's datatypes do not match
  
 
 -- 
 Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
 Give your computer some brain candy! www.distributed.net Team #1828
 
 Windows: Where do you want to go today?
 Linux: Where do you want to go tomorrow?
 FreeBSD: Are you guys coming, or what?
 
 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?
 
http://www.postgresql.org/docs/faqs/FAQ.html

-- 
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Index bloat in 7.2

2004-12-06 Thread Julian Scarfe
On Mon, Dec 06, 2004 at 08:48:04AM -, Julian Scarfe wrote:
b) Only a dump-restore major version upgrade (which we'll do next 
time we
can take the system out for long enough) will avoid the issue.
On 6 Dec 2004, at 16:18, Alvaro Herrera wrote:
Long enough could be a minutes or seconds issue if you use Slony-I,
I've heard ...  (Of course you'd still need to fix your apps, which may
take somewhat longer than that.)
A good point Alvaro, but I don't think Slony-I is compatible with 7.2, 
which is the version I'm starting from.  For upgrades from 7.3 onwards, 
it's certainly a route worth considering.

Julian
---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] Auditing with shared username

2004-12-06 Thread Eric E
Hi all,
   Like many folks who use three-tier design, I would like to create an 
audit trail in my Postgres database, and I would like to do so without 
having to create a database user for each audit.

As I see it, there are two ways to do this, and I can't see a clear way 
to do either of them.  If anyone has better suggestions, I'd of course 
love to hear them.

Here's what I'd thought up:
1) Connect my homebrew login system which runs out of a couple database 
tables to postgres connection/sessionID (i.e., keep track of which 
sessionID represents my current user) so that any audit function can use 
the session ID to look up the current user.

2) Maintain a current homebrew user session variable that is distinct 
from Postgres' current_user, which I believe stores the current database 
user.   I found a couple threads on session variables, but mostly they 
were discouraging people from using such variables.

Does anyone have any good ideas or advice?
Also, both of these methods require that a user maintain his/her own 
session.  I don't know how PG's connection pooling works, but is it 
actually possible to specify a particular session for a particular 
user?  Is there some place I can find documentation on how Postgres 
deals with logins and sessions?

Many thanks,
Eric
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[GENERAL] immutable stable volatile

2004-12-06 Thread pgsql-general

hello,

I have read the documentation a couple of times and I still can not figure
out
the following aspects.

if a function does insert/update/delete, does it need to be stable or
volatile ?
if an immutable function executes 'nextval', should it itself also be
volatile ?

thanks,
Razvan Radu
 



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Auditing with shared username

2004-12-06 Thread Ian Harding
I have a homegrown userid/password system in a database table, and on
tables I audit, I keep the id of the last person to touch that record,
and have a trigger write the changed values out to an audit table.  It
works fine, but of course there is some overhead involved.

You can't involve postgres connections as representing a user since any
connection pooling system will make that useless.  PG doesn't have
connection pooling, that is a higher level application function.  



 Eric E [EMAIL PROTECTED] 12/06/04 8:58 AM 
Hi all,
Like many folks who use three-tier design, I would like to create an

audit trail in my Postgres database, and I would like to do so without 
having to create a database user for each audit.

As I see it, there are two ways to do this, and I can't see a clear way 
to do either of them.  If anyone has better suggestions, I'd of course 
love to hear them.

Here's what I'd thought up:

1) Connect my homebrew login system which runs out of a couple database 
tables to postgres connection/sessionID (i.e., keep track of which 
sessionID represents my current user) so that any audit function can use

the session ID to look up the current user.

2) Maintain a current homebrew user session variable that is distinct 
from Postgres' current_user, which I believe stores the current database

user.   I found a couple threads on session variables, but mostly they 
were discouraging people from using such variables.

Does anyone have any good ideas or advice?

Also, both of these methods require that a user maintain his/her own 
session.  I don't know how PG's connection pooling works, but is it 
actually possible to specify a particular session for a particular 
user?  Is there some place I can find documentation on how Postgres 
deals with logins and sessions?

Many thanks,

Eric

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if
your
  joining column's datatypes do not match


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Auditing with shared username

2004-12-06 Thread Eric E
Hi Ian,
   Thanks for the quick reply.  What I'm confused about is how I let 
the trigger function etc. know which homegrown user it was that touched 
the record.  Any advice?

Thanks,
Eric
Ian Harding wrote:
I have a homegrown userid/password system in a database table, and on
tables I audit, I keep the id of the last person to touch that record,
and have a trigger write the changed values out to an audit table.  It
works fine, but of course there is some overhead involved.
You can't involve postgres connections as representing a user since any
connection pooling system will make that useless.  PG doesn't have
connection pooling, that is a higher level application function.  


 

Eric E [EMAIL PROTECTED] 12/06/04 8:58 AM 
   

Hi all,
   Like many folks who use three-tier design, I would like to create an
audit trail in my Postgres database, and I would like to do so without 
having to create a database user for each audit.

As I see it, there are two ways to do this, and I can't see a clear way 
to do either of them.  If anyone has better suggestions, I'd of course 
love to hear them.

Here's what I'd thought up:
1) Connect my homebrew login system which runs out of a couple database 
tables to postgres connection/sessionID (i.e., keep track of which 
sessionID represents my current user) so that any audit function can use

the session ID to look up the current user.
2) Maintain a current homebrew user session variable that is distinct 
from Postgres' current_user, which I believe stores the current database

user.   I found a couple threads on session variables, but mostly they 
were discouraging people from using such variables.

Does anyone have any good ideas or advice?
Also, both of these methods require that a user maintain his/her own 
session.  I don't know how PG's connection pooling works, but is it 
actually possible to specify a particular session for a particular 
user?  Is there some place I can find documentation on how Postgres 
deals with logins and sessions?

Many thanks,
Eric
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if
your
 joining column's datatypes do not match
 


---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] immutable stable volatile

2004-12-06 Thread PostgreSQL general mail list
hello, 

I have read the documentation a couple of times and I still can not figure out
the following aspects. 

if a function does insert/update/delete, does it need to be stable or volatile ?
if an immutable function executes 'nextval', should it itself also be volatile ? 

thanks,
Razvan Radu 

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] When to encrypt

2004-12-06 Thread Jan Wieck
On 12/6/2004 1:33 AM, Derek Fountain wrote:
On Monday 06 December 2004 12:31, you wrote:
Derek Fountain [EMAIL PROTECTED] writes:
 If another SQL Injection vulnerability turns up (which it might, given
 the state of the website code),
You will never see another SQL injection vulnerability if you simply switch
to always using prepared queries and placeholders.
<much wisdom snipped>
Indeed, but I'm still interested in the general answer. The server I have been 
looking at was hopelessly insecure and SQL injection is only one of its 
problems. There were several other ways in! Assume, for example, an attacker 
can write his own script directly into the website document tree. In this 
case prepared queries don't help protect what's in the database. The attacker 
can use them himself if he likes!
I don't quite see how encrypted storage of data can solve your problem. 
Somehow the web application must be able to unlock/decrypt the data. 
Either on a per session level, or by passing in a key with every query. 
Giving out the encrypt/decrypt keys to the end users, so that they have 
to supply them at login time, is probably as secure as putting them in 
cleartext onto the homepage. So they must be stored readable somewhere 
by the middleware system.

It does obscure the data a little more. At the same time it might give 
the Web application developer a completely false feeling of security.

Jan
--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] immutable stable volatile

2004-12-06 Thread Tom Lane
PostgreSQL general mail list [EMAIL PROTECTED] writes:
 if a function does insert/update/delete, does it need to be stable or volatile ?
 if an immutable function executes 'nextval', should it itself also be volatile ? 

A function that has side-effects must be marked volatile; there are
no exceptions.

PG 8.0 actually enforces this to some extent, but you can get burned in
any version if you ignore the rule.

regards, tom lane
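The failure mode Tom warns about can be mimicked outside the database: treat a side-effecting function as immutable, i.e. cache its result per argument, and callers silently stop seeing fresh values. A rough Python analogy, with a counter standing in for nextval(); PostgreSQL's actual caching of immutable functions differs in detail:

```python
from functools import lru_cache
from itertools import count

_seq = count(1)

def nextval():
    # side effect: advances the sequence, like SQL's nextval()
    return next(_seq)

@lru_cache(maxsize=None)          # "immutable": result cached per argument
def make_id_immutable(prefix):
    return f"{prefix}-{nextval()}"

def make_id_volatile(prefix):     # "volatile": re-evaluated on every call
    return f"{prefix}-{nextval()}"

a = make_id_immutable("ord")
b = make_id_immutable("ord")      # cached -- a duplicate "unique" id!
c = make_id_volatile("ord")
d = make_id_volatile("ord")
print(a == b, c == d)             # True False
```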

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Performance tuning on RedHat Enterprise Linux 3

2004-12-06 Thread David Esposito
Thanks for the replies guys ... The copy of Bruce's book I was reading is
at:
http://www.postgresql.org/docs/aw_pgsql_book/hw_performance/node8.html and I
was mistaken, it recommends 25% of physical memory be allocated to the
shared cache .. Is there a better resource (even a commercial publication)
that I should've been looking through? Bruce's book is a little too
high-level and obviously leaves out some of the detail about the fact that
there is a practical maximum ... 

I will crank my shared_buffers down ... But how do I know what my sort_mem
setting should be? Are there statistics tables that indicate cache
hits/misses like in Oracle? 

Lastly, about the effective_cache_size ... If I cut down my shared buffers
to 10,000 like Tom recommended, and I assume that the OS itself and some
overhead for the sort_mem and vacuum mem takes up about 512MB total, should
I set the effective_cache_size to assume that the remaining 1.5 GB of
physical memory is being allocated for the file cache by the kernel?

Thanks,
Dave
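The arithmetic in David's last question, under his stated assumptions (roughly 2 GB of RAM with about 512 MB reserved for the OS, sort_mem and vacuum_mem), would come out roughly as below; effective_cache_size, like shared_buffers, is expressed in 8 kB pages, and both figures here are assumptions, not recommendations:

```python
# Rough sizing sketch for effective_cache_size on David's box.
total_ram_mb = 2048   # assumed 2 GB machine
reserved_mb = 512     # assumed OS + sort_mem + vacuum_mem overhead
page_kb = 8           # default PostgreSQL block size

cache_mb = total_ram_mb - reserved_mb             # ~1.5 GB left for file cache
effective_cache_size = cache_mb * 1024 // page_kb  # in 8 kB pages
print(effective_cache_size)  # 196608
```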

 -Original Message-
 From: Martijn van Oosterhout [mailto:[EMAIL PROTECTED] 
 Sent: Monday, December 06, 2004 10:39 AM
 To: David Esposito
 Cc: [EMAIL PROTECTED]
 Subject: Re: [GENERAL] Performance tuning on RedHat Enterprise Linux 3
 
 On Mon, Dec 06, 2004 at 09:08:02AM -0500, David Esposito wrote:
  shared_buffers = 131072 (roughly 1GB)
  max_fsm_relations = 1  
  max_fsm_pages = 1000   
  sort_mem = 4096
  vacuum_mem = 262144
  Roughly 25 - 30 connections open (mostly idle) at any given time
  (connection pools)   
 
 I'd suggest reducing shared_buffers to maybe a few thousand, there's
 really no point reserving so much memory that way, it just a waste.
 
 Secondly, up your sort_mem a bit to reflact how big your sorts are
 likely to be.
 
 How's your effective_cache_size?
 -- 
 Martijn van Oosterhout   [EMAIL PROTECTED]   
 http://svana.org/kleptog/
  Patent. n. Genius is 5% inspiration and 95% perspiration. A 
 patent is a
  tool for doing 5% of the work and then sitting around 
 waiting for someone
  else to do the other 95% so you can sue them.
 


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] SSL confirmation - (could not accept SSL connection: sslv3 alert certificate unknown )

2004-12-06 Thread Andrew M
Hi,
after much research to various mail-lists, it seems that there is a bug 
in the beta versions of PostgreSQL 8 which is causing the problem. Is 
there any way to create an SSLv3-specific certificate?

regards
Andrew
On 6 Dec 2004, at 12:28, Kris Jurka wrote:

On Mon, 6 Dec 2004, Andrew M wrote:
jdbc:postgresql://localhost:5432/mydatabase?ssl
When I launch Jboss, which handles the connection to postgresql, I get
the following error:
javax.net.ssl.SSLHandshakeException:
sun.security.validator.ValidatorException: No trusted certificate 
found

Difficult to say.  Perhaps JBoss is deciding to use an alternate
truststore?  Perhaps it doesn't like your cert setup.  I would suggest
first running a simple client program to test that it's working first.
Also adding -Djavax.net.debug=ssl to the java command will help 
debugging
ssl problems.

Further, the 8.0 JDBC driver can create SSL connections without doing
authentication by adding an additional url parameter:
sslfactory=org.postgresql.ssl.NonValidatingFactory
You should try to get it to authenticate correctly, but this is another
useful test point.
Kris Jurka

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] reclaiming diskspace bloat w/near-zero downtime

2004-12-06 Thread Richard Ellis
On Fri, Dec 03, 2004 at 09:24:48AM -0700, Ed L. wrote:
 Sounds like 24x7x365 operations on a single cluster is maybe too
 tall of an order under these loads.  Maybe time for slony  friends
 for maintenance switchovers.

24x7x365 was never the true operation point.  All systems have/need
regularly scheduled downtime for service/maintence.  Even systems
that claimed five nines reliability did not include their regular
scheduled downtime in the five nines calculation.

http://www.bcr.com/bcrmag/2002/05/p22.php
See the Myth vs. Reality section.


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] SSL confirmation - (could not accept SSL connection:

2004-12-06 Thread Kris Jurka


On Mon, 6 Dec 2004, Andrew M wrote:

 after much research to various mail-lists, it seems that there is a bug 
 in the beta versions of postgreSQL 8 which is causing the problem. Is 
 there anyway to create a SSLv3 specific certificate?
 

How about sharing a little of that research with us.  The whole JDBC 
and SSL setup worked fine for me last time I tested it (mid-October) and I 
have little reason to believe it is broken now.  What bug are you 
seeing?

Kris Jurka

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] SSL confirmation - (could not accept SSL connection:

2004-12-06 Thread Andrew M
Kris,
the information I base my statements on is from the mail-list:
http://www.mail-archive.com/pgsql-bugs@postgresql.org/msg10008.html
I can confirm that JBoss does not request a keystore, and that it is 
the driver which is having a problem with the format of the 
certificate. I can confirm that server.key, server.crt, and root.crt 
need to be placed in the root of data:

/usr/local/pgsql/data
If you remove any of those files from the directory, PostgreSQL does 
not start up. This is based on requesting ssl with -l.

The options left are to create an SSLv3 specific certificate, if that 
can be done?? or build version 7.4.* of postgreSQL

Any ideas
regards
Andrew
On 6 Dec 2004, at 19:53, Kris Jurka wrote:

On Mon, 6 Dec 2004, Andrew M wrote:
after much research to various mail-lists, it seems that there is a 
bug
in the beta versions of postgreSQL 8 which is causing the problem. Is
there anyway to create a SSLv3 specific certificate?

How about sharing a little of that research with us.  The whole JDBC
and SSL setup worked fine for me last time I tested it (mid-October) 
and I
have little reason to believe it is broken now.  What bug are you
seeing?

Kris Jurka
---(end of 
broadcast)---
TIP 1: subscribe and unsubscribe commands go to 
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[GENERAL] Index scan vs. Seq scan on timestamps

2004-12-06 Thread Per Jensen
List,
PG version is 7.4.2
I log apache hits to a postgres server. The table layout is as follows:
apachelog=# \d accesslog
Table public.accesslog
  Column  |   Type   | Modifiers
----------+--------------------------+-----------
 id   | integer  | not null
 ip   | character varying(15)|
 ident| character varying(200)   |
 auth | character varying(200)   |
 time | timestamp with time zone |
 request  | character varying(200)   |
 code | integer  |
 bytes| integer  |
 referrer | character varying(200)   |
 agent| character varying(200)   |
Indexes:
accesslog_pkey primary key, btree (id)
accesslog_time_idx btree (time)
Number of rows: approx: 530.000
If I want to count hits 30 days back this query uses an efficient
index scan:
select count(*)
from accesslog
where time  between (current_timestamp - INTERVAL '30 d') and 
current_timestamp;

Aggregate  (cost=8294.40..8294.40 rows=1 width=0)
   ->  Index Scan using accesslog_time_idx on accesslog 
(cost=0.00..8287.97 rows=2569 width=0)
 Index Cond: ((time >= (('now'::text)::timestamp(6) with time 
zone - '30 days'::interval)) AND (time <= ('now'::text)::timestamp(6) 
with time zone))
(3 rows)

while this query uses a less efficient seq scan:
select count(*)
from accesslog
where time  between (timeofday()::timestamp - INTERVAL '30 d') and 
timeofday()::timestamp;

Aggregate  (cost=34966.56..34966.56 rows=1 width=0)
   ->  Seq Scan on accesslog  (cost=0.00..34823.86 rows=57077 width=0)
 Filter: ((time >= (((timeofday())::timestamp without time 
zone - '30 days'::interval))::timestamp with time zone) AND (time <= 
((timeofday())::timestamp without time zone)::timestamp with time zone))
(3 rows)

Why does PG not use the index on the time column in the second select, 
timeofday() has been cast to a timestamp after all.

Any insight much appreciated.
Regards
Per
--
--
Per Jensenhttp://www.net-es.dk/~pj Linux rules!
--
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] When to encrypt

2004-12-06 Thread Greg Stark
Derek Fountain [EMAIL PROTECTED] writes:

 On Monday 06 December 2004 12:31, you wrote:
  Derek Fountain [EMAIL PROTECTED] writes:
   If another SQL Injection vulnerability turns up (which it might, given
   the state of the website code),
 
  You will never see another SQL injection vulnerability if you simply switch
  to always using prepared queries and placeholders.
 
  <much wisdom snipped>
 
 Indeed, but I'm still interested in the general answer. 

I would argue that never interpolating user-provided data into your query
strings _is_ the general answer. It's going through laborious case-by-case
quoting that's non-general and can fail if any single instance isn't done
properly. If you use always use placeholders then there's nothing you can fail
to do properly that would cause an injection vulnerability.

You could use something like perl's taint tracking to keep track of whether
the data used in the query string is tainted by user-provided data. This would
even let you use manual quoting since it lets you designate functions that
untaint strings. But even that seems risky to me. taintperl is liberal about
what it considers detainting. I prefer to allow only constant program-defined
strings to be used in my queries period.
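Greg's placeholder rule in miniature: the parameter travels as data and is never spliced into the SQL string, so an injection payload ends up stored verbatim instead of being executed. SQLite's DB-API module is used here because it ships with Python; PostgreSQL drivers expose the same placeholder idea, with a different paramstyle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

payload = "x'); DROP TABLE users; --"

# Placeholder: the payload is passed out-of-band as a parameter,
# never interpolated into the query string.
conn.execute("INSERT INTO users (name) VALUES (?)", (payload,))

stored = conn.execute("SELECT name FROM users").fetchone()[0]
print(stored == payload)  # True -- stored verbatim, table still exists
```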

 Given this type of mess, having logins, passwords, credit card info and the 
 like encrypted in the DB will add another layer of protection. The question 
 is, do people normally add this layer, just in case, or do they assume that 
 all the previous layers will do the job?

Layers are not useful unless they're effective. You can have 10 layers of 90%
effective security but it would be worthless. You still have an insecure
system.

The only useful way to use real-time encryption for a web server is public key
encryption for write-only data like credit card numbers. Usually the web
server really doesn't need access to existing credit card data. It only needs
to be able to add new credit card data or perhaps copy existing credit card
data.

So you could have the web server encrypt the credit card numbers using RSA and
store them in the database. Then only the credit card processing job which
might run on a highly secure dedicated box would pull the data and use the
private key to process the transactions.

The nice thing about this is that it isn't going to stop your web server or
database from being cracked, but it will limit the damage. The attacker can't
download a database of your entire customer base's credit card numbers.

Personally I think this is the only responsible way to run a system that keeps
credit card data. But sadly the rest of the world doesn't seem to agree.

By contrast, encryption is useful for non-live data such as database backups.
This lets you take them off-site and store them someplace without worrying
about someone walking off with your entire database. Or to discard the tapes
without worrying about someone reading your old data from the discarded tapes.
(Assuming of course that you don't write the key on the label...)

-- 
greg


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] select single entry and its neighbours using direct-acess to index?

2004-12-06 Thread Greg Stark

Pierre-Frédéric Caillaud [EMAIL PROTECTED] writes:

   One could always use ARRAY[name,name2] > ARRAY['b','a']
   But it is NOT index-friendly...

It won't use an existing two-column index but you can create an expression
index on array[name,name2] and this expression will use it. It won't work if
either column is NULL though.

-- 
greg


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Index scan vs. Seq scan on timestamps

2004-12-06 Thread Andrew - Supernews
On 2004-12-06, Per Jensen [EMAIL PROTECTED] wrote:
 Why does PG not use the index on the time column in the second select, 
 timeofday() has been cast to a timestamp after all.

timestamp is timestamp without time zone (not the most useful type in
the world). Your column is of type timestamp with time zone (correct).
The relationship between the two is not trivial and the lack of an index
scan therefore expected. Try casting to timestamp with time zone instead.

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Auditing with shared username

2004-12-06 Thread Eric E
Well, upon further reflection, I came to this conclusion:
In order to do trigger-based auditing that logs a homegrown user, you 
need to hand the database some token or identifier for the user that it 
can use to record the user into the audit log.  That part is pretty 
straightforward: if you're not using the db's login system, the db can't 
know which user this is unless you tell it.

The only relatively secure and general way I can see to do this is to 
have each table exposed by a view that adds an updating-username field 
to the basic fields, and then uses a rule to hand that username off to 
the auditing function.

Any thoughts?  Did anyone do this differently?  If so, how?
Thanks,
Eric
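Eric's conclusion, that the middleware must hand the database a token naming the homegrown user so the audit trigger can record it, can be sketched as below. SQLite stands in for PostgreSQL; the one-row session table is a stand-in for whatever per-connection slot (e.g. a temp table) you would actually use, and all table and column names here are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- one-row, per-connection slot holding the homegrown user's token
    CREATE TABLE session (app_user TEXT);
    INSERT INTO session VALUES (NULL);

    CREATE TABLE accounts (id INTEGER, balance INTEGER);
    CREATE TABLE audit (app_user TEXT, id INTEGER, new_balance INTEGER);

    -- the audit trigger reads the token back out of the session slot
    CREATE TRIGGER accounts_audit AFTER UPDATE ON accounts
    BEGIN
        INSERT INTO audit
        SELECT app_user, NEW.id, NEW.balance FROM session;
    END;
""")

conn.execute("INSERT INTO accounts VALUES (1, 100)")
# middleware sets the token once per pooled-connection checkout
conn.execute("UPDATE session SET app_user = ?", ("alice",))
conn.execute("UPDATE accounts SET balance = 42 WHERE id = 1")

print(conn.execute("SELECT * FROM audit").fetchall())
```

With connection pooling, the checkout/checkin hooks are the natural place to set and clear the slot, which addresses Ian's objection that a pooled connection cannot by itself identify the user.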
Ian Harding wrote:
I have a homegrown userid/password system in a database table, and on
tables I audit, I keep the id of the last person to touch that record,
and have a trigger write the changed values out to an audit table.  It
works fine, but of course there is some overhead involved.
You can't involve postgres connections as representing a user since any
connection pooling system will make that useless.  PG doesn't have
connection pooling, that is a higher level application function.  


 

Eric E [EMAIL PROTECTED] 12/06/04 8:58 AM 
   

Hi all,
   Like many folks who use three-tier design, I would like to create an
audit trail in my Postgres database, and I would like to do so without 
having to create a database user for each audit.

As I see it, there are two ways to do this, and I can't see a clear way 
to do either of them.  If anyone has better suggestions, I'd of course 
love to hear them.

Here's what I'd thought up:
1) Connect my homebrew login system which runs out of a couple database 
tables to postgres connection/sessionID (i.e., keep track of which 
sessionID represents my current user) so that any audit function can use

the session ID to look up the current user.
2) Maintain a current homebrew user session variable that is distinct 
from Postgres' current_user, which I believe stores the current database

user.   I found a couple threads on session variables, but mostly they 
were discouraging people from using such variables.

Does anyone have any good ideas or advice?
Also, both of these methods require that a user maintain his/her own 
session.  I don't know how PG's connection pooling works, but is it 
actually possible to specify a particular session for a particular 
user?  Is there some place I can find documentation on how Postgres 
deals with logins and sessions?

Many thanks,
Eric
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if
your
 joining column's datatypes do not match
 


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] Index scan vs. Seq scan on timestamps

2004-12-06 Thread Per Jensen
Andrew - Supernews wrote:
On 2004-12-06, Per Jensen [EMAIL PROTECTED] wrote:
Why does PG not use the index on the time column in the second select, 
timeofday() has been cast to a timestamp after all.

timestamp is timestamp without time zone (not the most useful type in
the world). Your column is of type timestamp with time zone (correct).
The relationship between the two is not trivial and the lack of an index
scan therefore expected. Try casting to timestamp with time zone instead.
Andrew,
thanks for your fast reply.
explain
select count(*)
from accesslog
where time between (timeofday()::timestamptz - INTERVAL '30 d') and 
timeofday()::timestamptz;

gives
 Aggregate  (cost=32398.12..32398.12 rows=1 width=0)
   ->  Seq Scan on accesslog  (cost=0.00..32255.42 rows=57077 width=0)
         Filter: ((time >= ((timeofday())::timestamp with time zone - 
'30 days'::interval)) AND (time <= (timeofday())::timestamp with time 
zone))

Still a seq scan
/Per
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] When to encrypt

2004-12-06 Thread Martijn van Oosterhout
On Mon, Dec 06, 2004 at 04:07:25PM -0500, Greg Stark wrote:
 By contrast, encryption is useful for non-live data such as database backups.
 This lets you take them off-site and store them someplace without worrying
 about someone walking off with your entire database. Or to discard the tapes
 without worrying about someone reading your old data from the discarded tapes.
 (Assuming of course that you don't write the key on the label...)

Actually, hard disk encryption is useful for one thing: so if somebody
kills the power and takes the hard disk/computer, the data is safe.
While it's running it's vulnerable though...
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.


pgpTFqDf23LPm.pgp
Description: PGP signature


[GENERAL] Detecting Temporary Tables

2004-12-06 Thread Jamie Deppeler
Is it possible to detect the instance of a temporary table through a function?
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] SSL confirmation - (could not accept SSL connection:

2004-12-06 Thread Tom Lane
Andrew M [EMAIL PROTECTED] writes:
 the information I base my statements on is from the mail-list:

 http://www.mail-archive.com/pgsql-bugs@postgresql.org/msg10008.html

AFAICT that thread describes pilot error compounded by poor error
messages from our SSL code.  The error message problem, at least,
should be fixed in 8.0RC1.  If you can demonstrate a problem in RC1,
let's see specifics.

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] More problems

2004-12-06 Thread Jamie Deppeler
Hi,
Newbie question
I have a result set returned through a function or rule. I know I will 
have to process the result set through a for loop; my question is: how 
can I get the length of the result set?

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] SSL confirmation - (could not accept SSL connection:

2004-12-06 Thread Andrew M
Hi Tom,
below is a breakdown of the problems i am experiencing with SSL
Connection String:
jdbc:postgresql://localhost:5432/mydatabase?ssl
my Driver is:
DriverVersion PostgreSQL 8.0devel JDBC3 with SSL (build 308)
The error message i'm getting is:

Code:
14:12:56,779 WARN  [SettingsFactory] Could not obtain connection metadata
org.jboss.util.NestedSQLException: Could not create connection; - nested throwable: (org.postgresql.util.PSQLException: The connection attempt failed.); - nested throwable: (org.jboss.resource.JBossResourceException: Could not create connection; - nested throwable: (org.postgresql.util.PSQLException: The connection attempt failed.))
        at org.jboss.resource.adapter.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:106)
        at net.sf.hibernate.connection.DatasourceConnectionProvider.getConnection(DatasourceConnectionProvider.java:59)
        at net.sf.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:73)
        at net.sf.hibernate.cfg.Configuration.buildSettings(Configuration.java:1132)
        at net.sf.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:766)
        at org.jboss.hibernate.jmx.Hibernate.buildSessionFactory(Hibernate.java:476)
        at org.jboss.hibernate.jmx.Hibernate.startService(Hibernate.java:444)
        at org.jboss.system.ServiceMBeanSupport.jbossInternalStart(ServiceMBeanSupport.java:271)
        at org.jboss.system.ServiceMBeanSupport.jbossInternalLifecycle(ServiceMBeanSupport.java:221)
        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:324)
        at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:141)
        at org.jboss.mx.server.Invocation.dispatch(Invocation.java:80)
        at org.jboss.mx.server.Invocation.invoke(Invocation.java:72)
        at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:242)
        at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:642)
        at org.jboss.system.ServiceController$ServiceProxy.invoke(ServiceController.java:891)
        at $Proxy0.start(Unknown Source)
        at org.jboss.system.ServiceController.start(ServiceController.java:416)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:324)
        at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:141)
        at org.jboss.mx.server.Invocation.dispatch(Invocation.java:80)
        at org.jboss.mx.server.Invocation.invoke(Invocation.java:72)
        at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:242)
        at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:642)
        at org.jboss.mx.util.MBeanProxyExt.invoke(MBeanProxyExt.java:176)
        at $Proxy4.start(Unknown Source)
        at org.jboss.deployment.SARDeployer.start(SARDeployer.java:261)
        at org.jboss.deployment.MainDeployer.start(MainDeployer.java:935)
        at org.jboss.deployment.MainDeployer.start(MainDeployer.java:927)
        at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:746)
        at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:709)
        at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:324)
        at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:141)
        at org.jboss.mx.server.Invocation.dispatch(Invocation.java:80)
        at org.jboss.mx.interceptor.AbstractInterceptor.invoke(AbstractInterceptor.java:119)
        at org.jboss.mx.server.Invocation.invoke(Invocation.java:74)
        at org.jboss.mx.interceptor.ModelMBeanOperationInterceptor.invoke(ModelMBeanOperationInterceptor.java:131)
        at org.jboss.mx.server.Invocation.invoke(Invocation.java:74)
        at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:242)
        at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:642)
        at org.jboss.mx.util.MBeanProxyExt.invoke(MBeanProxyExt.java:176)
        at $Proxy8.deploy(Unknown Source)
        at org.jboss.deployment.scanner.URLDeploymentScanner.deploy(URLDeploymentScanner.java:305)
        at org.jboss.deployment.scanner.URLDeploymentScanner.scan(URLDeploymentScanner.java:481)
        at org.jboss.deployment.scanner.AbstractDeploymentScanner$ScannerThread.doScan(AbstractDeploymentScanner.java:204)
        at  

Re: [GENERAL] When to encrypt

2004-12-06 Thread Greg Stark
Martijn van Oosterhout [EMAIL PROTECTED] writes:

 Actually, hard disk encryption is useful for one thing: so if somebody
 kills the power and takes the hard disk/computer, the data is safe.
 While it's running it's vulnerable though...

Where do you plan to keep the key?

-- 
greg


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] SSL confirmation - (could not accept SSL connection:

2004-12-06 Thread Tom Lane
Andrew M [EMAIL PROTECTED] writes:
 below is a breakdown of the problems i am experiencing with SSL
 ...
 Caused by: javax.net.ssl.SSLHandshakeException:  
 sun.security.validator.ValidatorException: No trusted certificate found

It would appear that either you didn't put a root certificate into the
server's $PGDATA/root.crt file, or the server certificate you put into
$PGDATA/server.crt isn't signed by any of the CAs that the client code
thinks are trusted.  I have no idea where the javax.net.ssl library
looks for trusted certs; possibly you could find someone more clueful
about that on pgsql-jdbc.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] Network authentication

2004-12-06 Thread Bob Parnes
I am having trouble connecting to a database on a debian server from a 
client system, also debian. According to the documentation, this is
possible without a password; the documentation offers the following example,

hosttemplate1   all 192.168.93.0  255.255.255.0 \
  ident sameuser

My pg_hba.conf file has a line,

hostall all 192.168.1.0   255.255.255.0 \
  ident sameuser

Also, the client system has an account with the same name and password
as an account on the server. However, connection fails with psql after 
logging into that account. I can, however, ssh to the server and connect 
from there; but there are reasons why I don't want to use ssh with this
project. Anyway, is the documentation wrong, or am I misinterpreting 
something? 

In addition I noticed that if I have a second line in the pg_hba.conf file,

hostall all 192.168.1.0   255.255.255.0 md5

coming before the other line, I can connect to the server database using
a password. However, if it follows the line, I cannot. Am I doing 
something wrong here also?

Thanks for any help.


-- 
Bob Parnes
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] 3rd RFD: comp.databases.postgresql (was: comp.databases.postgresql.*)

2004-12-06 Thread Brian {Hamilton Kelly}
On Saturday, in article
 [EMAIL PROTECTED]
 [EMAIL PROTECTED] Robert McClenon wrote:

 I think that the term that is occasionally used is that the hierarchy
 has a hierarchy czar.  That is the most straightforward way to manage
 a hierarchy.  I did not say that it was the best or the worst, only
 the most straightforward.  It doesn't address the question of what
 happens if the czar disappears, for instance.

Seventy-five years' rule by Soviet?

-- 
Brian {Hamilton Kelly} [EMAIL PROTECTED]
   I don't use Linux. I prefer to use an OS supported by a large multi-
   national vendor, with a good office suite, excellent network/internet
   software and decent hardware support.

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Performance tuning on RedHat Enterprise Linux 3

2004-12-06 Thread Paul Tillotson
snip
... under the periods of heavy swapping, one or more of the postgres
processes would be way up there (between 500MB and 1000MB (which would
easily explain the swapping)) ... the question is: why aren't all of the
processes sharing the same pool of shared memory since I thought that's what
I'm doing when adjusting the shared_buffers property?
 

snip
I seem to remember hearing that the memory limit on certain operations, 
such as sorts, is not enforced (may the hackers correct me if I am 
wrong); rather, the planner estimates how much a sort might take by 
looking at the statistics for a table.

If the statistics are wrong, however, the sort doesn't actually stay 
within sort memory, and so the process consumes a very large amount of 
memory, much more than the sort_mem configuration parameter should allow 
it to.

If the other suggestions given (to reduce the shared buffers) don't fix 
it, I suggest running ANALYZE all your tables and see if the erratic 
memory usage goes away. If that doesn't help, then try to figure out 
what query is causing the high memory usage, and run EXPLAIN ANALYZE on 
just that query to see if it is returning drastically more rows than the 
planner thinks it will.

Paul Tillotson
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] Network authentication

2004-12-06 Thread Doug McNaught
Bob Parnes [EMAIL PROTECTED] writes:

 I am having trouble connecting to a database on a debian server from a 
 client system, also debian. Acccording to the documentation, this is
 possible without a password and offers the following example,

 hosttemplate1   all 192.168.93.0  255.255.255.0 \
   ident sameuser

You need to be running an ident daemon on the client machine, and also
to be aware of the security issues involved with ident.

 In addition I noticed that if I have a second line in the pg_hba.conf file,

 hostall all 192.168.1.0   255.255.255.0 md5

 coming before the other line, I can connect to the server database using
 a password. However, if it follows the line, I cannot. Am I doing 
 something wrong here also?

Only the first matching line in pg_hba.conf is used.
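To illustrate the ordering rule with the addresses from the question (a
sketch of a pg_hba.conf fragment, not a recommendation): with the lines
in this order, a client from 192.168.1.0/24 always matches the md5 line
and is asked for a password; swap the two lines and the ident rule
matches first instead.

```
# pg_hba.conf -- the first line whose type/database/user/address match wins
host  all  all  192.168.1.0  255.255.255.0  md5
host  all  all  192.168.1.0  255.255.255.0  ident sameuser
```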

-Doug

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Performance tuning on RedHat Enterprise Linux 3

2004-12-06 Thread Neil Conway
On Mon, 2004-12-06 at 19:37 -0500, Paul Tillotson wrote:
 I seem to remember hearing that the memory limit on certain operations, 
 such as sorts, is not enforced (may the hackers correct me if I am 
 wrong); rather, the planner estimates how much a sort might take by 
 looking at the statistics for a table.
 
 If the statistics are wrong, however, the sort doesn't actually stay 
 within sort memory, and so the process consumes a very large amount of 
 memory, much more than the sort_mem configuration parameter should allow 
 it to.

AFAIK this is not the case. sort_mem defines the in-memory buffer used
_per_ sort operation. The problem you may be referring to is that
multiple concurrent sort operations (possibly within a single backend)
will each consume up to sort_mem, so the aggregate memory usage for sort
operations may be significantly higher than sort_mem.

-Neil



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] hooks for supporting third party blobs?

2004-12-06 Thread Eric Davies


A recent project of ours involved storing/fetching some reasonably large
datasets in a home-brew datatype. The datasets tended to range from
a few megabytes to several gigabytes. We were seeing some nonlinear
slowness with using native large objects with larger datasets, presumably
due to the increasing depth of the btree index used to track all the
little pieces of the blobs.
After some careful consideration, we implemented an alternative to large
objects, a system based on storing files in a particular directory, and
storing a reference to the files in the database. It worked and gave us
good and consistent performance. However, it doesn't support transactions
(no isolation, no rollback). We can probably implement some backend code
to support such functionality, but the trick is getting the postgres
server to keep our code in the loop (so to speak) about when a rollback
should be done (and to when). 
Is anyone aware of any hooks to support schemes such as ours, or has
solved a similar problem?
Thank you.


** 
Eric Davies, M.Sc. 
Barrodale Computing Services Ltd. 
Tel: (250) 472-4372 Fax: (250) 472-4373 
Web:
http://www.barrodale.com

Email: [EMAIL PROTECTED] 
** 
Mailing Address: 
P.O. Box 3075 STN CSC 
Victoria BC Canada V8W 3W2
Shipping Address: 
Hut R, McKenzie Avenue 
University of Victoria 
Victoria BC Canada V8W 3W2 
**





Re: [GENERAL] Performance tuning on RedHat Enterprise Linux 3

2004-12-06 Thread Alvaro Herrera
On Tue, Dec 07, 2004 at 12:02:13PM +1100, Neil Conway wrote:
 On Mon, 2004-12-06 at 19:37 -0500, Paul Tillotson wrote:
  I seem to remember hearing that the memory limit on certain operations, 
  such as sorts, is not enforced (may the hackers correct me if I am 
  wrong); rather, the planner estimates how much a sort might take by 
  looking at the statistics for a table.

 AFAIK this is not the case.

AFAIK this is indeed the case with hashed aggregation, which uses the
sort_mem (work_mem) parameter to control its operation, but for which it
is not a hard limit.

I concur however that multiple concurrent sorts may consume more memory
than the limit specified for one sort.  (Just last week I saw a server
running with sort_mem set to 800 MB ... no wonder the server went belly
up every day at 3.00am, exactly when a lot of reports were being
generated)

-- 
Alvaro Herrera ([EMAIL PROTECTED])
Accept the honors and applause and you will lose your freedom

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] hooks for supporting third party blobs?

2004-12-06 Thread Alvaro Herrera
On Mon, Dec 06, 2004 at 05:11:21PM -0800, Eric Davies wrote:

 Is anyone aware of any hooks to support schemes such as ours, or has solved 
 a similar problem?

There's RegisterXactCallback() and RegisterSubXactCallback() functions
that may be what you want.  They are called whenever a transaction or
subtransaction starts, commits, or aborts.  You could probably keep a
list of things modified during the transaction, so you can clean up at
transaction end.

(Much like the storage manager does: it only unlinks files for dropped
tables at transaction commit.)

Make sure to react appropriately at subtransaction abort ...

-- 
Alvaro Herrera ([EMAIL PROTECTED])
If you want to be creative, learn the art of wasting time

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Performance tuning on RedHat Enterprise Linux 3

2004-12-06 Thread Neil Conway
On Mon, 2004-12-06 at 22:19 -0300, Alvaro Herrera wrote:
 AFAIK this is indeed the case with hashed aggregation, which uses the
 sort_mem (work_mem) parameter to control its operation, but for which it
 is not a hard limit.

Hmmm -- I knew we didn't implement disk-spilling for hashed aggregation,
but I thought we had _some_ sane means to avoid consuming a lot of
memory if we got the plan completely wrong. AFAICS you are right, and
this is not the case :-( We definitely ought to fix this.

-Neil



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Index scan vs. Seq scan on timestamps

2004-12-06 Thread Stephan Szabo

On Mon, 6 Dec 2004, Per Jensen wrote:

 select count(*)
 from accesslog
 where time  between (timeofday()::timestamp - INTERVAL '30 d') and
 timeofday()::timestamp;

Besides the type issue, timeofday() is volatile and thus is not allowed to
be turned into a constant in order to do an index scan because it's
allowed to return different values for every row of the input.


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Index scan vs. Seq scan on timestamps

2004-12-06 Thread Stephen Frost
* Stephan Szabo ([EMAIL PROTECTED]) wrote:
 On Mon, 6 Dec 2004, Per Jensen wrote:
  select count(*)
  from accesslog
  where time  between (timeofday()::timestamp - INTERVAL '30 d') and
  timeofday()::timestamp;
 
 Besides the type issue, timeofday() is volatile and thus is not allowed to
 be turned into a constant in order to do an index scan because it's
 allowed to return different values for every row of the input.

Is there a way to say "just take the value of this function at the start
of the transaction" and then have it be constant in a query?

Stephen


signature.asc
Description: Digital signature


Re: [GENERAL] Index scan vs. Seq scan on timestamps

2004-12-06 Thread Andrew - Supernews
On 2004-12-07, Stephen Frost [EMAIL PROTECTED] wrote:
 Is there a way to say just take the value of this function at the start
 of the transaction and then have it be constant in a query?

Why not use CURRENT_TIMESTAMP, etc., which do exactly that?
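Applied to the query from earlier in the thread, that suggestion looks
like this (a sketch: CURRENT_TIMESTAMP is fixed for the whole
transaction, so the planner can treat both bounds as constants and
consider the index on time):

```sql
SELECT count(*)
FROM accesslog
WHERE time BETWEEN CURRENT_TIMESTAMP - INTERVAL '30 days'
               AND CURRENT_TIMESTAMP;
```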

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Triggers don't activate when dropping table

2004-12-06 Thread cheng shan
I have one table named "object" and many tables that are related to the 
table "object". When a record in "object" is deleted, the related records 
in other tables should also be erased. To implement the above goal, i 
define many triggers on table "object" to perform those operations 
automatically.

But i find that the triggers don't activate when i drop the table 
"object". So there is much garbage information left in the system, which 
makes the system unstable.

Could anyone help?
Thank you!
 simon


Re: [GENERAL] Detecting Temporary Tables

2004-12-06 Thread Michael Fuhr
On Tue, Dec 07, 2004 at 09:22:51AM +1100, Jamie Deppeler wrote:

 Is it possible to detect the instance of a Temporary through a function?

What are you trying to do?

Temporary tables are stored in the pg_temp_NNN schema (e.g.,
pg_temp_1).  See the System Catalogs chapter in the PostgreSQL
documentation for information on finding a table's schema.  If
you're using 7.4 or later then see also the Information Schema
chapter -- one of its views has two fields that should be helpful.
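For example, a catalog query along these lines (table name hypothetical)
reports whether a table lives in one of the pg_temp_NNN schemas:

```sql
-- true when 'mytable' is a temporary table in some pg_temp_NNN schema
SELECT n.nspname LIKE 'pg_temp_%' AS is_temporary
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'mytable';
```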

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Performance tuning on RedHat Enterprise Linux 3

2004-12-06 Thread Paul Tillotson
Alvaro Herrera wrote:
On Tue, Dec 07, 2004 at 12:02:13PM +1100, Neil Conway wrote:
On Mon, 2004-12-06 at 19:37 -0500, Paul Tillotson wrote:
I seem to remember hearing that the memory limit on certain operations, 
such as sorts, is not enforced (may the hackers correct me if I am 
wrong); rather, the planner estimates how much a sort might take by 
looking at the statistics for a table.
AFAIK this is not the case.
AFAIK this is indeed the case with hashed aggregation, which uses the
sort_mem (work_mem) parameter to control its operation, but for which it
is not a hard limit.
I concur however that multiple concurrent sorts may consume more memory
than the limit specified for one sort.  (Just last week I saw a server
running with sort_mem set to 800 MB ... no wonder the server went belly
up every day at 3.00am, exactly when a lot of reports were being
generated)

Does postgres actually do multiple concurrent sorts within a single 
backend?  I didn't think it would ever do this, since each backend has 
only a single thread.  David says that he sees a particular process 
start to consume very large amounts of memory, and from my understanding 
of postgres, this must be one single query taking a lot of memory, not 
multiple concurrent sorts.

Paul Tillotson
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Index bloat in 7.2

2004-12-06 Thread Christopher Browne
Quoth [EMAIL PROTECTED] (Alvaro Herrera):
 On Mon, Dec 06, 2004 at 08:48:04AM -, Julian Scarfe wrote:

 b) Only a dump-restore major version upgrade (which we'll do next time we
 can take the system out for long enough) will avoid the issue.

 Long enough could be a minutes or seconds issue if you use Slony-I,
 I've heard ...  (Of course you'd still need to fix your apps, which may
 take somewhat longer than that.)

Unfortunately, Slony-I does not support versions of PostgreSQL earlier
than 7.3.3.  It needs namespace support...
-- 
(format nil [EMAIL PROTECTED] cbbrowne gmail.com)
http://www.ntlug.org/~cbbrowne/advocacy.html
How come you don't ever hear about gruntled employees? 

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] When to encrypt

2004-12-06 Thread Christopher Browne
The world rejoiced as [EMAIL PROTECTED] (Martijn van Oosterhout) wrote:
 On Mon, Dec 06, 2004 at 04:07:25PM -0500, Greg Stark wrote:
 By contrast, encryption is useful for non-live data such as
 database backups.  This lets you take them off-site and store them
 someplace without worrying about someone walking off with your
 entire database. Or to discard the tapes without worrying about
 someone reading your old data from the discarded tapes.  (Assuming
 of course that you don't write the key on the label...)

 Actually, hard disk encryption is useful for one thing: so if
 somebody kills the power and takes the hard disk/computer, the data
 is safe.  While it's running it's vulnerable though...

Why do you think that's useful in limiting vulnerability?

In order for the system to mount the filesystem, the key has got to be
there.

If it's a highly available system, it's not acceptable for the
system to have to wait for a sysadmin to type in a decryption key, so
the key has to be sitting there, vulnerable to theft.

Given some sort of secure crypto hardware (nCipher, Sun Crypto
Accelerator, and such), it's possible to make the system reasonably
tamper-resistant, but the costs are pretty hefty, and tamper
resistance requires leaping back into the risk that a power outage
would require manual intervention to reinitialize the cryptographic
device.

This is a big problem: You can't just apply cryptography onto things
like you would add peanut butter to a sandwich and expect to actually
get security.  It is eminently easy for a cryptographic system to only
provide the _impression_ of security.
-- 
let name=cbbrowne and tld=gmail.com in String.concat @ [name;tld];;
http://linuxfinances.info/info/crypto.html
It is usually a   good idea to  put  a capacitor of a  few microfarads
across the output, as shown.

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] More problems

2004-12-06 Thread Michael Fuhr
On Tue, Dec 07, 2004 at 09:44:44AM +1100, Jamie Deppeler wrote:

 I have a result set returned through a function or rule, i know i will 
 have process the result set through a for loop, my question is how can i 
 get length of result set?

Do you want to know how many rows a query will return?  That depends
on what language you're using and how you're making the query.
Please provide more details about what you're trying to do.
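If the question is about PL/pgSQL, one common idiom is simply to count
inside the FOR loop (a sketch with a hypothetical table; the
single-quoted function body matches pre-8.0 syntax):

```sql
CREATE FUNCTION process_rows() RETURNS integer AS '
DECLARE
    rec RECORD;
    n   integer := 0;
BEGIN
    FOR rec IN SELECT * FROM mytable LOOP
        n := n + 1;    -- process rec here
    END LOOP;
    RETURN n;          -- number of rows in the result set
END;
' LANGUAGE plpgsql;
```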

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html



Re: [GENERAL] Triggers don't activate when dropping table

2004-12-06 Thread Michael Fuhr
On Tue, Dec 07, 2004 at 11:44:58AM +0800, cheng shan wrote:

 I have one table named object and i many tables that are related
 to the table object. When a record in object is deleted, the
 relative records in other tables should also be erased.  To implement
 the above goal,  i define many triggers on table object to make
 operations automatically.
  
 But i find that the triggers don't activate when i dropping the
 table object. So there is many garbage information left in the
 system which makes the system unstable.

If you want to fire the triggers then delete all records from the
table before dropping it.  But why are you dropping the table?
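In other words (a sketch using the table name from the question):

```sql
BEGIN;
DELETE FROM object;   -- fires the ON DELETE triggers for every row
DROP TABLE object;    -- DROP itself fires no row-level triggers
COMMIT;
```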

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Rules

2004-12-06 Thread Michael Fuhr
On Mon, Dec 06, 2004 at 04:15:54PM +1100, Jamie Deppeler wrote:

 i have a join table eg
 
 primarykey
 field1
 field2
 
 and based on SQL Select have the values of field1 inserted into a new 
 table which will scroll which will launch a trigger but im having 
 problems getting this to work? Any ideas anyone?

I'm not sure I follow.  Could you post examples of what you've
tried, describe what you want to happen, and tell us what actually
did happen?

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Performance tuning on RedHat Enterprise Linux 3

2004-12-06 Thread Tom Lane
Neil Conway [EMAIL PROTECTED] writes:
 On Mon, 2004-12-06 at 22:19 -0300, Alvaro Herrera wrote:
 AFAIK this is indeed the case with hashed aggregation, which uses the
 sort_mem (work_mem) parameter to control its operation, but for which it
 is not a hard limit.

 Hmmm -- I knew we didn't implement disk-spilling for hashed aggregation,
 but I thought we had _some_ sane means to avoid consuming a lot of
 memory if we got the plan completely wrong.

The *sort* code is fairly good about respecting sort_mem.  The *hash*
code is not so good.

 We definitely ought to fix this.

Bear in mind that the price of honoring sort_mem carefully is
considerably far from zero.  (Or, if you know how to do it cheaply,
let's see it ...)

The issue with the hash code is that it sets size parameters on the
basis of the estimated input row count; the memory usage error factor
is basically inversely proportional to the error in the planner's row
estimate.  The seriously bad cases I've seen reported were directly
due to horribly-out-of-date planner table size estimates.  A large part
of the rationale for applying that last-minute 8.0 change in relpages/
reltuples handling was to try to suppress the worst cases in hashtable
size estimation.
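A toy illustration of the scaling Tom describes (plain Python, not backend code; the per-entry byte count is an assumed constant): when the hash table is sized from the planner's row estimate, the memory overshoot factor ends up equal to the row-estimate error factor.

```python
# Toy model, not PostgreSQL source: if the executor sizes its hash table
# from the planner's estimated row count, then an estimate that is off by
# a factor of k makes actual memory use overshoot the plan by that same k.

BYTES_PER_ENTRY = 64  # assumed fixed per-entry overhead

def hash_memory(rows):
    """Memory needed for a hash table holding `rows` entries."""
    return rows * BYTES_PER_ENTRY

estimated_rows = 10_000     # stale reltuples says 10k rows...
actual_rows = 1_000_000     # ...but the table really has 1M

planned = hash_memory(estimated_rows)  # what the planner budgeted
used = hash_memory(actual_rows)        # what the executor consumes
print(used // planned)                 # overshoot factor: 100
```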

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] hooks for supporting third party blobs?

2004-12-06 Thread Tom Lane
Eric Davies [EMAIL PROTECTED] writes:
 A recent project of ours involved storing/fetching some reasonably large 
 datasets in a home-brew datatype.  The datasets tended to range from a few 
 megabytes, to several gigabytes. We were seeing some nonlinear slowness 
 with using native large objects with larger datasets, presumably due to the 
 increasing depth of the btree index used to track all the little pieces of 
 the blobs.

Did you do any profiling to back up that "presumably"?  It seems at
least as likely to me that this was caused by some easily-fixed
inefficiency somewhere.  There are still a lot of O(N^2) algorithms
in the backend that no one has run up against yet ...

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Performance tuning on RedHat Enterprise Linux 3

2004-12-06 Thread Tom Lane
Paul Tillotson [EMAIL PROTECTED] writes:
 Does postgres actually do multiple concurrent sorts within a single 
 backend?

Certainly.  Consider for example a merge join with each input being
sorted by an explicit sort step.  DISTINCT, ORDER BY, UNION, and related
operators require their own sort steps in the current implementation.
It's not difficult to invent queries that require arbitrarily large
numbers of sort steps.
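To make the point concrete, here is a hypothetical sketch (illustrative Python, not the executor's actual code) of a merge join whose two inputs each need their own explicit sort step, so two sorts are live at the same time inside a single query plan:

```python
# Sketch (not PostgreSQL source): a merge join over two unsorted inputs.
# Each input gets its own explicit sort step, so both sorts exist
# concurrently within one backend -- each entitled to its own sort_mem.

def merge_join(left, right, key=lambda row: row[0]):
    """Join two unsorted inputs on equal keys via explicit sort steps."""
    left_sorted = sorted(left, key=key)    # sort step #1
    right_sorted = sorted(right, key=key)  # sort step #2
    out, i, j = [], 0, 0
    while i < len(left_sorted) and j < len(right_sorted):
        lk, rk = key(left_sorted[i]), key(right_sorted[j])
        if lk < rk:
            i += 1
        elif lk > rk:
            j += 1
        else:
            # emit all right-side pairings for this key value
            j0 = j
            while j0 < len(right_sorted) and key(right_sorted[j0]) == lk:
                out.append(left_sorted[i] + right_sorted[j0])
                j0 += 1
            i += 1
    return out

print(merge_join([(2, 'b'), (1, 'a')], [(2, 'x'), (3, 'y')]))
# [(2, 'b', 2, 'x')]
```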

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Performance tuning on RedHat Enterprise Linux 3

2004-12-06 Thread Neil Conway
On Mon, 2004-12-06 at 23:55 -0500, Tom Lane wrote:
 Bear in mind that the price of honoring sort_mem carefully is
 considerably far from zero.

I'll do some thinking about disk-based spilling for hashed aggregation
for 8.1.

 The issue with the hash code is that it sets size parameters on the
 basis of the estimated input row count; the memory usage error factor
 is basically inversely proportional to the error in the planner's row
 estimate.

Right. But I don't think it's acceptable to consume an arbitrary amount
of memory to process a query, even if we only do that when the planner
makes a mistake (regrettably, planner mistakes occur with some
regularity).

As a quick hack, what about throwing away the constructed hash table and
switching to sort-based aggregation if we exceed sort_mem by a significant
factor (say, 200%)? We might also want to print a warning message to the
logs.

This assumes that aggregation-by-sorting can be used in a superset of
the cases where aggregation-by-hashing can be used, and that the
semantics of both nodes are the same; I believe both conditions hold.
And of course, performance will likely suck -- but (a) since the planner
has guessed wrong performance is probably going to suck anyway (b) it is
better than running the machine OOM.
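A rough sketch of the fallback being proposed (illustrative Python only; `work_mem_bytes` and the overshoot factor are made-up parameters, and a real executor would rescan its input rather than hold it in a list):

```python
# Sketch, not backend code: hash-aggregate (here, counting keys) until
# memory passes a multiple of the work_mem budget, then abandon the hash
# table and redo the aggregation by sorting the input instead.

import sys
from itertools import groupby

def aggregate_counts(rows, work_mem_bytes, overshoot_factor=2.0):
    """Count occurrences of each key, falling back from hashing to
    sorting when the hash table exceeds the memory budget."""
    table = {}
    limit = work_mem_bytes * overshoot_factor
    for key in rows:
        table[key] = table.get(key, 0) + 1
        if sys.getsizeof(table) > limit:
            # Budget blown: discard the hash table and aggregate by
            # sorting instead (a warning would be logged here).
            return {k: sum(1 for _ in grp)
                    for k, grp in groupby(sorted(rows))}
    return table

print(aggregate_counts(['a', 'b', 'a'], work_mem_bytes=1 << 20))
# {'a': 2, 'b': 1}
```

Both paths produce the same result, which is the superset property the message relies on: anything hashable here is also sortable, and the grouped sums match the hashed counts.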

-Neil



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] migrating from informix

2004-12-06 Thread Jaime Casanova
Hi,

Has someone successfully migrated a database from
Informix to PostgreSQL?

Are there any tools that help, or maybe a script?

regards,
Jaime Casanova

_
Do You Yahoo!?
Información de Estados Unidos y América Latina, en Yahoo! Noticias.
Visítanos en http://noticias.espanol.yahoo.com

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Performance tuning on RedHat Enterprise Linux 3

2004-12-06 Thread Tom Lane
Neil Conway [EMAIL PROTECTED] writes:
 As a quick hack, what about throwing away the constructed hash table and
 switching to hashing for sorting if we exceed sort_mem by a significant
 factor? (say, 200%) We might also want to print a warning message to the
 logs.

If I thought that a 200% error in memory usage were cause for a Chinese
fire drill, then I'd say yeah, let's do that.  The problem is that the
place where performance actually goes into the toilet is normally an
order of magnitude or two above the nominal sort_mem setting (for
obvious reasons: admins can't afford to push the envelope on sort_mem
because of the various unpredictable multiples that may apply).  So
switching to a hugely more expensive implementation as soon as we exceed
some arbitrary limit is likely to be a net loss not a win.

If you can think of a spill methodology that has a gentle degradation
curve, then I'm all for that.  But I doubt there are any quick-hack
improvements to be had here.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Triggers don't activate when dropping table

2004-12-06 Thread cheng shan
Note: forwarded message attached.
---BeginMessage---



The reason I drop the table directly is this:

I am developing a graphical information system. There are many tables that inherit from "object" in the system, such as "map", "line", "rect", "circle", etc. Most of the time I use the tables that inherit from "object" instead of the table "object" itself.

I group the "objects" in such manners as: a "map" contains the drawable objects, such as "rect" and "circle". And some "object"s may consist of other "objects"; for example, a "rect" is assembled from four "line"s. Besides, there are also many relationships between "objects".

Our principle is to construct a model that reflects the associations among objects. We use the table "relationship", and tables that inherit from it, to record the OIDs of the objects that participate in each "relationship". Also, every "object" has a field that records the "relationship"s it participates in. To maintain the above constraints automatically, I define many triggers on every "object" table and every "relationship" table.

To make the system extensible, we provide an interface make_object_type() to permit the user to create an "object type" that inherits from "object". And if the user thinks an existing table is inappropriate, he may drop it using the function destroy_object_type(), if he has sufficient privilege.

In the first version, the function destroy_object_type() just dropped the table. But when I found the triggers hadn't been activated, I added the DELETE statement. This time it became even worse: the system throws the error message "ERROR: could not open relation with OID 1390714".

To avoid the fatal error, I have no choice but to roll back the function to the original version.

I have written a simplified test case to verify the system, but it performs as I expected, and it's wrong.

Could you help me?
Thanks!

Michael Fuhr [EMAIL PROTECTED] wrote:

On Tue, Dec 07, 2004 at 11:44:58AM +0800, cheng shan wrote:
 I have one table named "object" and many tables that are related to
 the table "object". When a record in "object" is deleted, the relative
 records in the other tables should also be erased. To implement the
 above goal, I define many triggers on table "object" to make the
 operations automatic. But I find that the triggers don't activate when
 I drop the table "object". So much garbage information is left in the
 system, which makes the system unstable.

If you want to fire the triggers then delete all records from the
table before dropping it. But why are you dropping the table?

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match
/
Function Description:
_test_setup(): Initialize the environment
_test_teardown(): Clean up
_test_object_insert(): Insert records into object
_test_object_update(): Update records of object
_test_object_delete(): Delete records from object
_test_containment_insert(): Insert records into containment
_test_containment_delete(): Delete records from containment
_trigger_update_containment(): The trigger on table object 
that is used to update table containment
_trigger_update_object(): The trigger on table containment 
that is used to update table object
_test_delete_and_drop(): Delete records from object and drop table 
object

Notice the !!

Usage:
Before running any test, you should reload test.sql using the command 
\i test.sql, because the functions are analysed and cached once they run. 
If the table was deleted, there will be errors.
See the results using the commands SELECT * FROM object; and 
SELECT * FROM containment; to examine the records in table object

Test inserting records into table object:
\i test.sql;
SELECT _test_setup();
SELECT _test_object_insert();
SELECT * FROM object;
SELECT * FROM containment;

Test updating table object:
\i test.sql;
SELECT _test_setup();
SELECT _test_object_insert();
SELECT _test_object_update();
SELECT * FROM object;
SELECT * FROM containment;

Test deleting records from object:
\i test.sql;
SELECT _test_setup();
SELECT _test_object_insert();
SELECT _test_object_delete();
SELECT * FROM object;
SELECT * FROM containment;

Test inserting records into table containment:
\i test.sql;
SELECT _test_setup();
SELECT _test_object_insert();
SELECT _test_containment_insert();
SELECT * FROM object;
SELECT * FROM containment;

Test deleting records from table 

[GENERAL] Index on geometry and timestamp

2004-12-06 Thread Werdin Jens
Hi,
I need an index on a PostGIS point and a timestamp.
I'm using a GiST index on the geometry, but creating a GiST index on both the geometry and the timestamp seems to be impossible, because GiST doesn't support timestamps.

Is there a possibility to solve my problem?


Greetings
Jens





[GENERAL] Drop table

2004-12-06 Thread Nageshwar Rao
Not able to drop a table, though nobody is accessing it. I am able to
insert records and delete records. When I issue DROP TABLE it just hangs
there. No error message.
Any specific reasons?
thx
thx