Re: [GENERAL] Data corruption problem...

2006-12-02 Thread Ragnar
I haven't seen any replies to this, so I will reply,
although I am not a J2EE/JDBC person.

On fim, 2006-11-30 at 14:14 -0700, Nathan Wilhelmi wrote:

 Server: 8.0.3 on Solaris 9
 JDBC Driver: 8.0.311
...
 delete from X where id in (select bad_id from Z where name='qwerty');
 delete from Y where id in (select bad_id from Z where name='qwerty');
 delete from Z where name='qwerty';
...
 Now we have a J2EE (Tomcat-hosted) app that accesses this database via a 
 connection pool / JDBC driver.

tomcat version ?

 1) Start the J2EE app and do some work to access the database, although 
 I don't think this matters.
 2) Shut down the app server; as a result, the connection pool starts to 
 shut down.
 3) Once the connection pool has started to shut down, execute the 
 statements listed above from PGAdmin.
 4) PGAdmin will hang for a second, as if waiting for locks.
 5) The statement will return normally, reporting the rows affected by 
 the last statement.
 6) Tables X and Y are now empty. This is not good.
 ...
 Has anyone seen this before, any suggestions how to debug 
 this or where to start looking?

I would start by enabling statement logging, to see
exactly what SQL the server is executing during the
scenario.
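
For example (a minimal sketch, assuming superuser access; "mydb" is a
placeholder for the actual database name):

-- log_statement in 8.0 accepts: none, ddl, mod, all
alter database mydb set log_statement = 'all';
-- new sessions in mydb (including the pool's) will then have every
-- statement written to the server log; existing sessions need to
-- reconnect to pick the setting up

The server log should then show exactly what arrives during the pool
shutdown.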

gnari



[GENERAL] Data corruption problem...

2006-12-01 Thread Nathan Wilhelmi
Hello - We ran into an interesting problem last week: data was 
unexpectedly deleted. Has anyone else experienced this, and does anyone 
have suggestions on how to prevent it in the future?


Here is the environment:

Server: 8.0.3 on Solaris 9
JDBC Driver: 8.0.311

This is the series of statements that causes the problem, but only 
under the condition described below.


delete from X where id in (select bad_id from Z where name='qwerty');
delete from Y where id in (select bad_id from Z where name='qwerty');
delete from Z where name='qwerty';

The first 2 statements contain an error: column bad_id does not exist in 
table Z. If I run this normally from PGAdmin, the first 2 silently fail 
and the last succeeds. In this case tables X and Y are untouched. An 
error message would be nice, but the data is OK; so far so good.
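
(A defensive sketch, for what it's worth: qualifying every column in
the subquery guarantees that a missing column raises an error, since an
unqualified name that doesn't exist in the subquery's tables can
legally resolve against the outer query instead.)

delete from X where id in (select z.bad_id from Z z where z.name='qwerty');
-- fails immediately: column z.bad_id does not exist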


Now we have a J2EE (Tomcat-hosted) app that accesses this database via a 
connection pool / JDBC driver. The problem scenario is as follows:


1) Start the J2EE app and do some work to access the database, although 
I don't think this matters.
2) Shut down the app server; as a result, the connection pool starts to 
shut down.
3) Once the connection pool has started to shut down, execute the 
statements listed above from PGAdmin.

4) PGAdmin will hang for a second, as if waiting for locks.
5) The statement will return normally, reporting the rows affected by 
the last statement.

6) Tables X and Y are now empty. This is not good.

Now, if I fix the first 2 statements, the problem doesn't happen; I can 
execute the batch during shutdown and the data is OK. Obviously I would 
prefer not to have cases where executing a bad SQL statement empties a 
whole table. Has anyone seen this before? Any suggestions on how to 
debug this, or where to start looking?
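
One possible safeguard in the meantime (just a sketch, not a diagnosis
of the bug): run the three deletes as a single explicit transaction, so
that any error aborts the whole batch and nothing is committed.

begin;
delete from X where id in (select z.bad_id from Z z where z.name='qwerty');
delete from Y where id in (select z.bad_id from Z z where z.name='qwerty');
delete from Z where name='qwerty';
commit;
-- if any statement errors, PostgreSQL marks the transaction aborted
-- and the final commit is effectively a rollback; no deletes survive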


Thanks!!!

-Nate