Hi,
please see my comments inline.
Tom Lane wrote:
> Julius Stroffek <[EMAIL PROTECTED]> writes:
> > Attached is the example script 'repro.sql' which creates two relations
> > tab1 and tab2. It also creates a rule on tab1 which simply does insert
> > into tab2. The insert statement into tab1 is executed afterwards. It
> > responds with 'INSERT 0 1'. However if I would create the same rule with
> > the where clause the response to the same insert statement is 'INSERT 0
> > 0'. The output of the script executed through psql is in 'repro.out'.
> > Is this a bug?
>
> No. See
> http://www.postgresql.org/docs/8.2/static/rules-status.html
> and note that you don't have an unconditional INSTEAD rule.
I explored this and agree that the current PostgreSQL behavior as
described in the above link is correct.
However, this behavior is a serious issue when using Java Persistence
through Hibernate (and probably through other providers as well). I
created a simple application running on GlassFish that just inserts
records into the tables through Hibernate, with PostgreSQL as the
database. If I used partitioning of the tables, all the insert
transactions would be marked for rollback and rolled back.

After digging through some very ugly GlassFish code, trying to make it
report the root cause of why the transactions are marked for rollback,
I discovered that Hibernate uses the Statement.executeBatch method to
execute the SQL statements; that method is supposed to return the
number of rows affected by each of the passed statements. Hibernate
then compares the values returned by this method with the number of
records it passed to the batch. The number of affected rows is
determined in the PostgreSQL JDBC driver, in the
QueryExecutorImpl.interpretCommandStatus method, by parsing the command
status string returned by the server.
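To illustrate where the count comes from, here is a minimal sketch of parsing a command status tag such as 'INSERT 0 1'. The class and method names are my own for illustration; the actual driver code in interpretCommandStatus differs, but the parsing idea is the same: the last token of the tag is the affected-row count.

```java
public class CommandStatus {
    // Returns the affected-row count from command tags like "INSERT 0 1",
    // "UPDATE 3", or "DELETE 0". Returns -1 when the tag carries no count
    // (e.g. "CREATE TABLE"). Helper name is hypothetical, not driver API.
    static long parseUpdateCount(String tag) {
        String[] parts = tag.trim().split("\\s+");
        try {
            // The row count is the last space-separated token of the tag.
            return Long.parseLong(parts[parts.length - 1]);
        } catch (NumberFormatException e) {
            return -1;
        }
    }
}
```

With a conditional INSTEAD rule the server reports 'INSERT 0 0' for the original statement, so this parsing yields 0 even though a row was inserted into tab2.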
The JDBC javadoc at
http://java.sun.com/j2se/1.5.0/docs/api/java/sql/Statement.html#executeBatch()
describes this behavior and allows Statement.SUCCESS_NO_INFO to be
returned when the number of affected rows is unknown. A more severe
issue is that the JDBC spec also requires a set of
Statement.executeUpdate methods which are forced to return the number
of rows affected and cannot return Statement.SUCCESS_NO_INFO.
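A sketch of the check a caller like Hibernate performs on the executeBatch result, assuming each batched statement is expected to affect exactly one row. A count of 0 (as produced by the conditional rule) fails the check, while SUCCESS_NO_INFO would be accepted as "unknown but not an error":

```java
import java.sql.Statement;

public class BatchCheck {
    // Returns true if every statement in the batch either reported the
    // expected one-row count or reported no count at all
    // (Statement.SUCCESS_NO_INFO). This mirrors the kind of verification
    // an ORM does; it is not Hibernate's actual code.
    static boolean batchSucceeded(int[] updateCounts) {
        for (int count : updateCounts) {
            if (count != 1 && count != Statement.SUCCESS_NO_INFO) {
                return false;
            }
        }
        return true;
    }
}
```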
Any thoughts on how to deal with this issue? Has there already been a
discussion about this in the JDBC driver team?

Only one option comes to my mind - always return
Statement.SUCCESS_NO_INFO from executeBatch (or possibly only when some
Java property is set). I cannot see any simple solution for
Statement.executeUpdate, since the number of rows affected may differ
depending on the rules and might also be difficult to calculate.
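The property-gated variant of that option could look roughly like the sketch below. The property name and helper are entirely hypothetical; nothing like this exists in the driver today:

```java
import java.sql.Statement;

public class CountPolicy {
    // Hypothetical driver property; the real driver defines no such flag.
    static final String SUPPRESS_COUNTS_PROP = "suppressUpdateCounts";

    // What executeBatch would report for one statement under the proposed
    // behavior: the parsed count normally, or SUCCESS_NO_INFO when the
    // application has opted out of per-statement counts.
    static int reportCount(int actualCount, boolean suppressCounts) {
        return suppressCounts ? Statement.SUCCESS_NO_INFO : actualCount;
    }
}
```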
Thanks
Cheers
Julo
repro.sql:

create table tab1 (id int);
create table tab2 (id int);
create rule tab1_insert as on insert to tab1 do instead insert into tab2 values (new.id);
insert into tab1 values (100);
select * from tab2;
drop rule tab1_insert on tab1;
create rule tab1_insert as on insert to tab1 where 1<2 do instead insert into tab2 values (new.id);
delete from tab2;
select * from tab2;
insert into tab1 values (100);
select * from tab2;
drop rule tab1_insert on tab1;
drop table tab2;
drop table tab1;
repro.out:

CREATE TABLE
CREATE TABLE
CREATE RULE
INSERT 0 1
id
-----
100
(1 row)
DROP RULE
CREATE RULE
DELETE 1
id
----
(0 rows)
INSERT 0 0
id
-----
100
(1 row)
DROP RULE
DROP TABLE
DROP TABLE