[ 
https://issues.apache.org/jira/browse/DERBY-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13009350#comment-13009350
 ] 

Mamta A. Satoor commented on DERBY-5121:
----------------------------------------

I just wanted to summarize the problem here to make it easier to understand the 
issue and the solution I am working on.

Some background information: with DERBY-1482, we decided to read only the
columns really needed during trigger execution. A trigger only knows about its
trigger columns and the columns in its trigger action used through the
REFERENCING clause. DERBY-1482 therefore assumed that those would be the only
columns read from the trigger table, and when we generate the internal SQL for
the trigger action, it uses the relative column positions in that resultset to
access the columns in the trigger action. The problem is that there can be
cases where the SQL causing the trigger to fire needs to read more columns
than just the ones the trigger needs. e.g.
create table t1( id int, name varchar( 50 ) );
create table t2
(
        name    varchar( 50 )   not null,
        description     int     not null,
        id      integer
);
insert into t2( name, description ) values ( 'Foo Name', 0 );

create trigger t2UpdateTrigger
after UPDATE of name
on t2
referencing 
new row as nr
for each ROW
insert into t1 values ( nr.id, nr.name );

The trigger above only needs columns "name" and "id" from the trigger table,
and hence it assumes that the runtime resultset will have just those 2 columns:
"name" as the first column in the resultset and "id" as the second. Using that
assumption, it rewrites the trigger action sql "insert into t1 values ( nr.id,
nr.name )" to the following:
insert into t1 values (
CAST(org.apache.derby.iapi.db.Factory::getTriggerExecutionContext().getNewRow().getObject(2) AS INTEGER) ,
CAST(org.apache.derby.iapi.db.Factory::getTriggerExecutionContext().getNewRow().getObject(1) AS VARCHAR(50)) )

But the following triggering sql needs more columns than just "name" and "id";
it also needs "description". (The columns required by the triggering sql are
the columns used by the triggering sql itself plus the columns needed by the
triggers which will get fired.)
update t2 set name = 'Another name' , description = 1;
So the runtime resultset ends up with columns "name", "description" and "id".
That makes "id" the 3rd column in the resultset, not the 2nd column as
expected by the trigger.
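The position mismatch can be illustrated with a small stand-alone sketch
(illustrative Java only, not actual Derby code; the column lists are taken
from the example above):

```java
import java.util.Arrays;
import java.util.List;

public class TriggerColumnMismatch {
    public static void main(String[] args) {
        // Columns the trigger assumed the runtime resultset would contain
        List<String> assumedColumns = Arrays.asList("name", "id");
        // Columns the triggering UPDATE actually reads: it also needs "description"
        List<String> actualColumns = Arrays.asList("name", "description", "id");

        // The generated trigger action reads nr.id via getObject(2),
        // i.e. the position of "id" in the assumed layout...
        int assumedIdPosition = assumedColumns.indexOf("id") + 1; // 2
        // ...but in the actual runtime resultset "id" sits at position 3
        int actualIdPosition = actualColumns.indexOf("id") + 1;   // 3

        System.out.println("assumed position of id: " + assumedIdPosition);
        System.out.println("actual position of id:  " + actualIdPosition);
        // Reading position 2 from the actual resultset returns "description",
        // which is how the wrong value (or a type error such as XCL12,
        // as in this issue's report) gets into the trigger action.
        System.out.println("column read at position 2: "
                + actualColumns.get(assumedIdPosition - 1));
    }
}
```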

The solution I am working on is to see if we can map only the columns needed
by the trigger out of the actual runtime resultset created by the triggering
sql.
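In outline, such a mapping could look like the following (a hypothetical
helper, not actual Derby code: it takes the column names the trigger needs
and resolves each one to its 1-based position in whatever resultset the
triggering statement actually built):

```java
import java.util.HashMap;
import java.util.Map;

public class TriggerColumnMapping {
    /**
     * Map each column name needed by the trigger to its 1-based position in
     * the runtime resultset created by the triggering statement.
     * Hypothetical sketch of the approach described above.
     */
    static Map<String, Integer> mapTriggerColumns(
            String[] triggerColumns, String[] runtimeColumns) {
        Map<String, Integer> positions = new HashMap<>();
        for (String needed : triggerColumns) {
            for (int i = 0; i < runtimeColumns.length; i++) {
                if (runtimeColumns[i].equals(needed)) {
                    positions.put(needed, i + 1); // resultset positions are 1-based
                    break;
                }
            }
        }
        return positions;
    }

    public static void main(String[] args) {
        // Columns the trigger action references through "new row as nr"
        String[] triggerColumns = { "id", "name" };
        // Columns the triggering UPDATE actually placed in the resultset
        String[] runtimeColumns = { "name", "description", "id" };

        Map<String, Integer> positions =
                mapTriggerColumns(triggerColumns, runtimeColumns);
        // The generated getNewRow().getObject(...) calls would then use
        // these positions instead of the trigger's own relative positions:
        System.out.println("nr.id   -> getObject(" + positions.get("id") + ")");
        System.out.println("nr.name -> getObject(" + positions.get("name") + ")");
    }
}
```

With the example tables above this resolves nr.id to getObject(3) and nr.name
to getObject(1), instead of the getObject(2)/getObject(1) pair in the
generated sql shown earlier.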


> Data corruption when executing an UPDATE trigger
> ------------------------------------------------
>
>                 Key: DERBY-5121
>                 URL: https://issues.apache.org/jira/browse/DERBY-5121
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 10.7.1.1, 10.8.0.0
>            Reporter: Rick Hillegas
>            Assignee: Mamta A. Satoor
>              Labels: derby_triage10_8
>         Attachments: DummyProc.java, Test_5121.java, 
> derby5121_patch1_diff.txt, triggerBug.sql, triggerBug.sql, triggeredBug2.sql, 
> triggeredCorruption.sql
>
>
> When executing an UPDATE trigger, the following error is raised. I will 
> attach a test case:
> ERROR XCL12: An attempt was made to put a data value of type 
> 'org.apache.derby.impl.jdbc.EmbedClob' into a data value of type 'INTEGER'.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
