[ https://issues.apache.org/jira/browse/DERBY-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12873126#action_12873126 ]

Rick Hillegas commented on DERBY-1482:
--------------------------------------

Hi Mike,

The handling of Java objects stored in system tables was changed as part of the 
UDT work. For details, see DERBY-4491. That issue brought the client behavior 
into agreement with the embedded behavior and with the JDBC spec.

The Reference Guide already says that Derby-specific objects in the catalogs 
are not part of our public API. This disclaimer is tacked onto the description 
of every catalog column that contains Derby-specific objects. That means that 
those objects can change shape and behavior, and that we make no guarantees of 
cross-version compatibility. I don't think it's necessary to punch up the 
disclaimer, but it wouldn't hurt.

Beyond that explicit disclaimer, we have never claimed that the catalogs will 
retain their current shape. We have always reserved the right to add, delete, 
and modify catalog columns. Maybe we haven't made that clear enough to users. 
However, we do say the following in the Reference Guide section titled "Derby 
system tables":

"You can query system tables, but you cannot alter them...The recommended way 
to get more information about these tables is to use an instance of the Java 
interface java.sql.DatabaseMetaData."
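
For illustration, a metadata lookup like the following sketch stays within the 
supported java.sql.DatabaseMetaData API (the connection URL and the table name 
APP.T1 are just placeholders, not from this issue):

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class MetaDataLookup {
        public static void main(String[] args) throws Exception {
            // Placeholder embedded connection URL.
            Connection conn = DriverManager.getConnection("jdbc:derby:sampleDB");
            DatabaseMetaData dmd = conn.getMetaData();

            // Supported, version-stable way to inspect the columns of a table,
            // rather than selecting from SYS.SYSCOLUMNS directly.
            ResultSet rs = dmd.getColumns(null, "APP", "T1", null);
            while (rs.next()) {
                System.out.println(rs.getString("COLUMN_NAME")
                        + " " + rs.getString("TYPE_NAME"));
            }
            rs.close();
            conn.close();
        }
    }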

We let users query the catalogs because JDBC metadata is not rich enough for 
portability layers to introspect the capabilities of SQL databases, and because 
Derby has not implemented the SQL Standard information schema.

Because we do let users query the catalogs, I think that we should include a 
10.7 release note warning users about the compatibility issues with 
SYSTRIGGERS.REFERENCEDCOLUMNS.
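
For context, the catalog access pattern the release note would need to flag 
looks roughly like this sketch (the connection URL is a placeholder; exactly 
what comes back for REFERENCEDCOLUMNS depends on the driver and release, which 
is the compatibility concern raised by DERBY-4491):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SysTriggersQuery {
        public static void main(String[] args) throws Exception {
            // Placeholder connection URL.
            Connection conn = DriverManager.getConnection("jdbc:derby:sampleDB");
            Statement s = conn.createStatement();

            // REFERENCEDCOLUMNS holds a Derby-specific object that is not part
            // of the public API; its class and string form can differ across
            // versions and between the embedded and client drivers.
            ResultSet rs = s.executeQuery(
                    "SELECT TRIGGERNAME, REFERENCEDCOLUMNS FROM SYS.SYSTRIGGERS");
            while (rs.next()) {
                Object refCols = rs.getObject(2);
                System.out.println(rs.getString(1) + ": " + refCols);
            }
            rs.close();
            conn.close();
        }
    }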

Thanks,
-Rick


> Update triggers on tables with blob columns stream blobs into memory even 
> when the blobs are not referenced/accessed.
> ---------------------------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-1482
>                 URL: https://issues.apache.org/jira/browse/DERBY-1482
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 10.2.1.6
>            Reporter: Daniel John Debrunner
>            Assignee: Mamta A. Satoor
>            Priority: Minor
>         Attachments: derby1482_patch1_diff.txt, derby1482_patch1_stat.txt, 
> derby1482_patch2_diff.txt, derby1482_patch2_stat.txt, 
> derby1482_patch3_diff.txt, derby1482_patch3_stat.txt, 
> derby1482DeepCopyAfterTriggerOnLobColumn.java, derby1482Repro.java, 
> derby1482ReproVersion2.java, junitUpgradeTestFailureWithPatch1.out, 
> TriggerTests_ver1_diff.txt, TriggerTests_ver1_stat.txt
>
>
> Suppose I have 1) a table "t1" with blob data in it, and 2) an UPDATE trigger 
> "tr1" defined on that table, where the triggered-SQL-action for "tr1" does 
> NOT reference any of the blob columns in the table. [ Note that this is 
> different from DERBY-438 because DERBY-438 deals with triggers that _do_ 
> reference the blob column(s), whereas this issue deals with triggers that do 
> _not_ reference the blob columns--but I think they're related, so I'm 
> creating this as a subtask of DERBY-438 ]. In such a case, if the trigger is 
> fired, 
> the blob data will be streamed into memory and thus consume JVM heap, even 
> though it (the blob data) is never actually referenced/accessed by the 
> trigger statement.
> For example, suppose we have the following DDL:
>     create table t1 (id int, status smallint, bl blob(2G));
>     create table t2 (id int, updated int default 0);
>     create trigger tr1 after update of status on t1
>         referencing new as n_row for each row mode db2sql
>         update t2 set updated = updated + 1 where t2.id = n_row.id;
> Then if t1 and t2 both have data and we make a call to:
>     update t1 set status = 3;
> the trigger tr1 will fire, which will cause the blob column in t1 to be 
> streamed into memory for each row affected by the trigger. The result is 
> that, if the blob data is large, we end up using a lot of JVM memory when we 
> really shouldn't have to (at least, in _theory_ we shouldn't have to...).
> Ideally, Derby could figure out whether or not the blob column is referenced, 
> and avoid streaming the lob into memory whenever possible (hence this is 
> probably more of an "enhancement" request than a bug)... 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
