Hello Alexander,

Thank you, that's what I suspected, and it helps me understand what happens. (Of course we can implement such a workaround.)

But as I understand it, this would only affect performance when LONG data is written. Is that correct?

So if statement 'A' writes LONG data and statement 'B' is any other write operation (without LONG data), I think the following can happen:

        Thread 1:                       Thread 2:
A.executeUpdate (start)*
  ...
  ...                                     B.executeUpdate();
  ....
A.executeUpdate (end)*

(*a single statement that takes longer to execute than statement B)

Since the driver releases the lock on the Connection object between start and end, other threads can execute statements over the same connection without affecting statement A, as long as they do not touch the same database table [or perhaps the same row of that table].

But if B.executeUpdate() writes to the same table [or the same row of that table], this could lead to the problem I posted.

Is my understanding correct? Do you always synchronize on the Connection object?


If so, there would perhaps be a more efficient way to synchronize, so that Thread 2 can still execute updates as long as we know they cannot interfere with statement A's executeUpdate (in the example above).

If my understanding is correct, I can perhaps propose a way to synchronize this (just as an idea; of course I don't know the details of the driver implementation, so I don't know whether my proposal would be the best way).
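One possible shape for such a finer-grained scheme (purely my sketch, with invented names, not the driver's actual implementation): use a read/write lock on top of the existing per-packet synchronization, where ordinary statements take the shared lock and statements writing LONG data take the exclusive lock. Plain updates could then still overlap with each other, but never with a multi-packet LONG transfer.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: ordinary statements take the shared (read) lock,
// LONG-writing statements the exclusive (write) lock. The Runnable stands
// in for the actual statement execution so the example runs without a
// database.
public class LongAwareLock {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Execute a statement that does not write LONG data; may overlap
    // with other short statements, but not with a LONG transfer.
    public void executeShort(Runnable statement) {
        lock.readLock().lock();
        try {
            statement.run();
        } finally {
            lock.readLock().unlock();
        }
    }

    // Execute a statement that writes LONG data; excludes all other
    // statements on this connection for its full duration.
    public void executeLong(Runnable statement) {
        lock.writeLock().lock();
        try {
            statement.run();
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

The trade-off is that the driver would have to know up front whether a statement involves LONG parameters, and short statements still pay a (cheap, uncontended) read-lock acquisition.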

Regards,
Gabriel Matter
Invoca Systems

Schroeder, Alexander wrote:
Hello Gabriel,

writing LONG data creates an internal subtransaction in the database, which may be corrupted by other commands sent on the same connection while the LONG data is being transferred in several packets.

The only workaround in the driver would be to synchronize the full execution of the statement, and not only the single request/reply operation. We *may* do this, but we have to check the performance implications before making such a change, as it would burden every execute/executeBatch with a synchronized block on the connection, and not very many people use a session from two threads at the same time.

A workaround on your side would be to wrap calls to the executeXYZ methods in a synchronized block on the connection, that is, instead of

statement.execute();

use

synchronized(statement.getConnection()) {
    statement.execute();
}
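To illustrate the workaround in a two-thread setting (my own sketch, not driver code): a plain Object stands in for the shared java.sql.Connection so the example runs without a database; with the real driver you would synchronize on statement.getConnection() exactly as shown above.

```java
// Sketch: two threads share one "connection" and serialize the full
// execution of each statement on its monitor, so no statement can
// interleave with another mid-execution.
public class SyncedExecuteDemo {
    public static int run() throws InterruptedException {
        final Object connection = new Object(); // stand-in for the shared Connection
        final int[] executed = {0};

        Runnable worker = () -> {
            for (int i = 0; i < 1000; i++) {
                synchronized (connection) { // full execution is serialized
                    executed[0]++;          // stand-in for statement.execute()
                }
            }
        };

        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return executed[0]; // 2000 if no update was lost to a race
    }
}
```

The key point is that both threads must synchronize on the same Connection object; locking on the Statement objects would not serialize the two executions against each other.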

Regards
Alexander Schröder
SAP DB, SAP Labs Berlin

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 22 November 2006 11:41
To: maxdb@lists.mysql.com; Schroeder, Alexander
Cc: Paskamp, Marco
Subject: Re: [-3102]: Invalid subtrans structure / update / column of type LONG / JDBC

Hello,

By the way: I guess that the reason for this error is the same as the one I reported yesterday (thread: Re: Strange: Error -7065 SAP DBTech SQL: [-7065] SUBTRANS COMMIT/ROLLBACK).

I did not yet have a look at the implementation for these methods:

PreparedStatement prep = ...;
prep.setBytes(int parameterIndex, byte x[]);
prep.executeUpdate();

But my guess is that writing the byte[] to the database is somehow handled asynchronously, but not in a thread-safe way.

If I can provide code or a JDBC trace to reproduce the problem, I'll do so, of course. Unfortunately I failed to reproduce the problem by extracting the portion of code that should produce the error (perhaps because it depends on the timing between the calls of these statements).

Best regards,
Gabriel Matter

Schroeder, Alexander wrote:
Hello,

could you run the example with a JDBC trace? (And supply a definition of the
table that is affected?)

Regards
Alexander Schröder
SAP DB, SAP Labs Berlin

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Thursday, 4 May 2006 16:05
To: maxdb@lists.mysql.com
Subject: [-3102]: Invalid subtrans structure / update / column of type LONG / JDBC

Hello,

I suspect there is a bug that causes the above exception (error message "[-3102]: Invalid subtrans structure") when a row containing a column of type LONG is updated. But it seems this error is thrown only under certain conditions:

- The exception occurs only if the value of the LONG column is NOT NULL.
- The exception is thrown only when using a certain sequence of statements.

In our example the exception is thrown using the following sequence:

SELECT: we read the last row's primary key of that table
INSERT: we insert the new row containing some data for the column of type LONG [prep.setBytes(byte[])]
UPDATE: we update the inserted row, with all non-key columns being updated (although the byte[] content remains the same)

-> If we place another SELECT statement between the above INSERT and UPDATE statements, one that reads the inserted row from the database, the exception is *not* thrown.

-> Although the exception is thrown, the update is performed correctly in our example.

We use JDBC driver 7.6.0 Build 012-000-004-339 on MaxDB Kernel 7.5.0 Build 034-121-118-234

I found a similar problem posted earlier, and possibly the same problem was the cause of this report: http://lists.mysql.org/maxdb/19277 (28.11.2003)

Perhaps I'll be able to post a program/table definition that can reproduce the problem.

Best regards,
Gabriel Matter




--
MaxDB Discussion Mailing List
For list archives: http://lists.mysql.com/maxdb
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]