[
https://issues.apache.org/jira/browse/TRAFODION-2775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16209701#comment-16209701
]
ASF GitHub Bot commented on TRAFODION-2775:
-------------------------------------------
Github user sureshsubbiah commented on a diff in the pull request:
https://github.com/apache/incubator-trafodion/pull/1267#discussion_r145482847
--- Diff: core/sql/exp/ExpHbaseInterface.cpp ---
@@ -842,7 +842,9 @@ Lng32 ExpHbaseInterface_JNI::insertRow(
transID = getTransactionIDFromContext();
retCode_ = client_->insertRow((NAHeap *)heap_, tblName.val, hbs_,
useTRex_, transID, rowID, row, timestamp,
- checkAndPut, asyncOperation, useRegionXn,
&htc);
+ checkAndPut, asyncOperation, useRegionXn,
+ 0, // checkAndPut is false, so colIndexToCheck is not used
--- End diff ---
The java code in HTableClient::putRow has this line of code, which ensures that the
value passed in for colIndexToCheck has no effect when checkAndPut is FALSE. -1 would
have been easier to read. I can make that change later, if you agree.
if (checkAndPut && colIndex == colIndexToCheck) {
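For context, a self-contained sketch of that guard (illustrative only, not the actual
HTableClient code): the qualifier used later in the checkAndPut call is captured only
when checkAndPut is true, so the 0 passed in above is never acted on.

    import java.util.List;

    // Illustrative only -- not the actual HTableClient code. It shows why the value
    // passed for colIndexToCheck is never acted on when checkAndPut is false: the
    // qualifier to check is captured only inside the guarded branch.
    public class ColIndexToCheckSketch {
      static String pickColumnToCheck(List<String> qualifiers,
                                      boolean checkAndPut, int colIndexToCheck) {
        String columnToCheck = null;
        for (int colIndex = 0; colIndex < qualifiers.size(); colIndex++) {
          // ... the Put for this column would be built here ...
          if (checkAndPut && colIndex == colIndexToCheck) {
            columnToCheck = qualifiers.get(colIndex); // remembered for the checkAndPut call
          }
        }
        return columnToCheck; // null whenever checkAndPut is false, whatever colIndexToCheck is
      }

      public static void main(String[] args) {
        List<String> cols = List.of("c1", "c2", "c3");
        System.out.println(pickColumnToCheck(cols, false, 0)); // null -> 0 is harmless
        System.out.println(pickColumnToCheck(cols, true, 1));  // c2
      }
    }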
> Insert does not raise duplicate row error for hbase format table with
> defaulted first column
> ---------------------------------------------------------------------------------------------
>
> Key: TRAFODION-2775
> URL: https://issues.apache.org/jira/browse/TRAFODION-2775
> Project: Apache Trafodion
> Issue Type: Bug
> Components: sql-exe
> Affects Versions: any
> Reporter: Suresh Subbiah
> Assignee: Suresh Subbiah
> Fix For: 2.3-incubating
>
>
> This issue was found by Gunnar Tapper and Carol Pearson.
> With HBase format Trafodion tables (each column in a row is a separate Cell),
> if the first column in the table can be defaulted, then uniqueness violations
> are not raised as expected for INSERT statements. Here is an example:
> create table def1 (c1 int, c2 int not null, c3 int, primary key (c2))
> attributes hbase format;
> insert into def1 (c2) values (1);
> -- the next insert raises a unique constraint error, as expected
> insert into def1 (c2, c3) values (1,3);
> -- the next insert does not raise the constraint error (this is the bug)
> insert into def1 (c1, c2) values (1,1);
> The problem is that during the checkAndPut call for INSERT we specify the column
> to be checked as the one that has index 0 in the row being inserted. This would
> be the first column being inserted for the row, in DDL order, once omitted
> columns are removed. Columns with a default value can be omitted in a given
> INSERT if they are not part of the clustering key.
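> To make the failure mode concrete, here is a minimal standalone sketch of the
> underlying HBase 1.x client semantics (the table name, column family, and
> qualifiers are illustrative, not the actual Trafodion encodings):
>
>   import org.apache.hadoop.hbase.HBaseConfiguration;
>   import org.apache.hadoop.hbase.TableName;
>   import org.apache.hadoop.hbase.client.Connection;
>   import org.apache.hadoop.hbase.client.ConnectionFactory;
>   import org.apache.hadoop.hbase.client.Put;
>   import org.apache.hadoop.hbase.client.Table;
>   import org.apache.hadoop.hbase.util.Bytes;
>
>   public class DupCheckSketch {
>     public static void main(String[] args) throws Exception {
>       try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
>            Table t = conn.getTable(TableName.valueOf("DEF1_SKETCH"))) { // hypothetical table
>         byte[] row = Bytes.toBytes(1);    // encoded key, c2 = 1
>         byte[] cf  = Bytes.toBytes("cf"); // illustrative column family
>         byte[] c1  = Bytes.toBytes("c1");
>         byte[] c2  = Bytes.toBytes("c2");
>
>         // "insert into def1 (c2) values (1)": only the c2 cell is stored; the defaulted
>         // (NULL) columns c1 and c3 are not materialized as cells in an HBase format row.
>         t.put(new Put(row).addColumn(cf, c2, Bytes.toBytes(1)));
>
>         // The duplicate insert, "insert into def1 (c1, c2) values (1,1)":
>         Put dup = new Put(row).addColumn(cf, c1, Bytes.toBytes(1))
>                               .addColumn(cf, c2, Bytes.toBytes(1));
>
>         // A null expected value means "apply the Put only if this cell does NOT exist".
>         // Checking the key column c2 detects the duplicate (returns false -> error 8102):
>         System.out.println(t.checkAndPut(row, cf, c2, null, dup)); // false
>         // Checking c1, which was never written, lets the duplicate slip through:
>         System.out.println(t.checkAndPut(row, cf, c1, null, dup)); // true -> the bug
>       }
>     }
>   }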
> The fix uses the fact that clustering key columns are always present in the row
> being inserted, even if they can be defaulted and are not explicitly listed in
> the INSERT statement. We now pass the index, within the row being inserted, of
> the first clustering key column to the java layer. The java layer gets the
> column name/qualifier from the java byte buffer version of the row being
> inserted and uses it in the checkAndPut call. Note that the index of the first
> clustering key column depends on both which defaultable columns are skipped and
> the order of columns in the DDL; it does not depend on the DDL alone.
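> A minimal sketch of the idea (names and structure are illustrative, not the
> actual Trafodion code): the index to check is computed per statement, over the
> columns that are actually present in the inserted row.
>
>   import java.util.List;
>
>   public class FirstKeyColumnSketch {
>     // Position, within the inserted row, of the first column that belongs to the
>     // clustering key. Key columns are always present, so this always finds one.
>     static int firstClusteringKeyIndex(List<String> insertedColumns, List<String> keyColumns) {
>       for (int i = 0; i < insertedColumns.size(); i++) {
>         if (keyColumns.contains(insertedColumns.get(i)))
>           return i;
>       }
>       throw new IllegalStateException("clustering key column missing from inserted row");
>     }
>
>     public static void main(String[] args) {
>       List<String> key = List.of("c2");
>       // insert into def1 (c2, c3): c2 is at index 0, so checking index 0 happens to work
>       System.out.println(firstClusteringKeyIndex(List.of("c2", "c3"), key)); // 0
>       // insert into def1 (c1, c2): c2 is at index 1, so index 0 (c1) must not be checked
>       System.out.println(firstClusteringKeyIndex(List.of("c1", "c2"), key)); // 1
>     }
>   }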
> With the change we get the expected error:
> >>insert into def1 (c1, c2) values (1,1);
> *** ERROR[8102] The operation is prevented by a unique constraint.
> --- 0 row(s) inserted.
> >>insert into def1 (c2, c1) values (51,1);
> --- 1 row(s) inserted.
> >>insert into def1 (c2, c1) values (1,51);
> *** ERROR[8102] The operation is prevented by a unique constraint.
> --- 0 row(s) inserted.