Hadoop QA commented on PHOENIX-5795:

{color:red}-1 overall{color}.  Here are the results of testing the latest 
  attachment against the 4.x-HBase-1.5 branch at commit 
  ATTACHMENT ID: 12997641

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 25 new or modified tests.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:red}-1 release audit{color}.  The applied patch generated 1 release audit warning (more than the master's current 0 warnings).

    {color:red}-1 lineLengths{color}.  The patch introduces the following lines longer than 100:
    +diff --git 
    +diff --git 
    ++     * IndexMaintainer.getIndexedColumns() returns the data column references for indexed columns. The data columns are
    ++     * grouped into three classes, pk columns (data table pk columns), the indexed columns (the columns for which
    ++     * we want to have indexing; they form the prefix for the primary key for the index table (after salt and tenant id))
    ++     * and covered columns. The purpose of this method is to find out if all the indexed columns are included in the
    +     private boolean hasAllIndexedColumns(IndexMaintainer indexMaintainer, MultiMutation multiMutation) {
    +-                        Bytes.compareTo(CellUtil.cloneQualifier(cell), columnReference.getQualifier() ) == 0) {
    +-                                           BatchMutateContext context, long now, PhoenixIndexMetaData indexMetaData)

     {color:red}-1 core tests{color}.  The patch failed these unit tests:

Test results: 
Release audit warnings: 
Console output: 

This message is automatically generated.

> Supporting selective queries for index rows updated concurrently
> ----------------------------------------------------------------
>                 Key: PHOENIX-5795
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5795
>             Project: Phoenix
>          Issue Type: Sub-task
>            Reporter: Kadir OZDEMIR
>            Assignee: Kadir OZDEMIR
>            Priority: Critical
>         Attachments: PHOENIX-5795.4.x-HBase-1.5.001.patch, 
> PHOENIX-5795.4.x-HBase-1.5.002.patch
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
> From the consistent indexing design (PHOENIX-5156) perspective, two or more 
> pending updates from different batches on the same data row are concurrent if 
> and only if, for all of these updates, the data table row state is read from 
> HBase under the row lock, and for none of them has the row lock been acquired 
> a second time to update the data table. In other words, all of them are in 
> the first update phase concurrently. For concurrent updates, the first two 
> update phases are done but the last update phase is skipped. This means the 
> data table row will be updated by these updates, but the corresponding index 
> table rows will be left in the unverified state. The read repair process will 
> then repair these unverified index rows during scans.
> In addition to leaving index rows unverified, concurrent updates may 
> generate index rows with incorrect row keys. For example, consider an 
> application that issues two upserts on the same row concurrently, where the 
> second upsert does not include one or more of the indexed columns. When these 
> updates arrive concurrently at IndexRegionObserver, the existing row state 
> will be null for both of them. This means the index updates will be generated 
> solely from the pending updates. The partial upsert with missing indexed 
> columns will generate an index row by assuming the missing indexed columns 
> have null values, and this assumption may not hold, as the other concurrent 
> upsert may have non-null values for those columns. If the application then 
> attempts to read the row back using a selective query on the index table, and 
> that query maps to an HBase scan that does not reach these unverified rows 
> due to their incorrect row keys, the application will not get the row content 
> back correctly.
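The incorrect row key scenario in the description can be sketched with a toy example. This is hypothetical illustration code, not Phoenix's actual IndexMaintainer logic: the key layout, the `indexRowKey` helper, and the separator byte are all simplified assumptions, standing in for the real index row key construction (indexed column values forming the key prefix, data row key appended).

```java
import java.util.Arrays;

// Hypothetical sketch: an index row key built solely from a pending mutation
// diverges when an indexed column is missing and assumed null, because no
// prior row state is visible to either concurrent update.
public class ConcurrentIndexKeySketch {

    // Toy index row key: indexed column value (null treated as empty)
    // followed by a separator byte and the data row key.
    static byte[] indexRowKey(byte[] indexedValue, byte[] dataRowKey) {
        byte[] prefix = (indexedValue == null) ? new byte[0] : indexedValue;
        byte[] key = new byte[prefix.length + 1 + dataRowKey.length];
        System.arraycopy(prefix, 0, key, 0, prefix.length);
        key[prefix.length] = 0; // separator
        System.arraycopy(dataRowKey, 0, key, prefix.length + 1, dataRowKey.length);
        return key;
    }

    public static void main(String[] args) {
        byte[] dataRowKey = "row1".getBytes();
        // First upsert sets the indexed column to "v1".
        byte[] keyFromFullUpsert = indexRowKey("v1".getBytes(), dataRowKey);
        // The concurrent partial upsert omits the indexed column; with no
        // existing row state, the column is assumed null.
        byte[] keyFromPartialUpsert = indexRowKey(null, dataRowKey);
        // The two index rows land at different keys, so a selective scan
        // keyed on the indexed value "v1" never reaches the unverified row
        // written by the partial upsert.
        System.out.println(Arrays.equals(keyFromFullUpsert, keyFromPartialUpsert));
    }
}
```

Running the sketch prints `false`: the partial upsert's index row ends up at a key a selective query on the indexed value would never scan, which is why the row cannot be repaired or returned by that query.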

This message was sent by Atlassian Jira
