[ https://issues.apache.org/jira/browse/HBASE-4570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13128008#comment-13128008 ]

Todd Lipcon commented on HBASE-4570:
------------------------------------

Jon and I spent the afternoon with his test cases. We've found the issue - it's 
a nice one!

In KeyValue, we have the following code:
{code}
  public byte [] getRow() {
    if (rowCache == null) {
      int o = getRowOffset();
      short l = getRowLength();
      // rowCache becomes visible to other threads here, still all zeros...
      rowCache = new byte[l];
      // ...and is only filled in afterwards by this copy.
      System.arraycopy(getBuffer(), o, rowCache, 0, l);
    }
    return rowCache;
  }
{code}
which is called extensively by KeyValueHeaps throughout the scanner code. In 
the case of scanning MemStore, an individual KeyValue ends up as {{next}} in 
multiple MemStoreScanners. Then, if multiple threads call {{getRow}} at the 
same time, we see the following race:
- Thread 1 sees {{rowCache}} as null, and initializes {{rowCache = new 
byte[...]}}
- Thread 2 sees {{rowCache}} as non-null, and returns a byte array of all 0s
- Thread 1 fills in the row with {{System.arraycopy}}, and returns the right 
result

The byte array returned to Thread 2 is modified while it's working with it, so 
depending on the interleaving of events, it can cause an invalid heap, or 
invalid results, or a weird split row like Jon was seeing, etc.
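To make the interleaving concrete, here is a self-contained sketch that forces the bad schedule deterministically. {{RacyCache}}, the latch names, and the per-thread methods are illustrative stand-ins, not HBase code; the latches simulate the scheduler switching threads between the allocation and the {{arraycopy}}.

```java
import java.util.concurrent.CountDownLatch;

class RacyCache {
    private final byte[] buffer = {1, 2, 3};
    byte[] rowCache;                          // plain field, as in the buggy code
    final CountDownLatch allocated = new CountDownLatch(1);
    final CountDownLatch observed = new CountDownLatch(1);

    byte[] getRowAsThread1() throws InterruptedException {
        if (rowCache == null) {
            rowCache = new byte[buffer.length]; // published while still all zeros
            allocated.countDown();              // hook: simulate a scheduler switch
            observed.await();                   // hold off until "Thread 2" has looked
            System.arraycopy(buffer, 0, rowCache, 0, buffer.length);
        }
        return rowCache;
    }

    byte[] getRowAsThread2() throws InterruptedException {
        allocated.await();                  // runs only after Thread 1's allocation
        byte[] snapshot = rowCache.clone(); // record what Thread 2 sees: all zeros
        observed.countDown();               // now let Thread 1 finish the copy
        return snapshot;
    }
}
```

The snapshot taken by "Thread 2" is non-null but zero-filled, even though the shared array ends up holding the right bytes once Thread 1 completes the copy.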

The fix is pretty simple - we need to declare {{rowCache}} volatile, build the 
row in a local array, and only assign the volatile reference once the copy is 
complete. If the volatile write is too slow, we could use an 
{{AtomicReferenceFieldUpdater}} with {{lazySet}} to put the cost only on the 
write side, but I don't think it really matters.
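For reference, a minimal sketch of the volatile fix (simplified field and method bodies to keep it self-contained; not the actual patch):

```java
// Sketch of the fix: publish the array only after it is fully copied.
class KeyValue {
    private final byte[] buffer;
    private volatile byte[] rowCache; // volatile: safe publication between threads

    KeyValue(byte[] buffer) { this.buffer = buffer; }

    // Simplified to cover the whole buffer for this sketch.
    int getRowOffset() { return 0; }
    short getRowLength() { return (short) buffer.length; }
    byte[] getBuffer() { return buffer; }

    public byte[] getRow() {
        byte[] row = rowCache;                  // single read of the volatile
        if (row == null) {
            int o = getRowOffset();
            short l = getRowLength();
            byte[] tmp = new byte[l];
            System.arraycopy(getBuffer(), o, tmp, 0, l);
            rowCache = tmp;                     // publish only the finished copy
            row = tmp;
        }
        return row;
    }
}
```

Two threads may still race to build the cache, but each one either sees null and returns its own fully-copied local array, or sees a completely initialized one - never a half-built array.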
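And a sketch of the {{lazySet}} alternative. Note the field still has to be declared volatile for {{AtomicReferenceFieldUpdater}}; {{lazySet}} only cheapens the write (an ordered store rather than a full volatile store):

```java
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

class KeyValueAlt {
    private final byte[] buffer;
    private volatile byte[] rowCache; // the updater requires a volatile field

    private static final AtomicReferenceFieldUpdater<KeyValueAlt, byte[]> ROW_CACHE =
        AtomicReferenceFieldUpdater.newUpdater(KeyValueAlt.class, byte[].class, "rowCache");

    KeyValueAlt(byte[] buffer) { this.buffer = buffer; }

    public byte[] getRow() {
        byte[] row = rowCache;
        if (row == null) {
            row = new byte[buffer.length];
            System.arraycopy(buffer, 0, row, 0, buffer.length);
            ROW_CACHE.lazySet(this, row); // store-store barrier only, no full fence
        }
        return row;
    }
}
```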

> Scan ACID problem with concurrent puts.
> ---------------------------------------
>
>                 Key: HBASE-4570
>                 URL: https://issues.apache.org/jira/browse/HBASE-4570
>             Project: HBase
>          Issue Type: Bug
>          Components: client, regionserver
>    Affects Versions: 0.90.1, 0.90.3
>            Reporter: Jonathan Hsieh
>         Attachments: 4570-instrumentation.tgz, hbase-4570.tgz
>
>
> When scanning a table sometimes rows that have multiple column families get 
> split into two rows if there are concurrent writes.  In this particular case 
> we are overwriting the contents of a Get directly back onto itself as a Put.
> For example, this is a two cf row (with "f1", "f2", .. "f9" cfs).  It is 
> actually returned as two rows (#55 and #56). Interestingly if the two were 
> merged we would have a single proper row.
> Row row0000024461 had time stamps: [55: 
> keyvalues={row0000024461/f0:data/1318200440867/Put/vlen=1000, 
> row0000024461/f0:qual/1318200440867/Put/vlen=10, 
> row0000024461/f1:data/1318200440867/Put/vlen=1000, 
> row0000024461/f1:qual/1318200440867/Put/vlen=10, 
> row0000024461/f2:data/1318200440867/Put/vlen=1000, 
> row0000024461/f2:qual/1318200440867/Put/vlen=10, 
> row0000024461/f3:data/1318200440867/Put/vlen=1000, 
> row0000024461/f3:qual/1318200440867/Put/vlen=10, 
> row0000024461/f4:data/1318200440867/Put/vlen=1000, 
> row0000024461/f4:qual/1318200440867/Put/vlen=10}, 
> 56: keyvalues={row0000024461/f5:data/1318200440867/Put/vlen=1000, 
> row0000024461/f5:qual/1318200440867/Put/vlen=10, 
> row0000024461/f6:data/1318200440867/Put/vlen=1000, 
> row0000024461/f6:qual/1318200440867/Put/vlen=10, 
> row0000024461/f7:data/1318200440867/Put/vlen=1000, 
> row0000024461/f7:qual/1318200440867/Put/vlen=10, 
> row0000024461/f8:data/1318200440867/Put/vlen=1000, 
> row0000024461/f8:qual/1318200440867/Put/vlen=10, 
> row0000024461/f9:data/1318200440867/Put/vlen=1000, 
> row0000024461/f9:qual/1318200440867/Put/vlen=10}]
> I've only tested this on 0.90.1+patches and 0.90.3+patches, but it is 
> consistent and duplicatable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
