[ https://issues.apache.org/jira/browse/HBASE-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Clint Morgan updated HBASE-1880:
--------------------------------

    Attachment: 1880-v2.patch

Looks like putting edits in the memstore and not flushing is bad. If we
crash again, we will lose those edits.

This patch adds a flush if we read any edits during log recovery.

This fixes my failing tests; running the full HBase test suite now.
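
A minimal sketch of what the patch does, with assumed names
(readEditsFromLog, reconstructionLog, flushcache) standing in for the
actual 0.20 recovery code:

    // Replay WAL edits through the memstore so MemStore's
    // delete-handling logic applies, then force a flush so a second
    // crash cannot lose the recovered edits.
    long editsApplied = 0;
    for (KeyValue kv : readEditsFromLog(reconstructionLog)) { // assumed helper
      if (kv.isDeleteType()) {
        memstore.delete(kv);  // the special delete logic lives here
      } else {
        memstore.add(kv);
      }
      editsApplied++;
    }
    if (editsApplied > 0) {
      region.flushcache();    // persist recovered edits to an HFile now
    }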

> DeleteColumns are not recovered properly from the write-ahead-log
> -----------------------------------------------------------------
>
>                 Key: HBASE-1880
>                 URL: https://issues.apache.org/jira/browse/HBASE-1880
>             Project: Hadoop HBase
>          Issue Type: Bug
>          Components: regionserver
>    Affects Versions: 0.20.0, 0.20.1, 0.21.0
>            Reporter: Clint Morgan
>            Priority: Critical
>         Attachments: 1880-v2.patch, 1880.patch
>
>
> I found a couple of issues:
>  - The timestamp is being set to "now" only after the edit has been
> written to the WAL. So if the WAL is flushed on that write, the edit
> goes in with a timestamp of LATEST_TIMESTAMP (Long.MAX_VALUE) and is
> effectively lost (see the sketch after this list).
>  - Even after that fix, I had trouble getting the delete to apply
> properly. In my case, the WAL had a put to a column, then a DeleteColumn
> for the same column. The DeleteColumn KV had a later timestamp, but it
> was still lost on recovery. I traced around a bit, and it looks like the
> current approach of just using an HFile.Writer to write the set of KVs
> read from the log will not work: there is special logic in MemStore for
> deletes that needs to happen before writing. I got around this by adding
> to the memstore in the log recovery process (as in the sketch in the
> comment above). Not sure if there are other implications of this.
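
A minimal sketch of the fix for the first issue above; the KeyValue
calls (isLatestTimestamp, updateLatestStamp) and the append call are
assumptions about the 0.20-era API, not the actual patch:

    // Stamp edits with the real time BEFORE appending to the WAL, so
    // a flush triggered by that append never persists an edit whose
    // timestamp is still the LATEST_TIMESTAMP placeholder.
    byte[] now = Bytes.toBytes(System.currentTimeMillis());
    for (KeyValue kv : edits) {
      if (kv.isLatestTimestamp()) {  // ts still Long.MAX_VALUE?
        kv.updateLatestStamp(now);   // set it to the real time
      }
    }
    hlog.append(regionInfo, edits);  // hypothetical WAL append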

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
