[ 
https://issues.apache.org/jira/browse/HBASE-2353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12848539#action_12848539
 ] 

stack commented on HBASE-2353:
------------------------------

bq. Under what scenarios would you fail to update memstore? It seems to me that 
those scenarios necessitate a full RS stop. 

I suppose I was thinking the memstore update would fail because the RS had 
crashed/stopped.  Can't think of any reason we'd partially fail.  The client 
wouldn't get a return code even though edits had gone in, because the bulk put 
had not completed (the client would see an exception).

Then there is the case where we add N of the M edits to the WAL file before we 
hit some HDFS issue that forces us to return to the client.  In this case, 
wouldn't you have to report that the bulk put had completely failed, since no 
edits had made it to the MemStore?

It seems like you have to process the bulk put, row by row.
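To make the row-by-row idea concrete, here is a minimal sketch (not the HBase implementation; `Wal`, `appendAndSync`, and `processBulkPut` are hypothetical names) of processing a bulk put one row at a time, so that a mid-batch WAL failure can be reported precisely rather than as an all-or-nothing result:

```java
import java.util.ArrayList;
import java.util.List;

public class RowByRowPut {
    // Simulated WAL that fails after a fixed number of appends,
    // standing in for an HDFS error hit partway through the batch.
    static class Wal {
        private final int failAfter;
        private int appended = 0;
        Wal(int failAfter) { this.failAfter = failAfter; }
        void appendAndSync(String edit) {
            if (appended >= failAfter) {
                throw new RuntimeException("simulated HDFS failure");
            }
            appended++;
        }
    }

    /** Returns the index of the first failed row, or edits.size() if all succeeded. */
    static int processBulkPut(List<String> edits, Wal wal, List<String> memstore) {
        for (int i = 0; i < edits.size(); i++) {
            try {
                wal.appendAndSync(edits.get(i)); // make the edit durable first...
            } catch (RuntimeException e) {
                return i;                        // ...and report exactly where we stopped
            }
            memstore.add(edits.get(i));          // ...only then make it visible in MemStore
        }
        return edits.size();
    }

    public static void main(String[] args) {
        List<String> edits = List.of("r1", "r2", "r3", "r4");
        List<String> memstore = new ArrayList<>();
        int firstFailed = processBulkPut(edits, new Wal(2), memstore);
        // The first two rows succeeded and are in the MemStore; rows 3 and 4 were never applied.
        System.out.println("succeeded=" + firstFailed + " memstore=" + memstore.size());
    }
}
```

Because the MemStore is only updated after the corresponding WAL append succeeds, the client can be told which prefix of the batch went in, instead of guessing whether edits landed.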



> HBASE-2283 removed bulk sync optimization for multi-row puts
> ------------------------------------------------------------
>
>                 Key: HBASE-2353
>                 URL: https://issues.apache.org/jira/browse/HBASE-2353
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: ryan rawson
>             Fix For: 0.21.0
>
>
> prior to HBASE-2283 we used to call flush/sync once per put(Put[]) call 
> (ie: batch of commits).  Now we do it for every row.  
> This makes bulk uploads slower if you are using WAL.  Is there an acceptable 
> solution to achieve both safety and performance by bulk-sync'ing puts?  Or 
> would this not work in face of atomic guarantees?
> discuss!
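
The optimization being discussed can be sketched as follows (a hypothetical simulation, not the actual HLog code; `Wal`, `bulkSync`, and `perRowSync` are made-up names): append every edit in the batch and sync once, versus syncing after each row.

```java
import java.util.List;

public class BulkSyncSketch {
    // Counts appends and syncs; sync() stands in for an HDFS flush/sync round trip.
    static class Wal {
        int appends = 0;
        int syncs = 0;
        void append(String edit) { appends++; }
        void sync() { syncs++; }
    }

    // One sync for the whole batch (the removed optimization).
    static int bulkSync(List<String> edits, Wal wal) {
        for (String e : edits) wal.append(e);
        wal.sync();
        return wal.syncs;
    }

    // One sync per row (the behavior after HBASE-2283).
    static int perRowSync(List<String> edits, Wal wal) {
        for (String e : edits) {
            wal.append(e);
            wal.sync();
        }
        return wal.syncs;
    }

    public static void main(String[] args) {
        List<String> batch = List.of("r1", "r2", "r3");
        System.out.println("bulk syncs=" + bulkSync(batch, new Wal())
                + ", per-row syncs=" + perRowSync(batch, new Wal()));
    }
}
```

The bulk variant pays one sync round trip per batch instead of one per row, which is why removing it slows WAL-enabled bulk uploads; the open question in this issue is whether that saving can be kept without weakening per-row durability and atomicity guarantees.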

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
