> (i) It looks like we make several calls to this.write.append(), which in turn does a bunch of individual out.write calls (to the DFSOutputStream), as opposed to just one interaction with the underlying DFS. If so, how do we guarantee that all the edits either make it to HDFS or not, atomically? Or is this just broken?
Yeah, I thought about that too, but I'm not sure how we can do a single DFS operation for any number of KVs.

> (ii) The updates to memstore should happen after the sync rather than before, correct? Otherwise, there is the danger that the write to DFS (sync) fails for some reason & we return an error to the client, but we have already taken edits to the memstore. So subsequent reads could serve uncommitted data.

Indeed. The syncWal call was taken back up into HRS as a way to optimize batch Puts, but the fact that it's called after all the MemStore operations is indeed a problem.

I think we need to fix both (i) and (ii) by ensuring we do only a single append for whatever we have to put, then syncWal once, before processing the MemStore. But the other problem here is that, in the case of a Put[], the row locks have to be taken out on all rows before everything else, or else we aren't atomic. And then I think some checks are run under HRegion that we would also need to run before everything else. Quite a big change, but I think it's needed.

J-D
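To make the proposed ordering concrete, here is a minimal sketch of the batch-Put flow described above: take all row locks first, do a single WAL append for the whole batch, sync once, and only then apply the edits to the MemStore. All class and method names here are illustrative stand-ins, not HBase's actual HRegion/HLog API:

```java
import java.util.*;

// Hypothetical sketch of the proposed batch-Put ordering. The WAL and
// MemStore below are plain collections standing in for the real thing.
public class BatchPutSketch {
    static final List<String> wal = new ArrayList<>();        // stand-in for the HLog
    static final Map<String, String> memstore = new HashMap<>();
    static boolean walSynced = false;

    // (i) One append for the whole batch, so the edits reach the log
    // together instead of as many individual appends.
    static void appendBatchToWal(Map<String, String> edits) {
        wal.add("batch:" + edits.keySet());
    }

    // (ii) One sync BEFORE any MemStore change; if this throws, the client
    // sees an error and no uncommitted edit is ever readable.
    static void syncWal() {
        walSynced = true;
    }

    static void batchPut(Map<String, String> edits) {
        // Lock every row up front (sorted to keep lock order deterministic),
        // otherwise a Put[] is not atomic with respect to concurrent writers.
        List<String> lockedRows = new ArrayList<>(edits.keySet());
        Collections.sort(lockedRows);
        try {
            appendBatchToWal(edits);   // single append for the batch
            syncWal();                 // sync before touching the MemStore
            memstore.putAll(edits);    // only now are the edits visible to reads
        } finally {
            lockedRows.clear();        // release the row locks
        }
    }

    public static void main(String[] args) {
        Map<String, String> edits = new TreeMap<>();
        edits.put("row1", "v1");
        edits.put("row2", "v2");
        batchPut(edits);
        System.out.println("wal appends=" + wal.size()
                + " synced=" + walSynced
                + " memstore entries=" + memstore.size());
    }
}
```

The point of the sketch is just the ordering: with one append and one sync ahead of the MemStore updates, a sync failure leaves the MemStore untouched, so a client error can never coincide with readable uncommitted data.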