[
https://issues.apache.org/jira/browse/HDFS-12853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316022#comment-16316022
]
Mukul Kumar Singh commented on HDFS-12853:
------------------------------------------
bq. If we don't have the synchronization mentioned above, it seems that the patch won't work?
Yes, that's correct; without the synchronization, the fix will not work.
However, the following lines in the latest patch add the synchronization:
{code}
if (requestProto.getCmdType() == ContainerProtos.Type.WriteChunk) {
  WriteChunkRequestProto write = requestProto.getWriteChunk();
  // Future recorded for this chunk when its data write was started.
  CompletableFuture<Message> stateMachineFuture =
      writeChunkMap.remove(write.getChunkData().getChunkName());
  // Chain the command so it runs only after the chunk data write completes.
  return stateMachineFuture
      .thenComposeAsync(v ->
          CompletableFuture.completedFuture(runCommand(requestProto)));
}
{code}
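To make the ordering explicit, here is a minimal, self-contained sketch of the same idea, with hypothetical class and method names rather than the actual ContainerStateMachine code: the data-write phase registers a per-chunk future in a map, and the apply phase removes that future and chains itself on it, so the apply can never run before the data write has completed.
{code}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch only; names are hypothetical, not the Ozone/Ratis code.
public class ChunkWriteOrderingSketch {
  private final ConcurrentMap<String, CompletableFuture<String>> writeChunkMap =
      new ConcurrentHashMap<>();

  // Phase 1: start the chunk data write asynchronously and remember its future.
  CompletableFuture<String> writeStateMachineData(String chunkName) {
    CompletableFuture<String> dataWrite = CompletableFuture.supplyAsync(
        () -> "data written for " + chunkName);
    writeChunkMap.put(chunkName, dataWrite);
    return dataWrite;
  }

  // Phase 2: apply the transaction only after the data write has completed.
  CompletableFuture<String> applyTransaction(String chunkName) {
    CompletableFuture<String> dataWrite = writeChunkMap.remove(chunkName);
    return dataWrite.thenComposeAsync(
        v -> CompletableFuture.completedFuture("committed " + chunkName));
  }

  public static void main(String[] args) {
    ChunkWriteOrderingSketch sketch = new ChunkWriteOrderingSketch();
    sketch.writeStateMachineData("chunk-1");
    System.out.println(sketch.applyTransaction("chunk-1").join());
  }
}
{code}
The thenComposeAsync call is what provides the synchronization: the lambda (runCommand in the actual patch) is not invoked until the future stored for that chunk has completed.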
> Ozone: Optimize chunk writes for Ratis by avoiding double writes
> ----------------------------------------------------------------
>
> Key: HDFS-12853
> URL: https://issues.apache.org/jira/browse/HDFS-12853
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Mukul Kumar Singh
> Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12853-HDFS-7240.001.patch,
> HDFS-12853-HDFS-7240.002.patch
>
>
> Ozone, in replicated mode, writes the data twice: once to the raft log and
> then to the state machine.
> This means the data is written twice during a chunk write, which is
> suboptimal. With RATIS-122, the state machine in Ozone can be optimized to
> write the data only once.
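For illustration only, a rough self-contained sketch (hypothetical types, not the Ratis or Ozone API) of the single-write idea described above: the raft log keeps only the chunk metadata, while the chunk data itself is written exactly once through the state machine.
{code}
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only; types and fields are hypothetical stand-ins.
public class SingleWriteSketch {
  static class ChunkMeta {
    final String chunkName;
    final int length;
    ChunkMeta(String chunkName, int length) {
      this.chunkName = chunkName;
      this.length = length;
    }
  }

  private final List<ChunkMeta> raftLog = new ArrayList<>();      // stand-in for the raft log
  private final Map<String, byte[]> chunkStore = new HashMap<>(); // stand-in for chunk files

  void writeChunk(String chunkName, byte[] data) {
    raftLog.add(new ChunkMeta(chunkName, data.length)); // the log records metadata only
    chunkStore.put(chunkName, data);                    // the chunk data is written exactly once
  }

  public static void main(String[] args) {
    SingleWriteSketch sketch = new SingleWriteSketch();
    sketch.writeChunk("chunk-1", "hello".getBytes(StandardCharsets.UTF_8));
    System.out.println("log entries: " + sketch.raftLog.size()
        + ", stored bytes: " + sketch.chunkStore.get("chunk-1").length);
  }
}
{code}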