[ https://issues.apache.org/jira/browse/HDDS-676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659617#comment-16659617 ]
Anu Engineer commented on HDDS-676:
-----------------------------------

[~shashikant] The changes look much better; thanks for separating this into two different patches. Some very minor comments below:

*testPutKeyAndGetKey*
1. {{long currentTime = Time.now();}} // Unused variable.
2. {{OzoneKeyDetails keyDetails = (OzoneKeyDetails) bucket.getKey(keyName);}} // The cast is not needed.
3. It might be a good idea to verify the exception message in the {{catch (Exception e)}} block, not just the exception type.

*XceiverClientGrpc.java*
{{responseProto = sendCommandAsync(request, dn).get();}}
I agree with this premise; that is, we only talk to the next datanode if we get a failure on the first datanode. If that is the case, do we need all these async framework changes, hash tables, etc.?

> Enable Read from open Containers via Standalone Protocol
> --------------------------------------------------------
>
>                 Key: HDDS-676
>                 URL: https://issues.apache.org/jira/browse/HDDS-676
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>            Priority: Major
>         Attachments: HDDS-676.001.patch, HDDS-676.002.patch,
> HDDS-676.003.patch, HDDS-676.004.patch, HDDS-676.005.patch,
> HDDS-676.006.patch, HDDS-676.007.patch
>
> With the BlockCommitSequenceId (BCSID) getting updated per block commit on
> open containers in both OM and the datanode, Ozone client reads can go
> through the Standalone protocol, not necessarily requiring Ratis. The client
> should verify the BCSID of the container holding the data block, which should
> always be greater than or equal to the BCSID of the block to be read, and the
> existing block's BCSID should exactly match that of the block to be read. As
> part of this, the client can try to read from a replica with a supplied BCSID
> and fail over to the next replica in case the block does not exist on one
> replica.
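The sequential fallback the reviewer describes (only contact the next datanode after the first one fails) can be sketched without any async machinery as a plain ordered loop over replicas. This is a minimal illustration of that pattern, not Ozone's actual API: {{SequentialFailover}}, {{readWithFailover}}, and the simulated replica names are hypothetical, and the real client would issue container read commands and compare BCSIDs instead of calling a generic function.

```java
import java.util.List;
import java.util.function.Function;

public class SequentialFailover {

  // Try each replica in order; return the first successful result.
  // Only on failure do we move to the next replica, so no async
  // framework or bookkeeping tables are required for this pattern.
  public static <R, T> T readWithFailover(List<R> replicas, Function<R, T> read) {
    RuntimeException last = null;
    for (R replica : replicas) {
      try {
        return read.apply(replica);
      } catch (RuntimeException e) {
        last = e; // remember the failure and fall over to the next replica
      }
    }
    throw last != null ? last : new RuntimeException("no replicas available");
  }

  public static void main(String[] args) {
    List<String> datanodes = List.of("dn1", "dn2", "dn3");
    // Simulate dn1 failing (e.g. block missing / BCSID mismatch); dn2 succeeds.
    String data = readWithFailover(datanodes, dn ->
        dn.equals("dn1") ? fail("block not found on " + dn)
                         : "block-data-from-" + dn);
    System.out.println(data); // prints block-data-from-dn2
  }

  private static String fail(String msg) {
    throw new RuntimeException(msg);
  }
}
```

If the per-replica attempts really are strictly sequential like this, a synchronous call (or {{sendCommandAsync(...).get()}}, which blocks anyway) is sufficient, which is the point of the reviewer's question.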
-- This message was sent by Atlassian JIRA (v7.6.3#76005)