[ https://issues.apache.org/jira/browse/HDDS-226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16540887#comment-16540887 ]

Anu Engineer commented on HDDS-226:
-----------------------------------

Looks good overall. Some very minor comments.

* ChunkGroupOutputStream.java: Looks like your editor collapsed the explicit 
imports into a wildcard import; could we please revert it?
{{import org.apache.hadoop.ozone.om.helpers.*;}}

* OmBlockInfo.java ==> OzoneBlockInfo.java; it is easier to read Ozone than Om. 
Another orthogonal question (sorry for these random comments): I see we already 
have a BlockID class, and then we create a new class called OmBlockInfo that 
adds one more field, blockLength. Why is this field not added to BlockID 
itself? I am presuming we have some strong reason for not adding it there; the 
sketch below shows the alternative I am thinking of.
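(Rough sketch only; the BlockID field names below are assumptions, not taken 
from the patch.)
{code}
// Hypothetical alternative: carry the committed length on BlockID itself
// instead of introducing a separate OmBlockInfo wrapper class.
public class BlockID {
  private long containerID;
  private long localID;
  // The extra field that OmBlockInfo currently adds.
  private long blockLength;

  public long getBlockLength() {
    return blockLength;
  }

  public void setBlockLength(long blockLength) {
    this.blockLength = blockLength;
  }
}
{code}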

* updateBlockLength():
I am very confused by this code. Can you please check? Why are we checking 
keyArgs here; did you intend to check blockInfoList?
{code}
    if (keyArgs != null) {
      OmBlockInfo blockInfo = blockInfoList.get(index);
      long originalLength = blockInfo.getBlockLength();
      blockInfo.setBlockLength(originalLength + length);
    }
  }
{code}
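If the intent was to guard against a missing block list, I would have expected 
something along these lines (a sketch only; the method signature is my 
assumption, since it is not part of the quoted snippet):
{code}
  void updateBlockLength(int index, long length) {
    // Check the list we are about to index into, not keyArgs.
    if (blockInfoList != null && index < blockInfoList.size()) {
      OmBlockInfo blockInfo = blockInfoList.get(index);
      long originalLength = blockInfo.getBlockLength();
      blockInfo.setBlockLength(originalLength + length);
    }
  }
{code}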

* Nit: updateBlockLength --> rename to incrementBlockLength?
* OmBlockInfo.java: Nit: Unused import.
* OmKeysArgs.java: More of a question: when would this sum not be equal to the 
dataSize?
{code}
  public long getDataSize() {
    if (blockInfoList == null) {
      return dataSize;
    } else {
      return blockInfoList.parallelStream().mapToLong(e -> e.getBlockLength())
          .sum();
    }
  }
{code}
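The only case I can construct where they differ is the one from the issue 
description, i.e. the client writes less than it pre-allocated (numbers below 
are just an example):
{code}
    // Key allocated with one SCM block, e.g. 256 MB, so dataSize is 256 MB.
    long allocatedSize = 256L * 1024 * 1024;
    // Client writes only 10 MB and commits; the single blockInfoList entry
    // then has blockLength = 10 MB, so getDataSize() returns 10 MB, not 256 MB.
    long committedSum = 10L * 1024 * 1024;
{code}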

* Nit: validateBlockLengthWithCommitKey -> testValidateBlockLengthWithCommitKey

* In the test, replace
{{String value = "sample value";}}
with
{{String value = RandomStringUtils.random(RandomUtils.nextInt(0, 1024));}}
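For example (assuming commons-lang3 is already on the test classpath; the 
helper name is just a suggestion):
{code}
import org.apache.commons.lang3.RandomStringUtils;
import org.apache.commons.lang3.RandomUtils;

  // A value of random content and random length (0-1023 chars), so the test
  // does not always commit the same key size.
  private static String randomValue() {
    return RandomStringUtils.random(RandomUtils.nextInt(0, 1024));
  }
{code}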

> Client should update block length in OM while committing the key
> ----------------------------------------------------------------
>
>                 Key: HDDS-226
>                 URL: https://issues.apache.org/jira/browse/HDDS-226
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Manager
>            Reporter: Mukul Kumar Singh
>            Assignee: Shashikant Banerjee
>            Priority: Major
>             Fix For: 0.2.1
>
>         Attachments: HDDS-226.00.patch, HDDS-226.01.patch
>
>
> Currently the client allocates a key with the SCM block size; however, a 
> client can always write a smaller amount of data and close the key. The block 
> length in this case should be updated in OM.


