[ 
https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16184960#comment-16184960
 ] 

Xiaoyu Yao commented on HDFS-12543:
-----------------------------------

Thanks [~vagarychen] for working on this and posting the patch. The patch looks 
pretty good to me overall. Here are a few comments.

ChunkGroupOutputStream.java
Line 112/114: one of the duplicate setKeyName calls can be removed.
 
Line 126: do we need to check and ensure all the containers of the key are 
created before any write? 
Will this cause a problem if someone creates a key with a large specified 
size, like 1 TB?
 
Line 171/214: we should never have ksmClient == null here, right? Should we 
change this to a Precondition check inside allocateNewBlock()?
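Just to illustrate what I mean, a rough sketch (the method body and field name 
below are assumptions for illustration, not the actual patch code):

    import com.google.common.base.Preconditions;

    // Sketch only: assumes ksmClient is a field of ChunkGroupOutputStream.
    private void allocateNewBlock(int index) throws IOException {
      Preconditions.checkNotNull(ksmClient,
          "ksmClient should never be null when allocating block %s", index);
      // ... existing allocation logic via ksmClient ...
    }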
 
Line 175/218: the exception message does not seem to match the code, where we 
check ksmClient == null. 
Do you want to handle the exception that might be thrown from allocateNewBlock()?
 
KsmKeyArgs.java

NIT: line 20 unnecessary extra line change
 
OpenKeyHandler.java

NIT: this class could be renamed to "OpenKeySession", especially since it is 
returned from OpenKey().
 
KSMMetrics.java

Line 224: I don't think we need to increment the number of KeyOps for block 
allocation.
 
KSMMetadataManager.java

Line 175-177: can we take a String keyName here to avoid converting keyName 
from bytes to string and then back from string to bytes?
You may use getKeyKeyPrefix() to get a string version of the key name.
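For illustration only (the method name below is made up; the real 
KSMMetadataManager API may look different), the idea is simply to accept the 
String form directly:

    // Hypothetical signature; avoids the byte[] -> String -> byte[] round trip.
    byte[] getDBKeyForKey(String volumeName, String bucketName, String keyName);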
 
KeySpaceManagerProtocol.java

Line 133: can you update the javadoc?
Line 137: NIT: typo in "isntance"
 
KeyManagerImpl.java

Line 168-173: that's a debatable optimization. Preallocation slows down the 
first request, which causes a long delay for large key creation. If we really 
want this, we need a limit on the number of blocks/containers preallocated; 
see the sketch below.
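If we keep the preallocation, a rough sketch of a cap could look like this 
(the config key and variable names are assumptions, not existing Ozone code):

    // Sketch only; the config key name is an assumption.
    int maxPreallocatedBlocks = conf.getInt(
        "ozone.key.preallocation.max.blocks", 64);
    long blocksNeeded = (keySize + blockSize - 1) / blockSize;
    long blocksToAllocate = Math.min(blocksNeeded, maxPreallocatedBlocks);
    for (int i = 0; i < blocksToAllocate; i++) {
      // allocate one block from SCM and record it in the key info
    }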
 
Line 209-219: why do we need a 10000-iteration retry loop here? Is it to avoid 
conflicts due to the openKey name being generated from a random id?
 
Line 239: how do we handle open keys that never get committed? Please file 
follow-up JIRAs for this if we plan to handle that later.
 
Line 255-256: should we use a WriteBatch here for the put/delete DB ops?
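Roughly along these lines, assuming the metadata store is LevelDB-backed and 
exposes the DB handle (illustrative only; the MetadataStore wrapper may have 
its own batch API, and the byte[] variable names below are made up):

    import org.iq80.leveldb.DB;
    import org.iq80.leveldb.WriteBatch;

    // Sketch: apply the delete of the open-key entry and the put of the
    // committed key entry atomically in a single batch.
    try (WriteBatch batch = db.createWriteBatch()) {
      batch.delete(openKeyBytes);
      batch.put(committedKeyBytes, keyInfoBytes);
      db.write(batch);
    }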

RpcClient.java

Line 267: this change is unnecessary. It was fixed recently due to a Jenkins 
warning about unnecessary boxing/unboxing.
 
TestMultipleContainerReadWrite.java

Can you elaborate on why testErrorWrite was removed? Do we expect writes over 
the specified size to be allowed after this change? We could disable the test 
and open a follow-up JIRA if it cannot be easily fixed.


> Ozone : allow create key without specifying size
> ------------------------------------------------
>
>                 Key: HDFS-12543
>                 URL: https://issues.apache.org/jira/browse/HDFS-12543
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Chen Liang
>            Assignee: Chen Liang
>              Labels: ozoneMerge
>         Attachments: HDFS-12543-HDFS-7240.001.patch, 
> HDFS-12543-HDFS-7240.002.patch, HDFS-12543-HDFS-7240.003.patch, 
> HDFS-12543-HDFS-7240.004.patch, HDFS-12543-HDFS-7240.005.patch, 
> HDFS-12543-HDFS-7240.006.patch
>
>
> Currently when creating a key, it is required to specify the total size of 
> the key. This makes it inconvenient for the case where a key is created and 
> data keeps coming and being appended. This JIRA is to remove the requirement 
> of specifying the size on key creation, and to allow appending to the key 
> indefinitely.


