[
https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15655139#comment-15655139
]
Chen Liang commented on HDFS-11127:
-----------------------------------
Thanks [~anu] for the comments! Here are my thoughts:
1. One thing with {{VolumeInfoProto}} is that it contains volume usage, and to
get this the server needs to request the information from the underlying
storage layer (SCM in this case), which takes time. Another thing is that a
listing request may potentially return a very large number of volumes. So I
want to restrict the listing to a minimal set of information to keep the
response message from getting too large (a rough sketch of this split follows
below).
2. That's a good point. A 4KB block would perform much better in our case than
1KB or 2KB (because there are fewer blocks), so we may indeed end up using 4KB
most (if not all) of the time. But I do want to leave open the possibility of a
different block size for now, just in case there are use cases where a smaller
block size is preferable (a sketch of keeping this configurable also follows
below). I will flag this and leave it open for further consideration (or for
comments from anyone else with a strong preference).
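
To make the split in point 1 concrete, here is a rough sketch (illustrative
names only, not the actual messages or classes in the patch): the listing
returns only cheap, server-local fields, while usage stays behind a separate
per-volume info call that has to go to SCM.

{code:java}
import java.util.List;

// Illustrative sketch only; the names here are hypothetical, not from the patch.
public final class VolumeListingSketch {

  /** Lightweight entry returned by a listing: no usage, so no SCM round trip. */
  static final class VolumeSummary {
    final String volumeName;
    final long volumeSizeBytes;   // requested size, already known to the server
    final int blockSizeBytes;     // configured block size, e.g. 4096

    VolumeSummary(String volumeName, long volumeSizeBytes, int blockSizeBytes) {
      this.volumeName = volumeName;
      this.volumeSizeBytes = volumeSizeBytes;
      this.blockSizeBytes = blockSizeBytes;
    }
  }

  /** Full info for a single volume; usage requires asking the storage layer (SCM). */
  static final class VolumeInfo {
    final VolumeSummary summary;
    final long usageBytes;        // potentially expensive to compute

    VolumeInfo(VolumeSummary summary, long usageBytes) {
      this.summary = summary;
      this.usageBytes = usageBytes;
    }
  }

  /** Hypothetical client-side view of the two calls. */
  interface CBlockClientSketch {
    List<VolumeSummary> listVolumes();          // cheap, may return many entries
    VolumeInfo infoVolume(String volumeName);   // one SCM lookup per call
  }
}
{code}

This way a list over thousands of volumes stays small and fast, and the
expensive usage lookup is only paid when a client explicitly asks for a single
volume's info.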
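And a minimal sketch of point 2, with a hypothetical configuration key (the
real key name is not decided here), showing the block size read from
configuration with a 4KB default rather than hardcoded:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative sketch only; the key name below is hypothetical.
public final class CBlockConfigSketch {
  public static final String DFS_CBLOCK_BLOCK_SIZE_KEY = "dfs.cblock.block.size";
  public static final int DFS_CBLOCK_BLOCK_SIZE_DEFAULT = 4 * 1024; // 4KB default

  private CBlockConfigSketch() { }

  /** Read the block size from configuration, falling back to the 4KB default. */
  public static int getBlockSize(Configuration conf) {
    return conf.getInt(DFS_CBLOCK_BLOCK_SIZE_KEY, DFS_CBLOCK_BLOCK_SIZE_DEFAULT);
  }
}
{code}

Deployments that really need a smaller block size can override the key;
everyone else just gets the 4KB default.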
> Block Storage : add block storage service protocol
> --------------------------------------------------
>
> Key: HDFS-11127
> URL: https://issues.apache.org/jira/browse/HDFS-11127
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs
> Reporter: Chen Liang
> Assignee: Chen Liang
> Attachments: HDFS-11127-HDFS-7240.001.patch
>
>
> This JIRA adds the block storage service protocol. This protocol is exposed to
> clients for volume operations including create, delete, info and list. Note
> that this protocol has nothing to do with actual data reads/writes on a
> particular volume. (Also note that "cblock" is the current term used to refer
> to the block storage system.)