[
https://issues.apache.org/jira/browse/HDFS-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16922937#comment-16922937
]
Anu Engineer commented on HDFS-14703:
-------------------------------------
[~shv] We ([~arp], [~xyao], [~jojochuang], [~szetszwo]) were looking at the
patch, as well as the document, and came across some questions that we were
not able to answer. I have been tasked with asking these.
# The block partition - We understand that you are proposing the block
partitions be divided into GSets that match the inode partition. What we could
not puzzle out was how to handle block reports. One suggestion we came up with
was that, in the initial parts of the work, we leave the block map as a single
monolith. It would be interesting to hear how you plan to partition the block
map, especially when block reports are involved.
# The Range Map lock and the Range Set locks - It is not very clear what the
semantics would be. If I hold the Range Map lock, does that mean I can operate
safely? What happens to the Range Set locks? Do I need to make sure that all
users of a Range Set have released their locks? And if I am holding the Range
Map lock, can no other thread enter? Is it possible that the Range Map lock
might have to wait a very long time for the Range Set locks to be released?
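On the first question, to make sure we are talking about the same structure, here is a minimal sketch of what we understood by "GSets matching the inode partition". Everything here (the {{PartitionedGSet}} name, hash-based partitioning, per-partition read-write locks) is our assumption, not code from the POC patch:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch only: a map split into fixed partitions, each with its
// own backing map and its own lock. All names are hypothetical.
public class PartitionedGSet<V> {
  private final int numPartitions;
  private final Map<Long, V>[] partitions;
  private final ReentrantReadWriteLock[] locks;

  @SuppressWarnings("unchecked")
  public PartitionedGSet(int numPartitions) {
    this.numPartitions = numPartitions;
    this.partitions = new Map[numPartitions];
    this.locks = new ReentrantReadWriteLock[numPartitions];
    for (int i = 0; i < numPartitions; i++) {
      partitions[i] = new HashMap<>();
      locks[i] = new ReentrantReadWriteLock();
    }
  }

  // Here we hash the id; the design presumably partitions by inode ranges.
  private int partitionOf(long id) {
    return (Long.hashCode(id) & 0x7fffffff) % numPartitions;
  }

  public void put(long id, V value) {
    int p = partitionOf(id);
    locks[p].writeLock().lock();
    try {
      partitions[p].put(id, value);
    } finally {
      locks[p].writeLock().unlock();
    }
  }

  public V get(long id) {
    int p = partitionOf(id);
    locks[p].readLock().lock();
    try {
      return partitions[p].get(id);
    } finally {
      locks[p].readLock().unlock();
    }
  }
}
```

If this is roughly right, then a single full block report would touch many partitions at once, which is where we could not see how the per-partition locks compose.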
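On the second question, the interpretation we were testing the document against is a standard two-level read-write scheme. Everything below (class and method names, fairness choice) is our guess at the intended semantics, not the patch's code:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative only: our reading of the two-level scheme. A per-range
// operation holds the top-level Range Map lock in read mode plus its own
// Range Set lock; a global operation takes the Range Map lock in write mode
// and therefore waits until every range holder releases its read lock.
public class TwoLevelLocking {
  private final ReentrantReadWriteLock rangeMapLock =
      new ReentrantReadWriteLock(true); // fair, so a global writer is not starved
  private final ReentrantReadWriteLock[] rangeSetLocks;

  public TwoLevelLocking(int ranges) {
    rangeSetLocks = new ReentrantReadWriteLock[ranges];
    for (int i = 0; i < ranges; i++) {
      rangeSetLocks[i] = new ReentrantReadWriteLock();
    }
  }

  /** Per-range write op: concurrent across different ranges. */
  public void withRangeWrite(int range, Runnable op) {
    rangeMapLock.readLock().lock();          // shared: many ranges in parallel
    rangeSetLocks[range].writeLock().lock(); // exclusive within this range
    try {
      op.run();
    } finally {
      rangeSetLocks[range].writeLock().unlock();
      rangeMapLock.readLock().unlock();
    }
  }

  /** Global op (e.g. repartitioning): excludes all per-range ops. */
  public void withGlobal(Runnable op) {
    rangeMapLock.writeLock().lock(); // blocks until all read holders finish
    try {
      op.run();
    } finally {
      rangeMapLock.writeLock().unlock();
    }
  }
}
```

Under this reading the answer to the last question would be yes: a writer on the Range Map lock waits for every outstanding Range Set holder to drain, which is exactly the latency we were worried about. Please correct us if the intended semantics are different.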
> NameNode Fine-Grained Locking via Metadata Partitioning
> -------------------------------------------------------
>
> Key: HDFS-14703
> URL: https://issues.apache.org/jira/browse/HDFS-14703
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs, namenode
> Reporter: Konstantin Shvachko
> Priority: Major
> Attachments: 001-partitioned-inodeMap-POC.tar.gz, NameNode
> Fine-Grained Locking.pdf
>
>
> We target to enable fine-grained locking by splitting the in-memory namespace
> into multiple partitions each having a separate lock. Intended to improve
> performance of NameNode write operations.
--
This message was sent by Atlassian Jira
(v8.3.2#803003)