[
https://issues.apache.org/jira/browse/HDFS-17473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
ZanderXu resolved HDFS-17473.
-----------------------------
Resolution: Fixed
> [FGL] Solutions for Quota
> -------------------------
>
> Key: HDFS-17473
> URL: https://issues.apache.org/jira/browse/HDFS-17473
> Project: Hadoop HDFS
> Issue Type: Task
> Reporter: ZanderXu
> Assignee: ZanderXu
> Priority: Major
>
> Concurrent operations on the directory tree can make quota updates and
> verification non-thread-safe.
> For example:
> # Suppose there is a directory _/a/b_ with quotas set on inodes _a_ and _b_.
> # There are some directories and files under {_}/a/b{_}, such as
> {_}/a/b/c/d1{_} and _/a/b/d/f1.txt_.
> # Suppose a create operation runs under _/a/b/c/d1_ while an addBlock
> operation runs on _/a/b/d/f1.txt_.
> # These two operations can be handled concurrently by the NameNode.
> # Both will update the quota on inode _a_ concurrently, since each operation
> only holds the read locks of inodes _a_ and _b_.
> # So quota-related updates and verification must be made thread-safe; the
> sketch below illustrates the lost-update race.
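>
> A minimal sketch of the race in plain Java (the {{SimpleQuota}} class is
> hypothetical, not actual HDFS code): two handler threads that each hold only
> the read lock perform an unsynchronized read-modify-write on the same usage
> counter, so concurrent increments can be lost.
> {code:java}
> import java.util.concurrent.locks.ReadWriteLock;
> import java.util.concurrent.locks.ReentrantReadWriteLock;
>
> // Hypothetical, simplified stand-in for a quota counter on inode /a.
> class SimpleQuota {
>     private final ReadWriteLock treeLock = new ReentrantReadWriteLock();
>     private long spaceConsumed = 0; // neither volatile nor atomic
>
>     // Both the create handler and the addBlock handler take this path.
>     void addSpace(long delta) {
>         treeLock.readLock().lock(); // read lock permits concurrent entry
>         try {
>             // Read-modify-write without mutual exclusion: two threads can
>             // read the same old value, and one increment is then lost.
>             spaceConsumed += delta;
>         } finally {
>             treeLock.readLock().unlock();
>         }
>     }
> }
> {code}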
>
> There are two solutions to make quota-related operations thread-safe.
> Solution one: Hold the write lock of the first inode with a quota set during
> resolvePath.
> * Directly hold the write lock of inode _a_ so that all operations involving
> the subtree _/a_ are handled safely (see the sketch after this list).
> * Due to the lower concurrency, the maximum improvement cannot be achieved.
> * But the implementation is simple and straightforward.
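>
> A minimal sketch of solution one, assuming a hypothetical per-inode lock API
> ({{FglINode}}, {{getTreeLock()}}, and {{isQuotaSet()}} are illustrative
> names, not the real FGL interfaces): path resolution takes the write lock on
> the first quota inode and read locks everywhere else.
> {code:java}
> import java.util.concurrent.locks.ReadWriteLock;
>
> // Hypothetical per-inode view for this sketch; the real FGL API may differ.
> interface FglINode {
>     boolean isQuotaSet();          // does this inode carry a quota?
>     ReadWriteLock getTreeLock();   // per-inode lock introduced by FGL
> }
>
> class PathLocker {
>     // Lock the inodes on a resolved path, root first. The first inode with
>     // a quota gets the WRITE lock, which serializes every quota update in
>     // its subtree; all other inodes only need the READ lock.
>     static void lockPath(FglINode[] inodesOnPath) {
>         boolean quotaSeen = false;
>         for (FglINode inode : inodesOnPath) {
>             if (!quotaSeen && inode.isQuotaSet()) {
>                 inode.getTreeLock().writeLock().lock();
>                 quotaSeen = true;
>             } else {
>                 inode.getTreeLock().readLock().lock();
>             }
>         }
>     }
> }
> {code}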
> Solution two: Lock all QuotaFeatures during quota verification or update.
> * Still hold only the read locks of inodes _a_ and _b_.
> * Lock all QuotaFeatures involved in the operation when validating or
> updating quotas.
> * The maximum improvement can be achieved.
> * But the implementation is a little more complex (see the sketch below):
> ** Add a lock to each QuotaFeature.
> ** Acquire the locks of all involved QuotaFeatures in a consistent order to
> avoid deadlock.
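>
> A minimal sketch of solution two (the {{LockedQuotaFeature}} class and its
> lock are hypothetical; the real QuotaFeature carries no such lock yet):
> every QuotaFeature involved in an operation is locked in a fixed
> root-to-leaf order before the deltas are applied, then released in reverse.
> {code:java}
> import java.util.ArrayDeque;
> import java.util.Deque;
> import java.util.List;
> import java.util.concurrent.locks.ReentrantLock;
>
> // Hypothetical per-feature lock added for fine-grained quota updates.
> class LockedQuotaFeature {
>     final ReentrantLock lock = new ReentrantLock();
>     long spaceConsumed;
>
>     void addSpace(long delta) { spaceConsumed += delta; }
> }
>
> class QuotaUpdater {
>     // quotaFeatures must be ordered root-to-leaf so that all threads
>     // acquire the locks in the same order and cannot deadlock.
>     static void updateSpace(List<LockedQuotaFeature> quotaFeatures,
>                             long delta) {
>         Deque<LockedQuotaFeature> locked = new ArrayDeque<>();
>         try {
>             for (LockedQuotaFeature f : quotaFeatures) {
>                 f.lock.lock();
>                 locked.push(f);
>             }
>             for (LockedQuotaFeature f : quotaFeatures) {
>                 f.addSpace(delta); // safe: every involved feature is locked
>             }
>         } finally {
>             while (!locked.isEmpty()) {
>                 locked.pop().lock.unlock(); // release in reverse order
>             }
>         }
>     }
> }
> {code}
> With a fixed root-to-leaf order, the create under _/a/b/c/d1_ and the
> addBlock on _/a/b/d/f1.txt_ both lock _a_'s QuotaFeature before _b_'s, so
> neither can deadlock and no increment is lost.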