dingshun3016 commented on PR #5170:
URL: https://github.com/apache/hadoop/pull/5170#issuecomment-1333710803
Based on the discussion so far, it seems there are several ways to solve this problem:
- remove the BLOCK_POOL level write lock in #addBlockPool
> but this raises concerns about replica consistency problems
- skip refresh() the first time ReplicaCachingGetSpaceUsed#init() is called
> but this leaves the value of dfsUsage at 0 until the next
refresh()
- use the du or df command instead for the first computation
> du is very expensive and slow
> df is inaccurate when the disk is shared with other servers
Reference: [HDFS-14313](https://issues.apache.org/jira/browse/HDFS-14313)
Since this case only happens when addBlockPool() is invoked and
CachingGetSpaceUsed#used < 0, I have an idea: is it possible to add a switch
so that we do not take the lock the first time ReplicaCachingGetSpaceUsed#init()
is called, but do take it at all other times?
Do you think this is feasible? @MingXiangLi
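To make the idea concrete, here is a minimal sketch of the "first-time switch": an atomic flag that lets the initial scan run without the block-pool write lock, while every later refresh still takes it. All names here (FirstInitSwitchSketch, scanReplicas, blockPoolLock) are hypothetical stand-ins for illustration, not the actual Hadoop classes or fields.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hedged sketch only: class and member names are invented for this example
// and do not match the real ReplicaCachingGetSpaceUsed implementation.
public class FirstInitSwitchSketch {
    // Stand-in for the BLOCK_POOL level lock discussed above.
    private final ReentrantReadWriteLock blockPoolLock = new ReentrantReadWriteLock();
    // The "switch": true until the first init() has run.
    private final AtomicBoolean firstInit = new AtomicBoolean(true);
    private long dfsUsage = -1;

    // Simulated expensive replica scan that computes used space.
    private long scanReplicas() {
        return 42L;
    }

    public long init() {
        if (firstInit.compareAndSet(true, false)) {
            // First call: skip the block-pool write lock so the initial
            // scan cannot block a concurrent addBlockPool().
            dfsUsage = scanReplicas();
        } else {
            // Later calls: take the write lock as usual to keep the
            // replica map and the cached usage value consistent.
            blockPoolLock.writeLock().lock();
            try {
                dfsUsage = scanReplicas();
            } finally {
                blockPoolLock.writeLock().unlock();
            }
        }
        return dfsUsage;
    }

    public static void main(String[] args) {
        FirstInitSwitchSketch s = new FirstInitSwitchSketch();
        System.out.println(s.init()); // first call, lock skipped
        System.out.println(s.init()); // subsequent call, lock taken
    }
}
```

The compareAndSet guarantees exactly one caller sees the first-time path even under concurrency; the trade-off, as noted above, is that the unlocked first scan could race with replica updates, which is the consistency concern raised for option one.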
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]