[
https://issues.apache.org/jira/browse/HDFS-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309222#comment-16309222
]
Uma Maheswara Rao G commented on HDFS-12911:
--------------------------------------------
Hi [~chris.douglas], Thank you so much for your time and helpful review.
{quote}
• readLockNS/readUnlockNS are sensitive APIs to expose to an
external service. If the SPS dies, stalls, or loses connectivity to the NN
without releasing the lock: what happens? Instead, would it be possible to
break up analyseBlocksStorageMovementsAndAssignToDN into operations that- if
any fail due to concurrent modification- could abort and retry? Alternatively
(and as in the naive HDFS-12911.00.patch), the server side of the RPC could be
"heavier", by taking the lock and running this code server-side. This would
counter many of the advantages to running it externally, unfortunately.
{quote}
I should have explained the current APIs in Context a bit more. Not all of the
APIs in Context will be needed once we reduce the lock usage in the SPS code.
Rakesh has been working on a patch (HDFS-12982) to reduce locking and refine
the APIs for accessing the namesystem; he should upload it shortly. So, these
lock APIs will go away from Context.
I will probably wait for HDFS-12982 before refining this patch, since I don't
want to mix the lock refinement with this refactoring patch.
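To make the abort-and-retry alternative from the review concrete, here is a rough sketch of an optimistic scan that runs without the namesystem lock and retries on concurrent modification. All names here (the modification stamp, Scan, runWithRetry) are hypothetical illustrations, not actual HDFS APIs:

```java
// Hypothetical sketch: read a namespace modification stamp, do the scan work
// with no lock held, and abort-and-retry if the namespace changed meanwhile.
import java.util.concurrent.atomic.AtomicLong;

public class OptimisticScanSketch {
    // Stand-in for the namesystem's change counter.
    static final AtomicLong modificationStamp = new AtomicLong();

    interface Scan { void run(); }

    /** Runs the scan, retrying if a concurrent modification happened. */
    static int runWithRetry(Scan scan, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            long before = modificationStamp.get();
            scan.run();                       // no lock held here
            if (modificationStamp.get() == before) {
                return attempt;               // snapshot was consistent
            }
            // else: namespace changed during the scan; abort and retry
        }
        throw new IllegalStateException("gave up after " + maxAttempts + " attempts");
    }

    public static void main(String[] args) {
        // First scan races with a concurrent edit; the second attempt succeeds.
        final boolean[] firstRun = {true};
        int attempts = runWithRetry(() -> {
            if (firstRun[0]) { firstRun[0] = false; modificationStamp.incrementAndGet(); }
        }, 3);
        System.out.println("attempts=" + attempts); // attempts=2
    }
}
```

The upside is that a dead or stalled external SPS cannot leave the namesystem lock held; the cost is wasted work on retries.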
{quote}
• getSPSEngine may leak a heavier abstraction than is necessary.
As used, it appears to check for liveness of the remote service, which can be a
single call (i.e., instead of checking that both the SPS and namesystem are
running, create a call that checks both in the context object), or it's used to
get a reference to another component. The context could return those components
directly, rather than returning the SPS. It would be cleaner still if the
underlying action could be behind the context API, so the caller isn't managing
multiple abstractions.
{quote}
Good point. I agree with moving the SPS engine part completely behind the
Context and exposing only the required pieces outside. Actually, until today I
was not clear on how we wanted to establish client-to-external-SPS
communication, so I left the startup and queueing logic as-is in BlockManager,
which needs an SPS instance to operate. I can refine it to push that completely
behind the Context abstraction.
Thanks for the comment.
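As a rough illustration of the reviewer's suggestion, here is a minimal sketch of a Context that hides the SPS engine and combines the two liveness checks into one call. These interface and method names are hypothetical, not the actual HDFS-12911 interfaces:

```java
// Hypothetical Context sketch: callers ask one question instead of juggling
// separate SPS-engine and namesystem references.
public class ContextSketch {
    interface Context {
        /** Single liveness check covering both the namesystem and the SPS. */
        boolean isMovementAllowed();
        /** Scheduling goes through the context, not a leaked SPS reference. */
        void scheduleMove(long blockId, String targetStorage);
    }

    /** Toy implementation backed by two flags. */
    static class SimpleContext implements Context {
        private final boolean nnRunning;
        private final boolean spsRunning;
        SimpleContext(boolean nnRunning, boolean spsRunning) {
            this.nnRunning = nnRunning;
            this.spsRunning = spsRunning;
        }
        public boolean isMovementAllowed() { return nnRunning && spsRunning; }
        public void scheduleMove(long blockId, String targetStorage) {
            if (!isMovementAllowed()) throw new IllegalStateException("not running");
            // real code would queue a block-move task here
        }
    }

    public static void main(String[] args) {
        Context ctx = new SimpleContext(true, false);
        System.out.println(ctx.isMovementAllowed()); // false: SPS is down
    }
}
```

With this shape the caller never holds a reference to the SPS engine itself, so the abstraction leaks nothing heavier than the context.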
{quote}
• The BlockMoveTaskHandler and FileIDCollector abstract the
"write" (scheduling moves) and "read" (scan namespace) interfaces?
{quote}
Yes
{quote}
• addSPSHint has no callers in this patch? Is this future
functionality, or only here for symmetry with removeSPSHint?
{quote}
I thought this may be useful if an external SPS wants to implement some other
mechanism for tracking SPS calls instead of Xattrs. The internal SPS sets the
Xattrs in the block manager itself, so it is not using this API now.
{quote}
• Are these calls idempotent? Or would an external provider need
to resync with the NN on RPC timeouts?
{quote}
I think setting Xattrs is non-idempotent. The SPS can use the setXAttr APIs
exposed by the HDFS client, where the retry cache implementation tries to
handle retries.
Even if we go with alternative implementations, I feel an external provider
would need to resync with the NN on RPC timeouts, since not all calls may be
idempotent. The majority of the APIs would be just getters, though, so they
should be idempotent IMO. Once we finalize the APIs we should mark them
accordingly.
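To sketch the resync idea for the non-idempotent calls: after an RPC timeout the external provider cannot know whether its setXAttr took effect, so it re-reads the state and reapplies only if needed. The names and the in-memory map below are hypothetical stand-ins, not HDFS code:

```java
// Hypothetical resync sketch: a set operation made effectively idempotent by
// re-reading state before reapplying, so a retried RPC is a no-op.
import java.util.HashMap;
import java.util.Map;

public class ResyncSketch {
    // Stand-in for the NN-side xattr store.
    static final Map<String, String> xattrs = new HashMap<>();

    /** Re-reads first, then applies only if absent. */
    static boolean setSpsXattrWithResync(String path) {
        if ("sps".equals(xattrs.get(path))) {
            return false;            // already applied by an earlier, timed-out RPC
        }
        xattrs.put(path, "sps");     // (re)apply
        return true;
    }

    public static void main(String[] args) {
        System.out.println(setSpsXattrWithResync("/a")); // true: first apply
        System.out.println(setSpsXattrWithResync("/a")); // false: retry is a no-op
    }
}
```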
{quote}
• Since they're stubs in this patch, could we move the external
implementation to a separate JIRA? Just to keep the patch smaller.
{quote}
Sure. I included those classes in this first version to make the idea clearer
for reviewers. I will certainly separate them from this patch once the
approaches are agreed and clear. Since we seem mostly clear on the approaches
now, I will remove them in the next version.
> [SPS]: Fix review comments from discussions in HDFS-10285
> ---------------------------------------------------------
>
> Key: HDFS-12911
> URL: https://issues.apache.org/jira/browse/HDFS-12911
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode, namenode
> Reporter: Uma Maheswara Rao G
> Assignee: Rakesh R
> Attachments: HDFS-12911-HDFS-10285-01.patch,
> HDFS-12911-HDFS-10285-02.patch, HDFS-12911.00.patch
>
>
> This is the JIRA for tracking the possible improvements and issues discussed
> in the main JIRA.
> Comments to handle so far:
> Daryn:
> # The lock should not be kept while executing the placement policy.
> # While starting up the NN, SPS Xattr checks happen even if the feature is
> disabled. This could potentially impact startup speed.
> UMA:
> # I am adding one more possible improvement to reduce Xattr objects
> significantly.
> The SPS Xattr is a constant object. So, we can create one deduplicated Xattr
> object once, statically, and reuse the same object reference whenever the SPS
> Xattr needs to be added to an Inode. The additional bytes required for
> storing the SPS Xattr then shrink to a single object reference (i.e., 4 bytes
> on 32-bit), so the Xattr overhead should come down significantly IMO. Let's
> explore the feasibility of this option.
> The Xattr list Feature will not be specially created for SPS; that list would
> already have been created by SetStoragePolicy on the same directory. So, no
> extra Feature creation because of SPS alone.
> # Currently SPS puts Long id objects in a queue to track the Inodes on which
> SPS was called. Each Long is additionally created, and its size would be
> (obj ref + value) = (8 + 8) bytes [ignoring alignment for the time being].
> The possible improvement here is, instead of creating a new Long object, to
> keep the existing Inode object for tracking. The advantage is that the Inode
> object is already maintained in the NN, so no new object creation is needed;
> we just maintain one object reference. The above two points should
> significantly reduce the memory requirements of SPS: per SPS call, 8 bytes
> for called-inode tracking + 8 bytes for the Xattr reference.
> # Use LightWeightLinkedSet instead of LinkedList for the queue. This will
> avoid the unnecessary Node allocations inside LinkedList.
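The two memory points in the description above can be sketched as follows. This is a back-of-envelope illustration with hypothetical names (a String stands in for the XAttr object, ArrayDeque stands in for the queue), not actual NN code:

```java
// (1) One statically shared SPS xattr object, so each inode pays only a
//     reference, not a fresh xattr allocation.
// (2) Queue the inode objects the NN already holds, instead of boxing each
//     id into a new Long (obj ref + value = 8 + 8 bytes per entry).
import java.util.ArrayDeque;
import java.util.Deque;

public class SpsMemorySketch {
    /** The SPS xattr is constant, so one shared instance suffices. */
    static final String SHARED_SPS_XATTR = "user.sps"; // stand-in for an XAttr object

    static class Inode {
        final long id;
        String spsXattrRef;                 // one object ref, no new XAttr per inode
        Inode(long id) { this.id = id; }
        void markForSps() { this.spsXattrRef = SHARED_SPS_XATTR; }
    }

    public static void main(String[] args) {
        Inode a = new Inode(1), b = new Inode(2);
        a.markForSps();
        b.markForSps();
        // Both inodes share the exact same xattr object: no per-inode copy.
        System.out.println(a.spsXattrRef == b.spsXattrRef); // true

        // Track the existing inode objects rather than newly boxed Long ids.
        Deque<Inode> pending = new ArrayDeque<>();
        pending.add(a);
        pending.add(b);
        System.out.println(pending.poll().id); // 1
    }
}
```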
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)