[jira] [Comment Edited] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-12-13 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720971#comment-16720971
 ] 

Yiqun Lin edited comment on HDFS-13443 at 12/14/18 6:10 AM:


{quote}After expireAfterWrite time cache entries are marked removed but 
actually not removed. Scheduler is added to forcefully remove entries after 
expiry time so that connection can be closed.
{quote}
Expired entries can be removed not only by manually calling the cleanup 
function, but they can also be evicted on each cache modification or on 
occasional cache accesses. Please see the javadoc of {{CacheBuilder}}:
{quote}If expireAfterWrite or expireAfterAccess is requested entries may be 
evicted on each cache modification, on occasional cache accesses, or on calls 
to Cache.cleanUp. Expired entries may be counted in Cache.size, but will never 
be visible to read or write operations.
{quote}
So this will be a delayed removal.
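To make the delayed removal concrete, here is a minimal, self-contained sketch against Guava's {{CacheBuilder}} (the quoted javadoc above is Guava's; the key/value strings below are arbitrary and not taken from the patch): an entry written with {{expireAfterWrite}} stops being visible to reads after the timeout, but it is only physically evicted on later cache activity or an explicit {{Cache.cleanUp()}} call.
{code}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

public class DelayedRemovalDemo {
  public static void main(String[] args) throws InterruptedException {
    // Entries expire 100 ms after they are written.
    Cache<String, String> cache = CacheBuilder.newBuilder()
        .expireAfterWrite(100, TimeUnit.MILLISECONDS)
        .build();
    cache.put("router1", "client1");

    Thread.sleep(200);

    // The expired entry is no longer visible to reads...
    System.out.println(cache.getIfPresent("router1")); // null
    // ...but it may still be counted until some cache activity evicts it.
    System.out.println(cache.size());                  // possibly still 1

    // An explicit cleanUp() forces eviction (and any removal listener) to run now.
    cache.cleanUp();
    System.out.println(cache.size());                  // 0
  }
}
{code}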


was (Author: linyiqun):
{quote}After expireAfterWrite time cache entries are marked removed but 
actually not removed. Scheduler is added to forcefully remove entries after 
expiry time so that connection can be closed.
{quote}
Expired entries can be removed not only by manually calling the cleanup 
function, but they can also be evicted on each cache modification or on 
occasional cache accesses. Please see the javadoc of {{CacheBuilder}}:
{quote}If expireAfterWrite or expireAfterAccess is requested entries may be 
evicted on each cache modification, on occasional cache accesses, or on calls 
to Cache.cleanUp. Expired entries may be counted in Cache.size, but will never 
be visible to read or write operations.
{quote}

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, 
> HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, 
> HDFS-13443-017.patch, HDFS-13443-HDFS-13891-001.patch, 
> HDFS-13443-branch-2.001.patch, HDFS-13443-branch-2.002.patch, 
> HDFS-13443.001.patch, HDFS-13443.002.patch, HDFS-13443.003.patch, 
> HDFS-13443.004.patch, HDFS-13443.005.patch, HDFS-13443.006.patch, 
> HDFS-13443.007.patch, HDFS-13443.008.patch, HDFS-13443.009.patch, 
> HDFS-13443.010.patch, HDFS-13443.011.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table cache.*
>  * *When there is a change in the mount table entries, the router admin server 
> can update its cache and ask the other routers to update their caches*. For 
> example, if there are three routers R1, R2, R3 in a cluster, then the add mount 
> table entry API, at the admin server side, will perform the following sequence 
> of actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry in the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user.
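As a rough illustration of the proposed admin-side sequence, a sketch in plain Java; every interface, type and method name below is an illustrative placeholder, not the actual MountTableManager API from the patch.
{code}
import java.util.List;

/** Sketch of the proposed admin-side flow; all names are placeholders. */
class AddMountTableEntryFlowSketch {

  /** Stand-in for the State Store write performed by R1. */
  interface StateStore {
    void addMountTableEntry(Object entry);
  }

  /** Stand-in for the proposed per-router refresh call. */
  interface MountTableRefresher {
    boolean refresh(String routerAdminAddress);
  }

  /** Called on R1 when a user submits an add mount table entry request. */
  static boolean addEntry(Object newEntry, StateStore stateStore,
      List<String> otherRouterAdminAddresses, MountTableRefresher refresher,
      Runnable refreshLocalCache) {
    // Steps 1-2: R1 persists the new entry in the State Store.
    stateStore.addMountTableEntry(newEntry);
    // Steps 3-4: R1 asks every other router (R2, R3, ...) to refresh its cache.
    boolean allRefreshed = true;
    for (String admin : otherRouterAdminAddresses) {
      if (!refresher.refresh(admin)) {
        allRefreshed = false;
      }
    }
    // Step 5: R1 refreshes its own cache directly.
    refreshLocalCache.run();
    // Step 6: the response goes back to the user (here, whether all refreshes succeeded).
    return allRefreshed;
  }
}
{code}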






[jira] [Comment Edited] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-12-13 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720910#comment-16720910
 ] 

Yiqun Lin edited comment on HDFS-13443 at 12/14/18 4:53 AM:


Thanks [~arshad.mohammad] for updating the patch! Almost looks good to me. 
Minor comments:
{code}
+    } else {
+      LOG.warn("Service {} not enabled: depenendent services not enabled.",
+          MountTableRefresherService.class.getSimpleName());
+    }
{code}
I prefer to log which dependent service is not enabled here, e.g. {{router admin 
service or state store service is not enabled}}.
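For illustration, something along these lines would do (a sketch only; the boolean checks are placeholders for whatever conditions the patch actually evaluates):
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class MountTableRefresherStartupCheck {
  private static final Logger LOG =
      LoggerFactory.getLogger(MountTableRefresherStartupCheck.class);

  /** Log exactly which dependent service keeps the refresher from starting. */
  static void warnIfDisabled(boolean adminServerEnabled, boolean stateStoreEnabled) {
    // "MountTableRefresherService" is a literal here only to keep the sketch
    // self-contained; the patch uses Class#getSimpleName().
    if (!adminServerEnabled) {
      LOG.warn("Service {} not enabled: router admin service is not enabled.",
          "MountTableRefresherService");
    } else if (!stateStoreEnabled) {
      LOG.warn("Service {} not enabled: state store service is not enabled.",
          "MountTableRefresherService");
    }
  }
}
{code}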

{code}
+    /*
+     * When cleanUp() method is called, expired RouterClient will be removed and
+     * closed.
+     */
+    clientCacheCleanerScheduler.scheduleWithFixedDelay(
+        () -> routerClientsCache.cleanUp(), routerClientMaxLiveTime,
+        routerClientMaxLiveTime, TimeUnit.MILLISECONDS);
{code}
I don't think we really need a scheduler thread to clean up expired router 
clients. This will be done inside the LoadingCache since we have set 
{{expireAfterWrite}} for it.
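If prompt closing of expired clients is the concern, one alternative is to register a {{removalListener}} so the client is closed whenever the cache evicts its entry; keep in mind the delayed-removal caveat discussed in the 6:10 AM edit above. This is a sketch, not the patch code: {{RouterClient}} is stubbed as any {{Closeable}}.
{code}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.TimeUnit;

class RouterClientCacheSketch {
  /** Stand-in for the real RouterClient; we only rely on it being closeable. */
  interface RouterClient extends Closeable {
  }

  static Cache<String, RouterClient> build(long maxLiveTimeMs) {
    RemovalListener<String, RouterClient> closeOnEviction = notification -> {
      try {
        // Close the connection when the entry expires or is invalidated.
        notification.getValue().close();
      } catch (IOException e) {
        // A real implementation would log this; ignored in the sketch.
      }
    };
    return CacheBuilder.newBuilder()
        .expireAfterWrite(maxLiveTimeMs, TimeUnit.MILLISECONDS)
        .removalListener(closeOnEviction)
        .build();
  }
}
{code}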
 


was (Author: linyiqun):
Thanks [~arshad.mohammad] for updating the patch! Almost looks good to me. 
minor comments:
{quote}+ } else {
 + LOG.warn("Service {} not enabled: depenendent services not enabled.",
 + MountTableRefresherService.class.getSimpleName());
 + }
{quote}
I prefer to log which dependent service is not enabled here, e.g. {{router admin 
service or state store service is not enabled}}.

{quote}
+/*
+ * When cleanUp() method is called, expired RouterClient will be removed 
and
+ * closed.
+ */
+clientCacheCleanerScheduler.scheduleWithFixedDelay(
+() -> routerClientsCache.cleanUp(), routerClientMaxLiveTime,
+routerClientMaxLiveTime, TimeUnit.MILLISECONDS);
{quote}
I don't think we really need a scheduler thread to clean up expired router 
clients. This will be done inside the LoadingCache since we have set 
{{expireAfterWrite}} for it.
 

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, 
> HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, 
> HDFS-13443-017.patch, HDFS-13443-HDFS-13891-001.patch, 
> HDFS-13443-branch-2.001.patch, HDFS-13443-branch-2.002.patch, 
> HDFS-13443.001.patch, HDFS-13443.002.patch, HDFS-13443.003.patch, 
> HDFS-13443.004.patch, HDFS-13443.005.patch, HDFS-13443.006.patch, 
> HDFS-13443.007.patch, HDFS-13443.008.patch, HDFS-13443.009.patch, 
> HDFS-13443.010.patch, HDFS-13443.011.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table cache.*
>  * *When there is a change in the mount table entries, the router admin server 
> can update its cache and ask the other routers to update their caches*. For 
> example, if there are three routers R1, R2, R3 in a cluster, then the add mount 
> table entry API, at the admin server side, will perform the following sequence 
> of actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry in the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user.






[jira] [Comment Edited] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-12-13 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720910#comment-16720910
 ] 

Yiqun Lin edited comment on HDFS-13443 at 12/14/18 4:51 AM:


Thanks [~arshad.mohammad] for updating the patch! Almost looks good to me. 
minor comments:
{quote}
+    } else {
+      LOG.warn("Service {} not enabled: depenendent services not enabled.",
+          MountTableRefresherService.class.getSimpleName());
+    }
{quote}
I prefer to log which dependent service is not enabled here, e.g. {{router admin 
service or state store service is not enabled}}.

{quote}
+    /*
+     * When cleanUp() method is called, expired RouterClient will be removed and
+     * closed.
+     */
+    clientCacheCleanerScheduler.scheduleWithFixedDelay(
+        () -> routerClientsCache.cleanUp(), routerClientMaxLiveTime,
+        routerClientMaxLiveTime, TimeUnit.MILLISECONDS);
{quote}
I don't think we really need a scheduler thread to clean up expired router 
clients. This will be done inside the LoadingCache since we have set 
{{expireAfterWrite}} for it.
 


was (Author: linyiqun):
Thanks [~arshad.mohammad] for updating the patch! Almost looks good to me. 
minor comments:

{quote}
+  } else {
+LOG.warn("Service {} not enabled: depenendent services not enabled.",
+MountTableRefresherService.class.getSimpleName());
+  }
{quote}
I prefer to log which dependent service is not enabled here, e.g. {{router admin 
service or state store service is not enabled}}.


> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, 
> HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, 
> HDFS-13443-017.patch, HDFS-13443-HDFS-13891-001.patch, 
> HDFS-13443-branch-2.001.patch, HDFS-13443-branch-2.002.patch, 
> HDFS-13443.001.patch, HDFS-13443.002.patch, HDFS-13443.003.patch, 
> HDFS-13443.004.patch, HDFS-13443.005.patch, HDFS-13443.006.patch, 
> HDFS-13443.007.patch, HDFS-13443.008.patch, HDFS-13443.009.patch, 
> HDFS-13443.010.patch, HDFS-13443.011.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table cache.*
>  * *When there is a change in the mount table entries, the router admin server 
> can update its cache and ask the other routers to update their caches*. For 
> example, if there are three routers R1, R2, R3 in a cluster, then the add mount 
> table entry API, at the admin server side, will perform the following sequence 
> of actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry in the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user.






[jira] [Comment Edited] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-12-07 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712698#comment-16712698
 ] 

Yiqun Lin edited comment on HDFS-13443 at 12/7/18 11:39 AM:


Revisiting this JIRA, I am thinking about one thing: the refresh API will only 
make sense for a shared State Store type, not for a local store type, right?
Please attach an updated patch rebased on the HDFS-13891 branch; the current 
patch is for trunk.


was (Author: linyiqun):
Revisiting this JIRA, I am thinking about one thing: the refresh API will only 
make sense for a shared State Store type, not for a local store type. Should we 
add a note for this?
Please attach an updated patch rebased on the HDFS-13891 branch; the current 
patch is for trunk.

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, 
> HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, 
> HDFS-13443-branch-2.001.patch, HDFS-13443-branch-2.002.patch, 
> HDFS-13443.001.patch, HDFS-13443.002.patch, HDFS-13443.003.patch, 
> HDFS-13443.004.patch, HDFS-13443.005.patch, HDFS-13443.006.patch, 
> HDFS-13443.007.patch, HDFS-13443.008.patch, HDFS-13443.009.patch, 
> HDFS-13443.010.patch, HDFS-13443.011.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table cache.*
>  * *When there is a change in the mount table entries, the router admin server 
> can update its cache and ask the other routers to update their caches*. For 
> example, if there are three routers R1, R2, R3 in a cluster, then the add mount 
> table entry API, at the admin server side, will perform the following sequence 
> of actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry in the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user.






[jira] [Comment Edited] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-12-07 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712571#comment-16712571
 ] 

Surendra Singh Lilhore edited comment on HDFS-13443 at 12/7/18 9:56 AM:


Thanks [~arshad.mohammad] for the patch.

Latest patch LGTM.

minor nits
{quote}+| dfs.federation.router.mount-table.cache.update | false | If true, 
Mount table cache is updated whenever a mount table entry is added, modified or 
removed. |
{quote}
    I feel the default value for this property should be true, and the 
description should say "*for all the routers*".
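For reference, turning the flag on programmatically would look roughly like the sketch below; the key name comes from the documentation row quoted above, and using {{true}} reflects the suggestion here rather than the patch's current default.
{code}
import org.apache.hadoop.conf.Configuration;

public class EnableMountTableCacheUpdate {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // If true, the mount table cache of all routers is refreshed right after a
    // mount table entry is added, modified or removed.
    conf.setBoolean("dfs.federation.router.mount-table.cache.update", true);
    System.out.println(conf.getBoolean(
        "dfs.federation.router.mount-table.cache.update", false));
  }
}
{code}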


was (Author: surendrasingh):
Thanks [~arshad.mohammad] for the patch.

Latest path LGTM.

minor nits
{quote}+| dfs.federation.router.mount-table.cache.update | false | If true, 
Mount table cache is updated whenever a mount table entry is added, modified or 
removed. |
{quote}
    I feel the default value for this property should be true, and the 
description should say "*for all the routers*".

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, 
> HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, 
> HDFS-13443-branch-2.001.patch, HDFS-13443-branch-2.002.patch, 
> HDFS-13443.001.patch, HDFS-13443.002.patch, HDFS-13443.003.patch, 
> HDFS-13443.004.patch, HDFS-13443.005.patch, HDFS-13443.006.patch, 
> HDFS-13443.007.patch, HDFS-13443.008.patch, HDFS-13443.009.patch, 
> HDFS-13443.010.patch, HDFS-13443.011.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table cache.*
>  * *When there is a change in the mount table entries, the router admin server 
> can update its cache and ask the other routers to update their caches*. For 
> example, if there are three routers R1, R2, R3 in a cluster, then the add mount 
> table entry API, at the admin server side, will perform the following sequence 
> of actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry in the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user.






[jira] [Comment Edited] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-05-04 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463660#comment-16463660
 ] 

Yiqun Lin edited comment on HDFS-13443 at 5/4/18 10:17 AM:
---

Some review comments from me (reviewed based on v007 patch):

Majors:
 * The refresh API won't make sense for all State Store types, right? For 
example, if we use a local file as the State Store, the refresh operation should 
only be done in the local Router, with no need to invoke remote Routers.
 * We need to cover more test cases:

 # A test case for the behavior of updating the cache timeout.
 # A test case for the behavior of RouterRpcClient connection expiration.
 # {{TestRouterAdminCLI#testRefreshMountTableCache}} can be made more complete, 
like {{TestRouterMountTableCacheRefresh}}.

Minors:
 * For more readability, can we rename {{LocalRouterMountTableRefresh}} to 
{{LocalRouterMountTableRefresher}} and {{RemoteRouterMountTableRefresh}} to 
{{RemoteRouterMountTableRefresher}}?

*MountTableRefreshService.java*
 * Line62: {{local}} should be updated to {{Local}}.
 * Line71: Can we add a comment for the mapping entry?
 * Line162: We may need to trigger 
{{router.getRouterStateManager().loadCache(true)}} before getting the RouterState 
records.
 * Line221: This error message seems too simple.
 * Line247: {{succesCount={},failureCount=}} should be updated to 
'succesCount={},failureCount={}' (the second placeholder is missing its braces). 
One more suggestion: why not add metrics instead of a counter in this class?

*MountTableRefreshThread.java*
 * Line51: It will be better to print the exception thrown here (see the sketch 
below, which also covers the Line247 placeholder fix).
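To illustrate the two logging points (the Line247 placeholder and printing the exception mentioned for Line51), a rough sketch; the class, method and variable names below are made up, not the patch's.
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class RefreshResultLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(RefreshResultLogging.class);

  /** Line247-style message with both SLF4J placeholders closed properly. */
  static void logResult(int successCount, int failureCount) {
    LOG.info("Mount table refresh completed: successCount={}, failureCount={}",
        successCount, failureCount);
  }

  /** Line51-style handling: pass the thrown exception to the log call. */
  static void refreshSafely(Runnable refresh, String adminAddress) {
    try {
      refresh.run();
    } catch (RuntimeException e) {
      LOG.error("Failed to refresh mount table cache on router {}", adminAddress, e);
    }
  }
}
{code}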


was (Author: linyiqun):
Some review comments from me (reviewed based on v007 patch):

Majors:
 * The refresh API won't make sense for all State Store types, right? For 
example, if we use a local file as the State Store, the refresh operation should 
only be done in the local Router, with no need to invoke remote Routers.
 * We need to cover more test cases :

 # The test case for the behavior of updating cache timeout.
 # The test case for the behavior of RouterRpcClient connection expiration.
 # {{TestRouterAdminCLI#testRefreshMountTableCache}} can be completed more, 
like {{TestRouterMountTableCacheRefresh}} did.

Minors:
 * For more readability, can we rename {{LocalRouterMountTableRefresh}} to 
{{LocalRouterMountTableRefresher}}, {{RemoteRouterMountTableRefresh}} to 
{{RemoteRouterMountTableRefresher}}.

*MountTableRefreshService.java*
 * Line62: {{local}} update to {{Local}}.
 * Line71: Can we add comment the mapping entry ?
 * Line162: We may need to trigger 
{{router.getRouterStateManager().loadCache(true)}} before getting RouterState 
records.
 * Line221: This error msg seems too simple.
 * Line247: {{succesCount={},failureCount=}} update to 
{{succesCount={},failureCount=}}. One more suggestion: why not add metrics 
instead of counter in this class?

*MountTableRefreshThread.java*
 * Line51: It will be better to print the exception thrown here.

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, 
> HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch, 
> HDFS-13443.006.patch, HDFS-13443.007.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table cache.*
>  * *When there is a change in the mount table entries, the router admin server 
> can update its cache and ask the other routers to update their caches*. For 
> example, if there are three routers R1, R2, R3 in a cluster, then the add mount 
> table entry API, at the admin server side, will perform the following sequence 
> of actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry in the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user.






[jira] [Comment Edited] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-05-04 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463660#comment-16463660
 ] 

Yiqun Lin edited comment on HDFS-13443 at 5/4/18 10:16 AM:
---

Some review comments from me (reviewed based on v007 patch):

Majors:
 * The refresh API won't make sense for all State Store types, right? For 
example, if we use a local file as the State Store, the refresh operation should 
only be done in the local Router, with no need to invoke remote Routers.
 * We need to cover more test cases:

 # A test case for the behavior of updating the cache timeout.
 # A test case for the behavior of RouterRpcClient connection expiration.
 # {{TestRouterAdminCLI#testRefreshMountTableCache}} can be made more complete, 
like {{TestRouterMountTableCacheRefresh}}.

Minors:
 * For more readability, can we rename {{LocalRouterMountTableRefresh}} to 
{{LocalRouterMountTableRefresher}} and {{RemoteRouterMountTableRefresh}} to 
{{RemoteRouterMountTableRefresher}}?

*MountTableRefreshService.java*
 * Line62: {{local}} should be updated to {{Local}}.
 * Line71: Can we add a comment for the mapping entry?
 * Line162: We may need to trigger 
{{router.getRouterStateManager().loadCache(true)}} before getting the RouterState 
records.
 * Line221: This error message seems too simple.
 * Line247: {{succesCount={},failureCount=}} should be updated to 
'succesCount={},failureCount={}' (the second placeholder is missing its braces). 
One more suggestion: why not add metrics instead of a counter in this class?

*MountTableRefreshThread.java*
 * Line51: It will be better to print the exception thrown here.


was (Author: linyiqun):
Some review comments from me (reviewed based on v007 patch):

Majors:
 * The refresh API won't make sense for all State Store types, right? For 
example, if we use a local file as the State Store, the refresh operation should 
only be done in the local Router, with no need to invoke remote Routers.
 * We need to cover more test cases :

 # The test case for the behavior of updating cache timeout.
 # The test case for the behavior of RouterRpcClient connection expiration.
 # {{TestRouterAdminCLI#testRefreshMountTableCache}} can be completed more, 
like {{TestRouterMountTableCacheRefresh}} did.

Minors:
 * For more readability, can we rename {{LocalRouterMountTableRefresh}} to 
{{LocalRouterMountTableRefresher}}, {{RemoteRouterMountTableRefresh}} to 
{{RemoteRouterMountTableRefresher}}.

*MountTableRefreshService.java*
 * Line62: {{local}} update to {{Local}}.
 * Line71: Can we add comment the mapping entry ?
 * Line91: Can we use {{NetUtils.getHostPortString}} to replace 
{{StateStoreUtils.getHostPortString}}?
 * Line162: We may need to trigger 
{{router.getRouterStateManager().loadCache(true)}} before getting RouterState 
records.
 * Line221: This error msg seems too simple.
 * Line247: {{succesCount={},failureCount=}} update to 
{{succesCount={},failureCount={}}}. One more suggestion: why not add metrics 
instead of counter in this class?

*MountTableRefreshThread.java*
 * Line51: It will be better to print the exception thrown here.

*RouterHeartbeatService.java*
Line98:  Can we use {{NetUtils.getHostPortString}} here?


> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, 
> HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch, 
> HDFS-13443.006.patch, HDFS-13443.007.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table cache.*
>  * *When there is a change in the mount table entries, the router admin server 
> can update its cache and ask the other routers to update their caches*. For 
> example, if there are three routers R1, R2, R3 in a cluster, then the add mount 
> table entry API, at the admin server side, will perform the following sequence 
> of actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry in the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user.





[jira] [Comment Edited] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-05-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462888#comment-16462888
 ] 

Íñigo Goiri edited comment on HDFS-13443 at 5/3/18 6:19 PM:


[^HDFS-13443.007.patch] LGTM and the failed unit tests are unrelated.
+1 from my side.

Reviews from others appreciated.



was (Author: elgoiri):
[^HDFS-13443.007.patch] LGTM and the failed unit tests are unrelated.
+1 from my side.

Reviews form others appreciated.


> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, 
> HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch, 
> HDFS-13443.006.patch, HDFS-13443.007.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table cache.*
>  * *When there is a change in the mount table entries, the router admin server 
> can update its cache and ask the other routers to update their caches*. For 
> example, if there are three routers R1, R2, R3 in a cluster, then the add mount 
> table entry API, at the admin server side, will perform the following sequence 
> of actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry in the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user.






[jira] [Comment Edited] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-04-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448477#comment-16448477
 ] 

Íñigo Goiri edited comment on HDFS-13443 at 4/24/18 7:07 PM:
-

Thanks [~arshad.mohammad] for  [^HDFS-13443.003.patch].
A couple comments:
# Most of my confusion with what's local and remote comes from the fact that 
the methods for local and remote have the same name; for readability, I would 
make one of them different. Not sure if the local or the remote one though.
# The new times in RBFConfigKeys should be defined as TimeUnit and we should 
use millis or seconds.
# 30 minutes for the connection seems a little high; 5 minutes (even 1) should 
be more than enough.
# I think I've seen something like getHostPortString in other places; is there 
a library we can use? Not sure StateStoreUtils is the right place either, as 
this is not the state store at all.
# Use {{new ArrayList<>()}} instead of {{new 
ArrayList()}}.
# For routerClientsCache, you can add a creator of connections so you don't 
need to do the check yourself (see the sketch after this list).
# It looks like we can have cases where we cannot get the client to the Router 
(MountTableRefreshService#168); we should return which Routers we have updated 
successfully and which ones have failed.
# Typo {{logRestult}}
# Why do we need to specifically use ZK in the unit test? I would leave it to 
whatever the default one is. We could potentially reuse some of the other unit 
tests that have a full subcluster with admin. Ideally some of the mount table 
related ones.
# Use Time.monotonicNow() instead of System.currentTimeMillis().
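Putting three of the minor points together (the diamond operator, a connection creator for routerClientsCache, and Time.monotonicNow()), a rough sketch; RouterClient, its constructor and the address string are placeholders, and the 5-minute expiry just reflects the suggestion above.
{code}
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.util.Time;

class RouterClientsCacheSketch {
  /** Placeholder for the real RouterClient; only the constructor shape is assumed. */
  static class RouterClient {
    RouterClient(String adminAddress) {
    }
  }

  static LoadingCache<String, RouterClient> build(long maxLiveTimeMs) {
    // A loader creates the connection on demand, so callers never have to do
    // the "is it cached yet?" check themselves (comment 6 above). Closing the
    // client on eviction is omitted here for brevity.
    return CacheBuilder.newBuilder()
        .expireAfterWrite(maxLiveTimeMs, TimeUnit.MILLISECONDS)
        .build(new CacheLoader<String, RouterClient>() {
          @Override
          public RouterClient load(String adminAddress) {
            return new RouterClient(adminAddress);
          }
        });
  }

  static void example() throws Exception {
    List<String> updatedRouters = new ArrayList<>();   // diamond operator (comment 5)
    long start = Time.monotonicNow();                  // not System.currentTimeMillis() (comment 10)
    LoadingCache<String, RouterClient> cache = build(TimeUnit.MINUTES.toMillis(5));
    cache.get("router2:8111");                         // created lazily by the loader
    updatedRouters.add("router2:8111");
    System.out.println("updated=" + updatedRouters
        + " in " + (Time.monotonicNow() - start) + " ms");
  }
}
{code}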


was (Author: elgoiri):
Thanks [~arshad.mohammad] for  [^HDFS-13443.003.patch].
A couple comments:
* Most of my confusion with what's local and remote comes from the fact that 
the methods for local and remote have the same name; for readability, I would 
make one of them different. Not sure if the local or the remote one though.
* The new times in RBFConfigKeys should be defined as TimeUnit and we should 
use millis or seconds.
* 30 minutes for the connection seems a little high, 5 minutes (even 1) should 
be more than enough.
* I think I've seen something like getHostPortString in other places, is there 
a library we can use? Not sure StateStoreUtils is the place either as this is 
not the state store at all.
* Use {{new ArrayList<>()}} instead of {{new 
ArrayList()}}.
* For routerClientsCache, you can add a creator of connections so you don't 
need to do the check yourself.
* It looks like we can have cases where we cannot get the client to the Router 
(MountTableRefreshService#168); we should return which Routers we have updated 
successfully and which ones have failed.
* Typo {{logRestult}}
* Why do we need to specifically use ZK in the unit test? I would leave it to 
whatever the default one is. We could potentially reuse some of the other unit 
tests that have a full subcluster with admin. Ideally some of the mount table 
related ones.
* Use Time.monotonicNow() instead of System.currentTimeMillis().

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, 
> HDFS-13443.003.patch, HDFS-13443.004.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table cache.*
>  * *When there is a change in the mount table entries, the router admin server 
> can update its cache and ask the other routers to update their caches*. For 
> example, if there are three routers R1, R2, R3 in a cluster, then the add mount 
> table entry API, at the admin server side, will perform the following sequence 
> of actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry in the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user.


