[jira] [Commented] (HDFS-13380) RBF: mv/rm fail after the directory exceeded the quota limit
[ https://issues.apache.org/jira/browse/HDFS-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16430759#comment-16430759 ]

Weiwei Wu commented on HDFS-13380:
----------------------------------

I have tested this patch in my cluster; the rm and mv operations work fine. LGTM.

> RBF: mv/rm fail after the directory exceeded the quota limit
> ------------------------------------------------------------
>
>                 Key: HDFS-13380
>                 URL: https://issues.apache.org/jira/browse/HDFS-13380
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Weiwei Wu
>            Assignee: Yiqun Lin
>            Priority: Major
>         Attachments: HDFS-13380.001.patch, HDFS-13380.002.patch
>
> mv/rm always fails when I try it on a directory that has exceeded the quota limit.
> {code:java}
> [hadp@hadoop]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /ns10t ns10->/ns10t hadp hadp rwxr-xr-x [NsQuota: 1200/1201, SsQuota: -/-]
> [hadp@hadoop]$ hdfs dfs -rm hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: Failed to move to trash: hdfs://ns-fed/ns10t/ns1mountpoint/aa.99: The NameSpace quota (directories and files) is exceeded: quota=1200 file count=1201
> [hadp@hadoop]$ hdfs dfs -rm -skipTrash hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: The NameSpace quota (directories and files) is exceeded: quota=1200 file count=1201
> {code}
> I think we should add a parameter to the method *getLocationsForPath* to determine whether quota verification needs to be performed for the operation, for example for the mv source directory and the rm target directory.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13380) RBF: mv/rm fail after the directory exceeded the quota limit
[ https://issues.apache.org/jira/browse/HDFS-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429961#comment-16429961 ]

Weiwei Wu commented on HDFS-13380:
----------------------------------

Thanks for the patch, [~linyiqun]. The patch LGTM; I will test it in my cluster later today.
[jira] [Created] (HDFS-13380) RBF: mv/rm fail after the directory exceeded the quota limit
Weiwei Wu created HDFS-13380:
--------------------------------

             Summary: RBF: mv/rm fail after the directory exceeded the quota limit
                 Key: HDFS-13380
                 URL: https://issues.apache.org/jira/browse/HDFS-13380
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Weiwei Wu

mv/rm always fails when I try it on a directory that has exceeded the quota limit.

I think we should add a parameter to the method *getLocationsForPath* to determine whether quota verification needs to be performed for the operation, for example for the mv source directory and the rm target directory.
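The getLocationsForPath change proposed above can be sketched as follows. This is a hypothetical, self-contained simulation, not the actual HDFS-13380 patch; the class, the quota bookkeeping, and the overload signature are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposal: give getLocationsForPath a flag so destructive
// operations (rm, mv source) can resolve a path without tripping the
// mount-point namespace quota check, while writes are still rejected.
public class QuotaAwareResolver {
    // mount path -> {nsQuota, nsUsed}; -1 quota means unlimited
    private final Map<String, long[]> quotaUsage = new HashMap<>();

    public void setQuota(String mountPath, long nsQuota, long nsUsed) {
        quotaUsage.put(mountPath, new long[] {nsQuota, nsUsed});
    }

    /** Existing behavior: always verify the quota. */
    public String getLocationsForPath(String path) {
        return getLocationsForPath(path, true);
    }

    /** Proposed overload: rm/mv callers pass needQuotaVerify=false. */
    public String getLocationsForPath(String path, boolean needQuotaVerify) {
        long[] q = quotaUsage.get(path);
        if (needQuotaVerify && q != null && q[0] >= 0 && q[1] > q[0]) {
            throw new IllegalStateException(
                "The NameSpace quota (directories and files) is exceeded:"
                + " quota=" + q[0] + " file count=" + q[1]);
        }
        return "resolved:" + path; // stand-in for the remote location
    }

    public static void main(String[] args) {
        QuotaAwareResolver r = new QuotaAwareResolver();
        r.setQuota("/ns10t", 1200, 1201); // over quota, as in the report
        boolean writeRejected = false;
        try {
            r.getLocationsForPath("/ns10t"); // write path: still rejected
        } catch (IllegalStateException e) {
            writeRejected = true;
        }
        System.out.println(writeRejected);
        System.out.println(r.getLocationsForPath("/ns10t", false));
    }
}
```

With this shape, rm and mv can skip verification while create/mkdir keep the old single-argument call, so existing write-path behavior is unchanged.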
[jira] [Commented] (HDFS-13248) RBF: namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394400#comment-16394400 ]

Weiwei Wu commented on HDFS-13248:
----------------------------------

Thanks for the patch, [~elgoiri]. Took a quick pass; the #3 design LGTM.

> RBF: namenode need to choose block location for the client
> ----------------------------------------------------------
>
>                 Key: HDFS-13248
>                 URL: https://issues.apache.org/jira/browse/HDFS-13248
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Weiwei Wu
>            Assignee: Íñigo Goiri
>            Priority: Major
>         Attachments: HDFS-13248.000.patch, clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg
>
> When executing a put operation via the Router, the NameNode chooses the block location for the Router, not for the real client. This affects the file's locality.
> I think that on both the NameNode and the Router we should add a new addBlock method, or add a parameter to the current addBlock method, to pass the real client information.
[jira] [Comment Edited] (HDFS-13248) RBF: namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394003#comment-16394003 ]

Weiwei Wu edited comment on HDFS-13248 at 3/10/18 5:58 AM:
-----------------------------------------------------------

[~elgoiri] Do you mean clientMachine's IP? Take a look at [^clientMachine-call-path.jpeg]. The clientMachine's IP is obtained from the Server's InetAddress addr property. I think we can get the client's IP from clientName, but I'm not sure how to decode clientName.

was (Author: wuweiwei):
Do you mean clientMachine's IP? Take a look at [^clientMachine-call-path.jpeg] The clientMachine's IP is get from Server property InetAddress addr. I think we can get client's IP from clientName, but I'm not sure how to decode clientname.
[jira] [Comment Edited] (HDFS-13248) RBF: namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394003#comment-16394003 ]

Weiwei Wu edited comment on HDFS-13248 at 3/10/18 3:58 AM:
-----------------------------------------------------------

Do you mean clientMachine's IP? Take a look at [^clientMachine-call-path.jpeg] The clientMachine's IP is get from Server property InetAddress addr. I think we can get client's IP from clientName, but I'm not sure how to decode clientname.

was (Author: wuweiwei):
Do you mean clientMachine's IP? Take a look at [^clientMachine-call-path.jpeg] The clientMachine's IP is get from Server property InetAddress addr. I think we can get client's IP from clintName, but I'm not sure how to decode clientname.
[jira] [Comment Edited] (HDFS-13248) RBF: namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394003#comment-16394003 ]

Weiwei Wu edited comment on HDFS-13248 at 3/10/18 3:58 AM:
-----------------------------------------------------------

Do you mean clientMachine's IP? Take a look at [^clientMachine-call-path.jpeg] The clientMachine's IP is get from Server property InetAddress addr. I think we can get client's IP from clintName, but I'm not sure how to decode clientname.

was (Author: wuweiwei):
Do you mean clientMachine's IP? The clientMachine's IP is get from Server property InetAddress addr. !clientMachine-call-path.jpeg!
[jira] [Commented] (HDFS-13248) RBF: namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394003#comment-16394003 ]

Weiwei Wu commented on HDFS-13248:
----------------------------------

Do you mean clientMachine's IP? The clientMachine's IP is get from Server property InetAddress addr. !clientMachine-call-path.jpeg!
[jira] [Updated] (HDFS-13248) RBF: namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Wu updated HDFS-13248:
-----------------------------
    Attachment: clientMachine-call-path.jpeg
[jira] [Commented] (HDFS-13254) RBF: Cannot mv/cp file cross namespace
[ https://issues.apache.org/jira/browse/HDFS-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393992#comment-16393992 ]

Weiwei Wu commented on HDFS-13254:
----------------------------------

[~ywskycn] Thank you for your detailed explanation. The two solutions you provided can indeed avoid this problem.

[~elgoiri], I want to support the following scenario: use RBF as a file system for users and provide the same API as the original HDFS. For example, give the Spark user a directory /spark with mount points:

/spark ==> 1->/spark
/spark/pathA ==> 2->/spark
/spark/pathB ==> 3->/spark

The user does not need to know which namespace each directory is mounted on; they only need to change the access path in the original program to hdfs://ns-fed/ and the program can run. There are several advantages to this:
1. When the underlying namespaces change, the user program does not need to be modified.
2. The user program does not need to be modified when it is migrated to another RBF cluster.

> RBF: Cannot mv/cp file cross namespace
> --------------------------------------
>
>                 Key: HDFS-13254
>                 URL: https://issues.apache.org/jira/browse/HDFS-13254
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Weiwei Wu
>            Priority: Major
>
> When I try to mv a file from one namespace to another, the client returns an error.
> Do we have any plan to support cp/mv of files across namespaces?
[jira] [Updated] (HDFS-13255) RBF: Fail when try to remove mount point paths
[ https://issues.apache.org/jira/browse/HDFS-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Wu updated HDFS-13255:
-----------------------------
    Description:
Deleting a ns-fed path that includes mount point paths raises an error. Each mount point path needs to be deleted independently.

Operation steps:
{code:java}
[hadp@root]$ hdfs dfsrouteradmin -ls
Mount Table Entries:
Source Destinations Owner Group Mode Quota/Usage
/rm-test-all/rm-test-ns10 ns10->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, SsQuota: -/-]
/rm-test-all/rm-test-ns2 ns1->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, SsQuota: -/-]
[hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
Found 2 items
-rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
-rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
[hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns2/
Found 2 items
-rw-r--r-- 3 hadp supergroup 101 2018-03-07 16:57 hdfs://ns-fed/rm-test-all/rm-test-ns2/NOTICE.txt
-rw-r--r-- 3 hadp supergroup 1366 2018-03-07 16:57 hdfs://ns-fed/rm-test-all/rm-test-ns2/README.txt
[hadp@root]$ hdfs dfs -rm -r hdfs://ns-fed/rm-test-all/
rm: Failed to move to trash: hdfs://ns-fed/rm-test-all. Consider using -skipTrash option
[hadp@root]$ hdfs dfs -rm -r -skipTrash hdfs://ns-fed/rm-test-all/
rm: `hdfs://ns-fed/rm-test-all': Input/output error
{code}
[jira] [Created] (HDFS-13255) RBF: Fail when try to remove mount point paths
Weiwei Wu created HDFS-13255:
--------------------------------

             Summary: RBF: Fail when try to remove mount point paths
                 Key: HDFS-13255
                 URL: https://issues.apache.org/jira/browse/HDFS-13255
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Weiwei Wu

Deleting a ns-fed path that includes mount point paths raises an error. Each mount point path needs to be deleted independently.
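The failure mode reported here can be illustrated with a minimal, hypothetical simulation (not actual Router code): a delete issued against a path that spans several mount points cannot be forwarded to a single namespace, while deleting each mount point path independently succeeds. Class and method names are assumptions for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MountDeleteDemo {
    // mount source -> namespace destination, as in the report
    static final Map<String, String> MOUNTS = new LinkedHashMap<>();
    static {
        MOUNTS.put("/rm-test-all/rm-test-ns10", "ns10->/rm-test");
        MOUNTS.put("/rm-test-all/rm-test-ns2", "ns1->/rm-test");
    }

    /** A delete can be forwarded only if the path maps into exactly one namespace. */
    static boolean delete(String path) {
        long spanned = MOUNTS.keySet().stream()
            .filter(src -> src.equals(path) || src.startsWith(path + "/"))
            .count();
        // spanning more than one mount point mirrors
        // "rm: `hdfs://ns-fed/rm-test-all': Input/output error"
        return spanned <= 1;
    }

    public static void main(String[] args) {
        System.out.println(delete("/rm-test-all"));               // spans ns10 and ns1
        System.out.println(delete("/rm-test-all/rm-test-ns10"));  // single namespace
        System.out.println(delete("/rm-test-all/rm-test-ns2"));   // single namespace
    }
}
```

This is why the workaround is to remove each mounted subdirectory first and the parent mount entries afterwards.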
[jira] [Created] (HDFS-13254) RBF: can not mv/cp file cross namespace
Weiwei Wu created HDFS-13254:
--------------------------------

             Summary: RBF: can not mv/cp file cross namespace
                 Key: HDFS-13254
                 URL: https://issues.apache.org/jira/browse/HDFS-13254
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Weiwei Wu

When I try to mv a file from one namespace to another, the client returns an error.

Do we have any plan to support cp/mv of files across namespaces?
[jira] [Commented] (HDFS-13248) RBF: namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16392747#comment-16392747 ]

Weiwei Wu commented on HDFS-13248:
----------------------------------

[~elgoiri] I have debugged trunk in my test cluster today; below is some debug information.

In [^debug-info-1.jpeg], the NameNode receives a create request; the clientMachine is obtained from the RPC layer, and it is the Router's IP. The clientMachine is saved into the file's FileUnderConstructionFeature and is used to decide the block location when the NameNode receives the addBlock request (in [^debug-info-2.jpeg]).
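The locality problem traced above can be sketched as a small self-contained simulation. This is not actual HDFS code: the addresses, the toy placement policy, and the proposed two-argument addBlock are illustrative assumptions about the fix this issue suggests (forwarding the real client machine instead of relying on the RPC peer address, which is the Router's).

```java
import java.util.Arrays;
import java.util.List;

public class LocalityDemo {
    static final List<String> DATANODES = Arrays.asList("10.0.0.5", "10.0.0.9");

    /** Current behavior: clientMachine is taken from the RPC peer address. */
    static String addBlock(String rpcPeerIp) {
        return chooseTarget(rpcPeerIp);
    }

    /** Proposed variant: the Router forwards the real client address explicitly.
     *  rpcPeerIp is kept for signature parity but no longer drives placement. */
    static String addBlock(String rpcPeerIp, String realClientMachine) {
        return chooseTarget(realClientMachine);
    }

    /** Toy placement policy: write locally if the writer is a DataNode. */
    static String chooseTarget(String clientMachine) {
        return DATANODES.contains(clientMachine) ? clientMachine : DATANODES.get(0);
    }

    public static void main(String[] args) {
        String routerIp = "10.0.0.1";   // hypothetical Router address
        String clientIp = "10.0.0.9";   // real client, co-located with a DataNode
        System.out.println(addBlock(routerIp));           // locality lost
        System.out.println(addBlock(routerIp, clientIp)); // local write restored
    }
}
```

The first call picks a remote DataNode because the NameNode only sees the Router; the second, carrying the real client, places the replica locally.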
[jira] [Updated] (HDFS-13248) RBF: namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Wu updated HDFS-13248:
-----------------------------
    Attachment: debug-info-2.jpeg
[jira] [Updated] (HDFS-13248) RBF: namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Wu updated HDFS-13248:
-----------------------------
    Attachment: debug-info-1.jpeg
[jira] [Updated] (HDFS-13248) RBF: namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Wu updated HDFS-13248:
-----------------------------
    Attachment: (was: 531520593576_.pic_hd.jpg)
[jira] [Updated] (HDFS-13248) RBF: namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Wu updated HDFS-13248:
-----------------------------
    Attachment: 531520593576_.pic_hd.jpg
[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16392430#comment-16392430 ]

Weiwei Wu commented on HDFS-13212:
----------------------------------

Added a default name service id "0" in TestMountTableResolver; patch uploaded.

> RBF: Fix router location cache issue
> ------------------------------------
>
>                 Key: HDFS-13212
>                 URL: https://issues.apache.org/jira/browse/HDFS-13212
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: federation, hdfs
>            Reporter: Weiwei Wu
>            Assignee: Weiwei Wu
>            Priority: Major
>         Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch, HDFS-13212-003.patch, HDFS-13212-004.patch, HDFS-13212-005.patch, HDFS-13212-006.patch, HDFS-13212-007.patch, HDFS-13212-008.patch
>
> The MountTableResolver refreshEntries function has a bug when adding a new mount table entry that already has a location cache: the old location cache is never invalidated until the mount point changes again.
> The location cache needs to be invalidated when adding mount table entries.
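A minimal sketch of the fix this issue describes, assuming a simplified resolver (the class, cache layout, and method names are illustrative, not the actual MountTableResolver or the HDFS-13212 patch): when a mount entry is (re)added during a refresh, cached resolutions under that mount point must be dropped, or lookups keep returning the stale destination.

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;

public class MountCache {
    private final Map<String, String> mountTable = new TreeMap<>();
    private final Map<String, String> locationCache = new ConcurrentHashMap<>();

    void addEntry(String src, String dest) {
        mountTable.put(src, dest);
        invalidateLocationCache(src);   // the missing step the patch adds
    }

    /** Drop every cached resolution at or under the changed mount point. */
    private void invalidateLocationCache(String path) {
        locationCache.keySet().removeIf(
            p -> p.equals(path) || p.startsWith(path + "/"));
    }

    String resolve(String path) {
        return locationCache.computeIfAbsent(path, p -> {
            // longest-prefix match against the mount table
            String best = "/";
            for (String src : mountTable.keySet()) {
                if ((p.equals(src) || p.startsWith(src + "/"))
                        && src.length() > best.length()) {
                    best = src;
                }
            }
            String dest = mountTable.getOrDefault(best, "default->/");
            return dest + p.substring(best.length());
        });
    }

    public static void main(String[] args) {
        MountCache c = new MountCache();
        c.addEntry("/spark", "ns1->/spark");
        System.out.println(c.resolve("/spark/job1"));
        c.addEntry("/spark", "ns2->/spark");   // remount onto another namespace
        System.out.println(c.resolve("/spark/job1")); // fresh, not stale
    }
}
```

Without the invalidateLocationCache call in addEntry, the second resolve would keep returning the ns1 destination from the cache, which is exactly the bug reported.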
[jira] [Updated] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Wu updated HDFS-13212:
-----------------------------
    Attachment: HDFS-13212-008.patch
[jira] [Created] (HDFS-13248) RBF: namenode need to choose block location for the client
Weiwei Wu created HDFS-13248:
--------------------------------

             Summary: RBF: namenode need to choose block location for the client
                 Key: HDFS-13248
                 URL: https://issues.apache.org/jira/browse/HDFS-13248
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Weiwei Wu

When executing a put operation via the Router, the NameNode chooses the block location for the Router, not for the real client. This affects the file's locality.

I think that on both the NameNode and the Router we should add a new addBlock method, or add a parameter to the current addBlock method, to pass the real client information.
[jira] [Comment Edited] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390881#comment-16390881 ]

Weiwei Wu edited comment on HDFS-13212 at 3/8/18 7:54 AM:
----------------------------------------------------------

[~elgoiri], thanks for your review. Uploaded a new patch.

was (Author: wuweiwei):
[~elgoiri], Thanks for you review.Update a new patch.
[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390881#comment-16390881 ] Weiwei Wu commented on HDFS-13212: -- [~elgoiri], thanks for your review. Updated a new patch.
[jira] [Updated] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Wu updated HDFS-13212: - Attachment: HDFS-13212-007.patch
[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390698#comment-16390698 ] Weiwei Wu commented on HDFS-13212: -- Added an assert to check the default location and fixed the checkstyle issues. TestRouterQuota passes on my machine (code pulled from trunk today); let's see what Yetus says.
[jira] [Updated] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Wu updated HDFS-13212: - Attachment: HDFS-13212-006.patch
[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16389132#comment-16389132 ] Weiwei Wu commented on HDFS-13212: -- Step 1: visit a path (PATH-A) that is not in the mount table. In MountTableResolver#lookupLocation, if the input source path (PATH-A) does not match any mount point, the method returns a default location (DEFAULT-LOCATION) with a null sourcePath (code line 393 below) and adds it to the locationCache.
{code:java}
382    public PathLocation lookupLocation(final String path) {
383      PathLocation ret = null;
384      MountTable entry = findDeepest(path);
385      if (entry != null) {
386        ret = buildLocation(path, entry);
387      } else {
388        // Not found, use default location
389        RemoteLocation remoteLocation =
390            new RemoteLocation(defaultNameService, path);
391        List<RemoteLocation> locations =
392            Collections.singletonList(remoteLocation);
393        ret = new PathLocation(null, locations); // a location with null sourcePath
394      }
395      return ret;
396    }
{code}
Step 2: add a mount point for PATH-A. When the PATH-A mount point is added, the router needs to invalidate the previously cached default location; otherwise the new mount point will never take effect, because the locationCache will always return DEFAULT-LOCATION. invalidateLocationCache walks the whole locationCache looking for matching source paths, so it hits a null pointer exception at code line 241 below.
{code:java}
227    private void invalidateLocationCache(final String path) {
228      LOG.debug("Invalidating {} from {}", path, locationCache);
229      if (locationCache.size() == 0) {
230        return;
231      }
232
233      // Go through the entries and remove the ones from the path to invalidate
234      ConcurrentMap<String, PathLocation> map = locationCache.asMap();
235      Set<Entry<String, PathLocation>> entries = map.entrySet();
236      Iterator<Entry<String, PathLocation>> it = entries.iterator();
237      while (it.hasNext()) {
238        Entry<String, PathLocation> entry = it.next();
239        PathLocation loc = entry.getValue();
240        String src = loc.getSourcePath();
241        if (src.startsWith(path)) {
242          LOG.debug("Removing {}", src);
243          it.remove();
244        }
245      }
246
247      LOG.debug("Location cache after invalidation: {}", locationCache);
248    }
{code}
This case is covered by the test code below:
{code:java}
+// Add the default location to location cache
+mountTable.getDestinationForPath("/testlocationcache");
+
+// Add the entry again but mount to another ns
+Map<String, String> map3 = getMountTableEntry("3", "/testlocationcache");
+MountTable entry3 = MountTable.newInstance("/testlocationcache", map3);
+entries.add(entry3);
+mountTable.refreshEntries(entries);
+
+// Ensure location cache update correctly
+assertEquals("3->/testlocationcache/",
+    mountTable.getDestinationForPath("/testlocationcache").toString());
{code}
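The null pointer exception described in Step 2 comes from calling `startsWith` on the null sourcePath of a cached default location. A minimal sketch of the null-safe fix, using a stand-in `Loc` class instead of the real PathLocation (so this is an illustration of the idea, not the actual patch):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class InvalidateSketch {

    // Stand-in for PathLocation: sourcePath is null for a cached default location.
    static class Loc {
        final String sourcePath;
        Loc(String sourcePath) { this.sourcePath = sourcePath; }
    }

    // Null-safe variant of the invalidation loop: a default location has no
    // source path, so evict it instead of dereferencing null; it is safe to
    // drop because the resolver will simply recompute it on the next lookup.
    static void invalidate(Map<String, Loc> cache, String path) {
        Iterator<Map.Entry<String, Loc>> it = cache.entrySet().iterator();
        while (it.hasNext()) {
            String src = it.next().getValue().sourcePath;
            if (src == null || src.startsWith(path)) {
                it.remove();
            }
        }
    }

    public static void main(String[] args) {
        Map<String, Loc> cache = new HashMap<>();
        cache.put("/testlocationcache", new Loc(null)); // cached default location
        cache.put("/other", new Loc("/other"));
        invalidate(cache, "/testlocationcache");
        System.out.println(cache.size()); // only the untouched entry remains
    }
}
```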
[jira] [Updated] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Wu updated HDFS-13212: - Attachment: HDFS-13212-005.patch
[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16388933#comment-16388933 ] Weiwei Wu commented on HDFS-13212: -- Sorry for the FindBugs error; I just fixed it and added a new patch. MountTableResolver#lookupLocation adds a default location when no mount point matches; without this patch, that cached location will never be invalidated until the mount point changes again.
[jira] [Updated] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Wu updated HDFS-13212: - Attachment: HDFS-13212-004.patch
[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387468#comment-16387468 ] Weiwei Wu commented on HDFS-13212: -- MountTableResolver adds a locationCache entry with a null sourcePath, which causes a null pointer exception when invalidateLocationCache tries to remove it. Added some code to fix this bug.
[jira] [Updated] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Wu updated HDFS-13212: - Attachment: HDFS-13212-003.patch
[jira] [Comment Edited] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16383260#comment-16383260 ] Weiwei Wu edited comment on HDFS-13212 at 3/2/18 6:51 AM: -- [~elgoiri] [~linyiqun] Uploaded [^HDFS-13212-002.patch] with a unit test. Please review, thanks. :) was (Author: wuweiwei): [~elgoiri] [~linyiqun] Upload a new patch with unit test, please review, thanks.:)
[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16383260#comment-16383260 ] Weiwei Wu commented on HDFS-13212: -- [~elgoiri] [~linyiqun] Uploaded a new patch with a unit test. Please review, thanks. :)
[jira] [Updated] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Wu updated HDFS-13212: - Attachment: HDFS-13212-002.patch
[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16381651#comment-16381651 ] Weiwei Wu commented on HDFS-13212: -- [~maobaolong] The MountTableResolver periodically fetches the mount table and cleans the locationCache entries whose mount points have changed (add/remove/modify operations, after this patch). This ensures consistency between the router and the state store, with no need to periodically clean the entire locationCache.
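The refresh behavior described in this comment boils down to diffing the old and new mount tables and invalidating the cache only for the paths that changed. A hedged sketch of that diff, with plain string destinations standing in for the real MountTable entries (the class and method names here are illustrative, not the patch's actual code):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class RefreshSketch {

    // Compute the mount points whose location-cache entries must be
    // invalidated after a refresh: anything added, removed, or pointing
    // at a new destination.
    static Set<String> changedPaths(Map<String, String> oldTable,
                                    Map<String, String> newTable) {
        Set<String> changed = new HashSet<>();
        for (Map.Entry<String, String> e : newTable.entrySet()) {
            // Added entries (old destination is null) or modified entries.
            if (!e.getValue().equals(oldTable.get(e.getKey()))) {
                changed.add(e.getKey());
            }
        }
        for (String path : oldTable.keySet()) {
            if (!newTable.containsKey(path)) {
                changed.add(path); // removed entries
            }
        }
        return changed;
    }
}
```

Only the returned paths need their cached locations evicted, so unchanged mount points keep their warm cache across refreshes.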
[jira] [Updated] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] weiwei wu updated HDFS-13212: - Attachment: HDFS-13212-001.patch
[jira] [Updated] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] weiwei wu updated HDFS-13212: - Status: Patch Available (was: Open)
[jira] [Updated] (HDFS-13212) RBF: Fix router location cache issue
[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] weiwei wu updated HDFS-13212: - Attachment: (was: HDFS-12615-001.patch)
[jira] [Created] (HDFS-13212) RBF: Fix router location cache issue
weiwei wu created HDFS-13212: Summary: RBF: Fix router location cache issue Key: HDFS-13212 URL: https://issues.apache.org/jira/browse/HDFS-13212 Project: Hadoop HDFS Issue Type: Sub-task Components: federation, hdfs Reporter: weiwei wu Attachments: HDFS-12615-001.patch The MountTableResolver refreshEntries function has a bug when adding a new mount table entry that already has a location cache: the old location cache will never be invalidated until the mount point changes again. We need to invalidate the location cache when adding mount table entries.