[jira] [Commented] (HDFS-14117) RBF: We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2019-01-08 Thread venkata ram kumar ch (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16737829#comment-16737829
 ] 

venkata ram kumar ch commented on HDFS-14117:
---------------------------------------------

Thanks [~elgoiri] for reviewing the patch. I will make the changes based on your 
comments and upload a new patch soon.

> RBF: We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> 
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14117.001.patch, HDFS-14117.002.patch
>
>
> When we delete files or dirs in HDFS, they are moved to the trash by default.
> But in the global path we can only mount one trash dir, /user. So we mount 
> the trash dir /user of the subcluster ns1 to the global path /user. Then we can 
> delete files or dirs of ns1, but when we delete the files or dirs of another 
> subcluster, such as hacluster, it fails.
> h1. Mount Table
> ||Global path||Target nameservice||Target path||Order||Read 
> only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
> |/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
> -/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
> |/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
> -/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
> |/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
> -/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|
> commands: 
> {noformat}
> 1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
> 18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd
> 2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /tmp/.
> 18/11/30 11:00:40 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r--   3 securedn supergroup   6311 2018-11-30 10:57 /tmp/mapred.cmd
> 3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm 
> /tmp/mapred.cmd
> 18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
> parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.
> 4./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
> 18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://router/test/hdfs.cmd' to trash at: 
> hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
> {noformat}
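
For context, a minimal sketch of where that trash rename points (the 
/user/<user>/.Trash/Current layout and Path.mergePaths are standard Hadoop; 
this is an illustration, not the TrashPolicyDefault source):

{code:java}
import org.apache.hadoop.fs.Path;

// When trash is enabled, "rm" renames the file so that it keeps its full
// original path under the per-user trash root. For /tmp/mapred.cmd the
// rename target therefore becomes
// /user/securedn/.Trash/Current/tmp/mapred.cmd.
public class TrashTargetSketch {
  public static void main(String[] args) {
    Path deleted = new Path("/tmp/mapred.cmd");  // mounted on hacluster1
    Path trashRoot = new Path("/user/securedn/.Trash/Current");
    Path target = Path.mergePaths(trashRoot, deleted);
    System.out.println(target);
    // Through the router, /user is a HASH mount across hacluster1 and
    // hacluster2, so this target may resolve to a different subcluster than
    // the source path, and a rename cannot cross namespaces.
  }
}
{code}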






[jira] [Updated] (HDFS-14117) RBF: We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2019-01-08 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-14117:

Attachment: HDFS-14117.002.patch

> RBF: We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> 
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14117.001.patch, HDFS-14117.002.patch
>






[jira] [Updated] (HDFS-14117) RBF: We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2019-01-08 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-14117:

Status: Patch Available  (was: Open)

> RBF: We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> 
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14117.001.patch
>






[jira] [Commented] (HDFS-14117) RBF: We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2019-01-08 Thread venkata ram kumar ch (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736922#comment-16736922
 ] 

venkata ram kumar ch commented on HDFS-14117:
---------------------------------------------

Attached the initial patch. Please take a look.

> RBF: We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> 
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14117.001.patch
>






[jira] [Updated] (HDFS-14117) RBF: We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2019-01-08 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-14117:

Attachment: HDFS-14117.001.patch

> RBF: We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> 
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14117.001.patch
>






[jira] [Assigned] (HDFS-14117) RBF: We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2019-01-08 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-14117:
---

Assignee: venkata ram kumar ch  (was: Surendra Singh Lilhore)

> RBF: We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> 
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Major
>  Labels: RBF
>






[jira] [Updated] (HDFS-13839) RBF: Add order information in dfsrouteradmin "-ls" command

2018-12-14 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13839:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> RBF: Add order information in dfsrouteradmin "-ls" command
> --
>
> Key: HDFS-13839
> URL: https://issues.apache.org/jira/browse/HDFS-13839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Reporter: Soumyapn
>Assignee: venkata ram kumar ch
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13839-001.patch
>
>
> Scenario:
> If we execute the hdfs dfsrouteradmin -ls <path> command, the order 
> information is not present.
> Example:
> ./hdfs dfsrouteradmin -ls /apps1
> With the above command, the Source, Destinations, Owner, Group, Mode, and 
> Quota/Usage information is displayed, but there is no "order" information in 
> the "-ls" output.
>  
> Expected:
> The order information should be displayed with the -ls command so that the 
> configured order is known.
>  
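
A hedged sketch of what printing the missing column could look like, assuming 
the RBF MountTable record class and its getSourcePath()/getDestinations()/ 
getDestOrder() accessors from hadoop-hdfs-rbf (names should be verified 
against the actual source):

{code:java}
import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;

public class MountEntryFormatter {
  // Formats one mount table entry with the order column appended; the order
  // (e.g. HASH, LOCAL, RANDOM) is the information missing from -ls today.
  static String formatEntry(MountTable entry) {
    return String.format("%-20s %-30s %-10s",
        entry.getSourcePath(),
        entry.getDestinations(),
        entry.getDestOrder());
  }
}
{code}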






[jira] [Assigned] (HDFS-14143) RBF : after clrQuota mount point is not allowing to create new files

2018-12-12 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-14143:
---

Assignee: venkata ram kumar ch

> RBF : after clrQuota mount point is not allowing to create new files 
> -
>
> Key: HDFS-14143
> URL: https://issues.apache.org/jira/browse/HDFS-14143
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: venkata ram kumar ch
>Priority: Major
>  Labels: RBF
>
> {noformat}
> bin> ./hdfs dfsrouteradmin -setQuota /src10 -nsQuota 3
> Successfully set quota for mount point /src10
> bin> ./hdfs dfsrouteradmin -clrQuota /src10
> Successfully clear quota for mount point /src10
> bin> ./hdfs dfs -put harsha /dest10/file1
> bin> ./hdfs dfs -put harsha /dest10/file2
> bin> ./hdfs dfs -put harsha /dest10/file3
> put: The NameSpace quota (directories and files) of directory /dest10 is 
> exceeded: quota=3 file count=4
> bin> ./hdfs dfsrouteradmin -ls /src10
> Mount Table Entries:
> Source    Destinations          Owner    Group     Mode         Quota/Usage
> /src10    hacluster->/dest10    hdfs     hadoop    rwxr-xr-x    [NsQuota: -/-, SsQuota: -/-]
> bin>
> {noformat}
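
It looks like clearing the quota on the mount entry does not reset the quota 
that setQuota already applied to the directory on the subcluster. As a hedged 
workaround sketch (not the fix for this issue), the directory quota can be 
reset directly against the subcluster with DistributedFileSystem#setQuota and 
HdfsConstants.QUOTA_RESET; the hdfs://hacluster URI below is an assumption:

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class ClearDirQuota {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Connect to the subcluster directly, bypassing the router, so the
    // quota is reset on the mount target itself.
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://hacluster"), conf)) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      // QUOTA_RESET removes both the namespace and the storage-space quota.
      dfs.setQuota(new Path("/dest10"),
          HdfsConstants.QUOTA_RESET, HdfsConstants.QUOTA_RESET);
    }
  }
}
{code}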






[jira] [Assigned] (HDFS-14143) RBF : after clrQuota mount point is not allowing to create new files

2018-12-12 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-14143:
---

Assignee: (was: venkata ram kumar ch)

> RBF : after clrQuota mount point is not allowing to create new files 
> -
>
> Key: HDFS-14143
> URL: https://issues.apache.org/jira/browse/HDFS-14143
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Priority: Major
>  Labels: RBF
>






[jira] [Updated] (HDFS-14117) RBF:We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2018-11-29 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-14117:

Description: 
When we delete files or dirs in HDFS, they are moved to the 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount the trash 
dir /user of the subcluster ns1 to the global path /user. Then we can delete 
files or dirs of ns1, but when we delete the files or dirs of another 
subcluster, such as hacluster, it fails.
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
|/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
|/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
-/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|

commands: 
{noformat}
1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd

2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.

3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://router/test/hdfs.cmd' to trash at: 
hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
{noformat}

  was:
When we delete files or dirs in HDFS, they are moved to the 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount the trash 
dir /user of the subcluster ns1 to the global path /user. Then we can delete 
files or dirs of ns1, but when we delete the files or dirs of another 
subcluster, such as hacluster, it fails.
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
|/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
|/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
-/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|

 

commands: 
{noformat}
1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd

2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.

3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://router/test/hdfs.cmd' to trash at: 
hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
{noformat}
 

 

 


> RBF:We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ---
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Major
>

[jira] [Updated] (HDFS-14117) RBF:We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2018-11-29 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-14117:

Labels: RBF  (was: )

> RBF:We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ---
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: Surendra Singh Lilhore
>Priority: Major
>  Labels: RBF
>






[jira] [Updated] (HDFS-14117) RBF:We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2018-11-29 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-14117:

Description: 
When we delete files or dirs in HDFS, they are moved to the 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount the trash 
dir /user of the subcluster ns1 to the global path /user. Then we can delete 
files or dirs of ns1, but when we delete the files or dirs of another 
subcluster, such as hacluster, it fails.
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
|/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
|/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
-/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|

commands: 
{noformat}
1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd

2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /tmp/.
18/11/30 11:00:40 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r--   3 securedn supergroup   6311 2018-11-30 10:57 /tmp/mapred.cmd

3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.

4./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://router/test/hdfs.cmd' to trash at: 
hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
{noformat}

  was:
When we delete files or dirs in HDFS, they are moved to the 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount the trash 
dir /user of the subcluster ns1 to the global path /user. Then we can delete 
files or dirs of ns1, but when we delete the files or dirs of another 
subcluster, such as hacluster, it fails.
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
|/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
|/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
-/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|

commands: 
{noformat}
1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd

2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.

3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://router/test/hdfs.cmd' to trash at: 
hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
{noformat}


> RBF:We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ---
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>   

[jira] [Assigned] (HDFS-14117) RBF:We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2018-11-29 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-14117:
---

Assignee: Surendra Singh Lilhore  (was: venkata ram kumar ch)

> RBF:We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ---
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: Surendra Singh Lilhore
>Priority: Major
>






[jira] [Updated] (HDFS-14117) RBF:We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2018-11-29 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-14117:

Description: 
When we delete files or dirs in HDFS, they are moved to the 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount the trash 
dir /user of the subcluster ns1 to the global path /user. Then we can delete 
files or dirs of ns1, but when we delete the files or dirs of another 
subcluster, such as hacluster, it fails.
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
|/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
|/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
-/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|

 

commands: 
{noformat}
1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd

2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.

3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://router/test/hdfs.cmd' to trash at: 
hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
{noformat}
 

 

 

  was:
When we delete files or dirs in HDFS, they are moved to the 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount the trash 
dir /user of the subcluster ns1 to the global path /user. Then we can delete 
files or dirs of ns1, but when we delete the files or dirs of another 
subcluster, such as hacluster, it fails.
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
|/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
|/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
-/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|

 

commands: 

1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd

2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.

3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://router/test/hdfs.cmd' to trash at: 
hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd

 

 

 


> RBF:We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ---
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Major
>

[jira] [Updated] (HDFS-14117) RBF:We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2018-11-29 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-14117:

Description: 
When we delete files or dirs in HDFS, they are moved to the 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount the trash 
dir /user of the subcluster ns1 to the global path /user. Then we can delete 
files or dirs of ns1, but when we delete the files or dirs of another 
subcluster, such as hacluster, it fails.
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
|/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
-/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
|/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
-/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|

 

commands: 

1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd

2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.

3./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://router/test/hdfs.cmd' to trash at: 
hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd

 

 

 

  was:
When we delete files or dirs in HDFS, they are moved to the 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount the trash 
dir /user of the subcluster ns1 to the global path /user. Then we can delete 
files or dirs of ns1, but when we delete the files or dirs of another 
subcluster, such as hacluster, it fails.


> RBF:We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ---
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Major
>

[jira] [Created] (HDFS-14117) RBF:We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2018-11-29 Thread venkata ram kumar ch (JIRA)
venkata ram kumar ch created HDFS-14117:
---

 Summary: RBF:We can only delete the files or dirs of one 
subcluster in a cluster with multiple subclusters when trash is enabled
 Key: HDFS-14117
 URL: https://issues.apache.org/jira/browse/HDFS-14117
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: venkata ram kumar ch


When we delete files or dirs in HDFS, they are moved to the 
trash by default.

But in the global path we can only mount one trash dir, /user. So we mount the trash 
dir /user of the subcluster ns1 to the global path /user. Then we can delete 
files or dirs of ns1, but when we delete the files or dirs of another 
subcluster, such as hacluster, it fails.






[jira] [Assigned] (HDFS-14117) RBF:We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2018-11-29 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-14117:
---

Assignee: venkata ram kumar ch

> RBF:We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ---
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Major
>






[jira] [Assigned] (HDFS-14014) Unable to change the state of DN to maintenance using dfs.hosts.maintenance

2018-10-22 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-14014:
---

Assignee: venkata ram kumar ch

> Unable to change the state of DN  to maintenance using dfs.hosts.maintenance
> 
>
> Key: HDFS-14014
> URL: https://issues.apache.org/jira/browse/HDFS-14014
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Major
>
>  
> hdfs-site.xml configurations:
> <configuration>
>   <property>
>     <name>dfs.namenode.maintenance.replication.min</name>
>     <value>1</value>
>   </property>
>   <property>
>     <name>dfs.namenode.hosts.provider.classname</name>
>     <value>org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager</value>
>   </property>
>   <property>
>     <name>dfs.hosts.maintenance</name>
>     <value>/opt/lifeline2/install/hadoop/namenode/etc/hadoop/maintenance</value>
>   </property>
> </configuration>
>  
> maintenance file :
> { "hostName": "vm1", "port": 50076, "adminState": "IN_MAINTENANCE", 
> "maintenanceExpireTimeInMS" : 1540204025000}
> Command : 
> /hadoop/namenode/bin # ./hdfs dfsadmin -refreshNodes
> 2018-10-22 17:45:54,286 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Refresh nodes failed for vm1:65110
> Refresh nodes failed for vm2:65110
> refreshNodes: 2 exceptions 
> [org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): (No 
> such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.<init>(FileInputStream.java:138)
>  at java.io.FileInputStream.<init>(FileInputStream.java:93)
>  at 
> org.apache.hadoop.hdfs.util.CombinedHostsFileReader.readFile(CombinedHostsFileReader.java:75)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager.refresh(CombinedHostFileManager.java:215)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager.refresh(CombinedHostFileManager.java:210)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.refreshHostsReader(DatanodeManager.java:1195)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.refreshNodes(DatanodeManager.java:1177)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.refreshNodes(FSNamesystem.java:4488)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.refreshNodes(NameNodeRpcServer.java:1270)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.refreshNodes(ClientNamenodeProtocolServerSideTranslatorPB.java:913)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> , org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): (No 
> such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.<init>(FileInputStream.java:138)
>  at java.io.FileInputStream.<init>(FileInputStream.java:93)
>  at 
> org.apache.hadoop.hdfs.util.CombinedHostsFileReader.readFile(CombinedHostsFileReader.java:75)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager.refresh(CombinedHostFileManager.java:215)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager.refresh(CombinedHostFileManager.java:210)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.refreshHostsReader(DatanodeManager.java:1195)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.refreshNodes(DatanodeManager.java:1177)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.refreshNodes(FSNamesystem.java:4488)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.refreshNodes(NameNodeRpcServer.java:1270)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.refreshNodes(ClientNamenodeProtocolServerSideTranslatorPB.java:913)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> 

[jira] [Created] (HDFS-14014) Unable to change the state of DN to maintenance using dfs.hosts.maintenance

2018-10-22 Thread venkata ram kumar ch (JIRA)
venkata ram kumar ch created HDFS-14014:
---

 Summary: Unable to change the state of DN  to maintenance using 
dfs.hosts.maintenance
 Key: HDFS-14014
 URL: https://issues.apache.org/jira/browse/HDFS-14014
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: venkata ram kumar ch


 

hdfs-site.xml configuration:

<property>
  <name>dfs.namenode.maintenance.replication.min</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.hosts.provider.classname</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager</value>
</property>
<property>
  <name>dfs.hosts.maintenance</name>
  <value>/opt/lifeline2/install/hadoop/namenode/etc/hadoop/maintenance</value>
</property>

 

maintenance file :

{ "hostName": "vm1", "port": 50076, "adminState": "IN_MAINTENANCE", 
"maintenanceExpireTimeInMS" : 1540204025000}

Command : 

/hadoop/namenode/bin # ./hdfs dfsadmin -refreshNodes
2018-10-22 17:45:54,286 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
Refresh nodes failed for vm1:65110
Refresh nodes failed for vm2:65110
refreshNodes: 2 exceptions 
[org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): (No such 
file or directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.<init>(FileInputStream.java:138)
 at java.io.FileInputStream.<init>(FileInputStream.java:93)
 at 
org.apache.hadoop.hdfs.util.CombinedHostsFileReader.readFile(CombinedHostsFileReader.java:75)
 at 
org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager.refresh(CombinedHostFileManager.java:215)
 at 
org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager.refresh(CombinedHostFileManager.java:210)
 at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.refreshHostsReader(DatanodeManager.java:1195)
 at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.refreshNodes(DatanodeManager.java:1177)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.refreshNodes(FSNamesystem.java:4488)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.refreshNodes(NameNodeRpcServer.java:1270)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.refreshNodes(ClientNamenodeProtocolServerSideTranslatorPB.java:913)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
, org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): (No 
such file or directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.<init>(FileInputStream.java:138)
 at java.io.FileInputStream.<init>(FileInputStream.java:93)
 at 
org.apache.hadoop.hdfs.util.CombinedHostsFileReader.readFile(CombinedHostsFileReader.java:75)
 at 
org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager.refresh(CombinedHostFileManager.java:215)
 at 
org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager.refresh(CombinedHostFileManager.java:210)
 at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.refreshHostsReader(DatanodeManager.java:1195)
 at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.refreshNodes(DatanodeManager.java:1177)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.refreshNodes(FSNamesystem.java:4488)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.refreshNodes(NameNodeRpcServer.java:1270)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.refreshNodes(ClientNamenodeProtocolServerSideTranslatorPB.java:913)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
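
A plausible reading of the trace above: CombinedHostsFileReader.readFile fails
with FileNotFoundException on both NameNodes, so the file configured under
dfs.hosts.maintenance is likely missing or unreadable at that path on the
NameNode hosts. A minimal check, assuming vm1 and vm2 are the NameNode hosts
and reusing the path from the hdfs-site.xml above; the JSON-array form of the
file is an assumption that matches recent 3.x releases (older releases read
one JSON object per line):

{noformat}
MAINT_FILE=/opt/lifeline2/install/hadoop/namenode/etc/hadoop/maintenance

# The file must exist and be readable locally on every NameNode host,
# because -refreshNodes makes each NameNode open it with FileInputStream.
for nn in vm1 vm2; do
  ssh "$nn" "test -r $MAINT_FILE && echo $nn: readable || echo $nn: MISSING"
done

# A well-formed combined hosts file in the JSON-array form:
cat > "$MAINT_FILE" <<'EOF'
[
  { "hostName": "vm1", "port": 50076, "adminState": "IN_MAINTENANCE",
    "maintenanceExpireTimeInMS": 1540204025000 }
]
EOF
{noformat}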

 


[jira] [Assigned] (HDFS-13939) [JDK10] Javadoc build fails on JDK 10 in hadoop-hdfs-project

2018-09-27 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-13939:
---

Assignee: venkata ram kumar ch

> [JDK10] Javadoc build fails on JDK 10 in hadoop-hdfs-project
> 
>
> Key: HDFS-13939
> URL: https://issues.apache.org/jira/browse/HDFS-13939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Takanobu Asanuma
>Assignee: venkata ram kumar ch
>Priority: Major
>
> There are many javadoc errors on JDK 10 in hadoop-hdfs-project. Let's fix 
> them per project or module.
>  * hadoop-hdfs-project/hadoop-hdfs: 212 errors
>  * hadoop-hdfs-project/hadoop-hdfs-client: 85 errors
>  * hadoop-hdfs-project/hadoop-hdfs-rbf: 34 errors
> We can confirm the errors with the command below.
> {noformat}
> $ mvn javadoc:javadoc --projects hadoop-hdfs-project/hadoop-hdfs-client
> {noformat}
> See also: HADOOP-15785
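
A quick way to reproduce the per-module counts listed above is to loop over
the three modules; this is a sketch, and the grep pattern assumes Maven
prefixes javadoc failures with [ERROR]:

{noformat}
for m in hadoop-hdfs hadoop-hdfs-client hadoop-hdfs-rbf; do
  echo -n "$m: "
  mvn -q javadoc:javadoc --projects hadoop-hdfs-project/$m 2>&1 \
    | grep -c '\[ERROR\]'
done
{noformat}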



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13939) [JDK10] Javadoc build fails on JDK 10 in hadoop-hdfs-project

2018-09-27 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-13939:
---

Assignee: (was: venkata ram kumar ch)

> [JDK10] Javadoc build fails on JDK 10 in hadoop-hdfs-project
> 
>
> Key: HDFS-13939
> URL: https://issues.apache.org/jira/browse/HDFS-13939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Takanobu Asanuma
>Priority: Major
>
> There are many javadoc errors on JDK 10 in hadoop-hdfs-project. Let's fix 
> them per project or module.
>  * hadoop-hdfs-project/hadoop-hdfs: 212 errors
>  * hadoop-hdfs-project/hadoop-hdfs-client: 85 errors
>  * hadoop-hdfs-project/hadoop-hdfs-rbf: 34 errors
> We can confirm the errors with the command below.
> {noformat}
> $ mvn javadoc:javadoc --projects hadoop-hdfs-project/hadoop-hdfs-client
> {noformat}
> See also: HADOOP-15785



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-09-27 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-13944:
---

Assignee: (was: venkata ram kumar ch)

> [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module
> 
>
> Key: HDFS-13944
> URL: https://issues.apache.org/jira/browse/HDFS-13944
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13944.000.patch, javadoc-rbf.log
>
>
> There are 34 errors in hadoop-hdfs-rbf module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13939) [JDK10] Javadoc build fails on JDK 10 in hadoop-hdfs-project

2018-09-27 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-13939:
---

Assignee: venkata ram kumar ch

> [JDK10] Javadoc build fails on JDK 10 in hadoop-hdfs-project
> 
>
> Key: HDFS-13939
> URL: https://issues.apache.org/jira/browse/HDFS-13939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Takanobu Asanuma
>Assignee: venkata ram kumar ch
>Priority: Major
>
> There are many javadoc errors on JDK 10 in hadoop-hdfs-project. Let's fix 
> them per project or module.
>  * hadoop-hdfs-project/hadoop-hdfs: 212 errors
>  * hadoop-hdfs-project/hadoop-hdfs-client: 85 errors
>  * hadoop-hdfs-project/hadoop-hdfs-rbf: 34 errors
> We can confirm the errors with the command below.
> {noformat}
> $ mvn javadoc:javadoc --projects hadoop-hdfs-project/hadoop-hdfs-client
> {noformat}
> See also: HADOOP-15785



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-09-27 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-13944:
---

Assignee: venkata ram kumar ch

> [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module
> 
>
> Key: HDFS-13944
> URL: https://issues.apache.org/jira/browse/HDFS-13944
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: venkata ram kumar ch
>Priority: Major
> Attachments: HDFS-13944.000.patch, javadoc-rbf.log
>
>
> There are 34 errors in hadoop-hdfs-rbf module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13839) RBF: Add order information in dfsrouteradmin "-ls" command

2018-09-18 Thread venkata ram kumar ch (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620110#comment-16620110
 ] 

venkata ram kumar ch commented on HDFS-13839:
-

Thanks [~elgoiri] for reviewing the patch.

I will upload the patch with unit test as soon as possible.

> RBF: Add order information in dfsrouteradmin "-ls" command
> --
>
> Key: HDFS-13839
> URL: https://issues.apache.org/jira/browse/HDFS-13839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Reporter: Soumyapn
>Assignee: venkata ram kumar ch
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13839-001.patch
>
>
> Scenario:
> If we execute the hdfs dfsrouteradmin -ls <path> command, order information 
> is not present.
> Example:
> ./hdfs dfsrouteradmin -ls /apps1
> With the above command, Source, Destinations, Owner, Group, Mode, and 
> Quota/Usage information is displayed, but no "order" information is shown 
> by the "ls" command.
>  
> Expected:
> Order information should be displayed with the -ls command so that the 
> configured order is visible.
>  
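
As an illustration of the request, the listing could carry an extra Order
column; the column name, spacing, and the destination shown (ns1->/apps1) are
assumptions for illustration, not the actual output of the attached patch:

{noformat}
$ ./hdfs dfsrouteradmin -ls /apps1
Mount Table Entries:
Source   Destinations   Owner     Group  Mode       Order  Quota/Usage
/apps1   ns1->/apps1    securedn  users  rwxr-xr-x  HASH   [NsQuota: -/-, SsQuota: -/-]
{noformat}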



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13817) RBF: create mount point with RANDOM policy and with 2 Nameservices doesn't work properly

2018-09-18 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-13817:
---

Assignee: venkata ram kumar ch

> RBF: create mount point with RANDOM policy and with 2 Nameservices doesn't 
> work properly 
> -
>
> Key: HDFS-13817
> URL: https://issues.apache.org/jira/browse/HDFS-13817
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Reporter: Harshakiran Reddy
>Assignee: venkata ram kumar ch
>Priority: Major
>  Labels: RBF
>
> {{Scenario:-}} 
> # Create a mount point with RANDOM policy and with 2 nameservices.
> # List the target mount path of the global path.
> Actual Output: 
> === 
> {{ls: `/apps5': No such file or directory}}
> Expected Output: 
> =
> {{if files are available, list them; if the directory is empty, display 
> nothing}}
> {noformat} 
> bin> ./hdfs dfsrouteradmin -add /apps5 hacluster,ns2 /tmp10 -order RANDOM 
> -owner securedn -group hadoop
> Successfully added mount point /apps5
> bin> ./hdfs dfs -ls /apps5
> ls: `/apps5': No such file or directory
> bin> ./hdfs dfs -ls /apps3
> Found 2 items
> drwxrwxrwx   - user group 0 2018-08-09 19:55 /apps3/apps1
> -rw-r--r--   3   - user group  4 2018-08-10 11:55 /apps3/ttt
>  {noformat}
> {{please refer to the mount information below}}
> {{/apps3 tagged with HASH policy}}
> {{/apps5 tagged with RANDOM policy}}
> {noformat}
> /bin> ./hdfs dfsrouteradmin -ls
> Mount Table Entries:
> SourceDestinations  Owner 
> Group Mode  Quota/Usage
> /apps3hacluster->/tmp3,ns2->/tmp4 securedn
>   users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> /apps5hacluster->/tmp5,ns2->/tmp5 securedn
>   users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> {noformat}
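
One hedged hypothesis for the failure: with a RANDOM mount spanning two
subclusters, listing /apps5 can fail if the target directory does not yet
exist on the subclusters. Creating the targets directly on each nameservice
(names taken from the mount table above) should make the mount point listable:

{noformat}
./hdfs dfs -mkdir -p hdfs://hacluster/tmp5
./hdfs dfs -mkdir -p hdfs://ns2/tmp5
./hdfs dfs -ls /apps5
{noformat}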



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13839) RBF: Add order information in dfsrouteradmin "-ls" command

2018-09-18 Thread venkata ram kumar ch (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619054#comment-16619054
 ] 

venkata ram kumar ch commented on HDFS-13839:
-

Uploaded the patch

> RBF: Add order information in dfsrouteradmin "-ls" command
> --
>
> Key: HDFS-13839
> URL: https://issues.apache.org/jira/browse/HDFS-13839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Reporter: Soumyapn
>Assignee: venkata ram kumar ch
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13839-001.patch
>
>
> Scenario:
> If we execute the hdfs dfsrouteradmin -ls <path> command, order information 
> is not present.
> Example:
> ./hdfs dfsrouteradmin -ls /apps1
> With the above command, Source, Destinations, Owner, Group, Mode, and 
> Quota/Usage information is displayed, but no "order" information is shown 
> by the "ls" command.
>  
> Expected:
> Order information should be displayed with the -ls command so that the 
> configured order is visible.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13839) RBF: Add order information in dfsrouteradmin "-ls" command

2018-09-18 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13839:

Attachment: HDFS-13839-001.patch

> RBF: Add order information in dfsrouteradmin "-ls" command
> --
>
> Key: HDFS-13839
> URL: https://issues.apache.org/jira/browse/HDFS-13839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Reporter: Soumyapn
>Assignee: venkata ram kumar ch
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13839-001.patch
>
>
> Scenario:
> If we execute the hdfs dfsrouteradmin -ls <path> command, order information 
> is not present.
> Example:
> ./hdfs dfsrouteradmin -ls /apps1
> With the above command, Source, Destinations, Owner, Group, Mode, and 
> Quota/Usage information is displayed, but no "order" information is shown 
> by the "ls" command.
>  
> Expected:
> Order information should be displayed with the -ls command so that the 
> configured order is visible.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13839) RBF: Add order information in dfsrouteradmin "-ls" command

2018-09-18 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13839:

Status: Patch Available  (was: Open)

> RBF: Add order information in dfsrouteradmin "-ls" command
> --
>
> Key: HDFS-13839
> URL: https://issues.apache.org/jira/browse/HDFS-13839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Reporter: Soumyapn
>Assignee: venkata ram kumar ch
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13839-001.patch
>
>
> Scenario:
> If we execute the hdfs dfsrouteradmin -ls <path> command, order information 
> is not present.
> Example:
> ./hdfs dfsrouteradmin -ls /apps1
> With the above command, Source, Destinations, Owner, Group, Mode, and 
> Quota/Usage information is displayed, but no "order" information is shown 
> by the "ls" command.
>  
> Expected:
> Order information should be displayed with the -ls command so that the 
> configured order is visible.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HDFS-13835) RBF: Unable to add files after changing the order

2018-09-17 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13835 stopped by venkata ram kumar ch.
---
> RBF: Unable to add files after changing the order
> -
>
> Key: HDFS-13835
> URL: https://issues.apache.org/jira/browse/HDFS-13835
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Critical
>
> When a mount point is pointing to multiple subclusters, the order is HASH 
> by default.
> But after changing the order from HASH to RANDOM, I am unable to add files 
> to that mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13896) RBF Web UI not displaying clearly which target path is pointing to which name service in mount table

2018-09-05 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13896:

Description: 
Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
 18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command: ./hdfs dfsrouteradmin -add /apps hacluster2 /opt1
 18/09/05 12:33:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command: /HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt2
 18/09/05 14:21:12 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
 18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Mount Table Entries:
 Source Destinations Owner Group Mode Quota/Usage
 /apps hacluster1->/opt,hacluster2->/opt1,hacluster1->/opt2 securedn users 
rwxr-xr-x [NsQuota: -/-, SsQuota: -/-]

WebUI : Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/apps|hacluster1,hacluster2|/opt,/opt1,/opt2|HASH| 
|securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/09/05 
16:50:54|2018/09/05 15:02:25|

  was:
Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
 18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command: ./hdfs dfsrouteradmin -add /apps hacluster2 /opt1
 18/09/05 12:33:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command: /HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt2
18/09/05 14:21:12 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Successfully added mount point /apps

Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
 18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Mount Table Entries:
Source Destinations Owner Group Mode Quota/Usage
/apps hacluster1->/opt,hacluster2->/opt1,hacluster1->/opt2 securedn users 
rwxr-xr-x [NsQuota: -/-, SsQuota: -/-]

WebUI : 
h1.  
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/apps|hacluster1,hacluster2|/opt,/opt1,/opt2|HASH| 
|securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/09/05 
16:50:54|2018/09/05 15:02:25|

> RBF Web UI not displaying clearly which target path is pointing to which name 
> service in mount table 
> -
>
> Key: HDFS-13896
> URL: https://issues.apache.org/jira/browse/HDFS-13896
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Minor
>
> Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
>  18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
>  Successfully added mount point /apps
> Command: ./hdfs dfsrouteradmin -add /apps hacluster2 /opt1
>  18/09/05 12:33:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
>  Successfully added mount point /apps
> Command: /HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt2
>  18/09/05 14:21:12 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
>  Successfully added mount point /apps
> Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
>  18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
>  Mount Table Entries:
>  Source Destinations Owner Group Mode Quota/Usage
>  /apps hacluster1->/opt,hacluster2->/opt1,hacluster1->/opt2 securedn users 
> rwxr-xr-x [NsQuota: -/-, SsQuota: -/-]
> WebUI : Mount Table
> ||Global path||Target nameservice||Target 

[jira] [Updated] (HDFS-13896) RBF Web UI not displaying clearly which target path is pointing to which name service in mount table

2018-09-05 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13896:

Description: 
Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
 18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command: ./hdfs dfsrouteradmin -add /apps hacluster2 /opt1
 18/09/05 12:33:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command: /HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt2
18/09/05 14:21:12 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Successfully added mount point /apps

Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
 18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Mount Table Entries:
Source Destinations Owner Group Mode Quota/Usage
/apps hacluster1->/opt,hacluster2->/opt1,hacluster1->/opt2 securedn users 
rwxr-xr-x [NsQuota: -/-, SsQuota: -/-]

WebUI :
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/apps|hacluster1,hacluster2|/opt,/opt1,/opt2|HASH| 
|securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/09/05 
16:50:54|2018/09/05 15:02:25|

  was:
Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
 18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command: ./hdfs dfsrouteradmin -add /apps hacluster2 /opt1
18/09/05 12:33:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Successfully added mount point /apps

Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
 18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Mount Table Entries:
 Source Destinations Owner Group Mode Quota/Usage
 /apps hacluster1->/opt,hacluster2->/opt1 securedn users rwxr-xr-x [NsQuota: 
-/-, SsQuota: -/-]

WebUI : 
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/apps|hacluster1,hacluster2|/opt,/opt1|HASH| 
|securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/09/05 
15:02:54|2018/09/05 15:02:25|

> RBF Web UI not displaying clearly which target path is pointing to which name 
> service in mount table 
> -
>
> Key: HDFS-13896
> URL: https://issues.apache.org/jira/browse/HDFS-13896
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Minor
>
> Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
>  18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
>  Successfully added mount point /apps
> Command: ./hdfs dfsrouteradmin -add /apps hacluster2 /opt1
>  18/09/05 12:33:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
>  Successfully added mount point /apps
> Command: /HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt2
> 18/09/05 14:21:12 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Successfully added mount point /apps
> Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
>  18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /apps hacluster1->/opt,hacluster2->/opt1,hacluster1->/opt2 securedn users 
> rwxr-xr-x [NsQuota: -/-, SsQuota: -/-]
> WebUI :
> h1. Mount Table
> ||Global path||Target nameservice||Target path||Order||Read 
> only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
> |/apps|hacluster1,hacluster2|/opt,/opt1,/opt2|HASH| 
> |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/09/05 
> 16:50:54|2018/09/05 15:02:25|



--

[jira] [Updated] (HDFS-13896) RBF Web UI not displaying clearly which target path is pointing to which name service in mount table

2018-09-05 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13896:

Description: 
Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
 18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Successfully added mount point /apps

Command: ./hdfs dfsrouteradmin -add /apps hacluster2 /opt1
18/09/05 12:33:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Successfully added mount point /apps

Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
 18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
 Mount Table Entries:
 Source Destinations Owner Group Mode Quota/Usage
 /apps hacluster1->/opt,hacluster2->/opt1 securedn users rwxr-xr-x [NsQuota: 
-/-, SsQuota: -/-]

WebUI : 
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/apps|hacluster1,hacluster2|/opt,/opt1|HASH| 
|securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/09/05 
15:02:54|2018/09/05 15:02:25|

  was:
Commands :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -rm /apps
18/09/05 12:31:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Successfully removed mount point /apps

Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
INFO: Watching file:/opt/hadoopclient/HDFS/hadoop/etc/hadoop/log4j.properties 
for changes with interval : 6
Successfully added mount point /apps

Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Mount Table Entries:
Source Destinations Owner Group Mode Quota/Usage
/apps hacluster1->/opt,hacluster2->/opt1 securedn users rwxr-xr-x [NsQuota: 
-/-, SsQuota: -/-]

WebUI : 
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/apps|hacluster1,hacluster2|/opt,/opt1|HASH| 
|securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/09/05 
15:02:54|2018/09/05 15:02:25|

> RBF Web UI not displaying clearly which target path is pointing to which name 
> service in mount table 
> -
>
> Key: HDFS-13896
> URL: https://issues.apache.org/jira/browse/HDFS-13896
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Minor
>
> Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
>  18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
>  Successfully added mount point /apps
> Command: ./hdfs dfsrouteradmin -add /apps hacluster2 /opt1
> 18/09/05 12:33:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Successfully added mount point /apps
> Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
>  18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
>  Mount Table Entries:
>  Source Destinations Owner Group Mode Quota/Usage
>  /apps hacluster1->/opt,hacluster2->/opt1 securedn users rwxr-xr-x [NsQuota: 
> -/-, SsQuota: -/-]
> WebUI : 
> h1. Mount Table
> ||Global path||Target nameservice||Target path||Order||Read 
> only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
> |/apps|hacluster1,hacluster2|/opt,/opt1|HASH| 
> |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/09/05 
> 15:02:54|2018/09/05 15:02:25|



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13896) RBF Web UI not displaying clearly which target path is pointing to which name service in mount table

2018-09-05 Thread venkata ram kumar ch (JIRA)
venkata ram kumar ch created HDFS-13896:
---

 Summary: RBF Web UI not displaying clearly which target path is 
pointing to which name service in mount table 
 Key: HDFS-13896
 URL: https://issues.apache.org/jira/browse/HDFS-13896
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: venkata ram kumar ch
Assignee: venkata ram kumar ch


Commands :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -rm /apps
18/09/05 12:31:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Successfully removed mount point /apps

Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
INFO: Watching file:/opt/hadoopclient/HDFS/hadoop/etc/hadoop/log4j.properties 
for changes with interval : 6
Successfully added mount point /apps

Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Mount Table Entries:
Source Destinations Owner Group Mode Quota/Usage
/apps hacluster1->/opt,hacluster2->/opt1 securedn users rwxr-xr-x [NsQuota: 
-/-, SsQuota: -/-]

WebUI : 
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read 
only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/apps|hacluster1,hacluster2|/opt,/opt1|HASH| 
|securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/09/05 
15:02:54|2018/09/05 15:02:25|
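
A possible rendering fix, sketched in the same JIRA table markup: keep each
nameservice->path pair together in one Destinations column, exactly as the
dfsrouteradmin -ls output above already does (the column set shown here is
trimmed for illustration):

{noformat}
||Global path||Destinations||Order||Owner||Group||Permission||
|/apps|hacluster1->/opt, hacluster2->/opt1|HASH|securedn|users|rwxr-xr-x|
{noformat}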


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13839) RBF: Add order information in dfsrouteradmin "-ls" command

2018-08-23 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-13839:
---

Assignee: venkata ram kumar ch

> RBF: Add order information in dfsrouteradmin "-ls" command
> --
>
> Key: HDFS-13839
> URL: https://issues.apache.org/jira/browse/HDFS-13839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Reporter: Soumyapn
>Assignee: venkata ram kumar ch
>Priority: Major
>  Labels: RBF
>
> Scenario:
> If we execute the hdfs dfsrouteradmin -ls <path> command, order information 
> is not present.
> Example:
> ./hdfs dfsrouteradmin -ls /apps1
> With the above command, Source, Destinations, Owner, Group, Mode, and 
> Quota/Usage information is displayed, but no "order" information is shown 
> by the "ls" command.
>  
> Expected:
> Order information should be displayed with the -ls command so that the 
> configured order is visible.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13810) RBF: UpdateMountTableEntryRequest isn't validating the record.

2018-08-22 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13810:

Description: 
In RBF, when we try to update an existing mount entry by using the add 
command, it creates the mount entry without performing the validation check 
on the destination path.

command : ./hdfs dfsrouteradmin -add /aaa ns1 /tmp (record added to the mount 
table)

Now when we use the below command on the same mount entry:

Command : hdfs dfsrouteradmin -add /aaa ns1 -order RANDOM (it does not 
perform the validation check the second time).

 

  was:
In RBF when we try to update the existing mount entry by using the add command 
its creating a mount entry by taking -order as target path.

command : ./hdfs dfsrouteradmin -add /aaa ns1 /tmp (record  added to the mount 
table)

Now when we use the below command on the same mount entry. 

Command : hdfs dfsrouteradmin -add /aaa ns1  -order RANDOM  (its not performing 
the validation check for the second time).

 

Summary:  RBF: UpdateMountTableEntryRequest isn't validating the 
record.  (was:  RBF: validation check was not done for adding the multiple 
destination to an existing mount entry.)

>  RBF: UpdateMountTableEntryRequest isn't validating the record.
> ---
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0, 2.9.1
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Minor
> Attachments: HDFS-13810-002.patch, HDFS-13810.patch
>
>
> In RBF, when we try to update an existing mount entry by using the add 
> command, it creates the mount entry without performing the validation check 
> on the destination path.
> command : ./hdfs dfsrouteradmin -add /aaa ns1 /tmp (record added to the 
> mount table)
> Now when we use the below command on the same mount entry:
> Command : hdfs dfsrouteradmin -add /aaa ns1 -order RANDOM (it does not 
> perform the validation check the second time).
>  
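
A reproduction sketch using the exact commands from the description, followed
by the -update form (used elsewhere in these threads), which is the safer way
to change an existing entry while the validation gap exists:

{noformat}
./hdfs dfsrouteradmin -add /aaa ns1 /tmp            # entry created
./hdfs dfsrouteradmin -add /aaa ns1 -order RANDOM   # re-add skips validation
./hdfs dfsrouteradmin -update /aaa ns1 /tmp -order RANDOM
{noformat}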



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13810) RBF: validation check was not done for adding the multiple destination to an existing mount entry.

2018-08-22 Thread venkata ram kumar ch (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589691#comment-16589691
 ] 

venkata ram kumar ch commented on HDFS-13810:
-

Thanks [~elgoiri],

Yes, the description was a little confusing. I updated it with clearer 
details.

>  RBF: validation check was not done for adding the multiple destination to an 
> existing mount entry.
> ---
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0, 2.9.1
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Minor
> Attachments: HDFS-13810-002.patch, HDFS-13810.patch
>
>
> In RBF, when we try to update an existing mount entry by using the add 
> command, it creates a mount entry by taking -order as the target path.
> command : ./hdfs dfsrouteradmin -add /aaa ns1 /tmp (record added to the 
> mount table)
> Now when we use the below command on the same mount entry:
> Command : hdfs dfsrouteradmin -add /aaa ns1 -order RANDOM (it does not 
> perform the validation check the second time).
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13810) RBF: validation check was not done for adding the multiple destination to an existing mount entry.

2018-08-22 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13810:

Description: 
In RBF, when we try to update an existing mount entry by using the add 
command, it creates a mount entry by taking -order as the target path.

command : ./hdfs dfsrouteradmin -add /aaa ns1 /tmp (record added to the mount 
table)

Now when we use the below command on the same mount entry:

Command : hdfs dfsrouteradmin -add /aaa ns1 -order RANDOM (it does not 
perform the validation check the second time).

 

  was:
In Router based federation when we  try to add the mount entry without having 
the destination path, its getting  added into the mount table by taking the 
other parameters order as destination path.

Command : hdfs dfsrouteradmin -add /aaa ns1  -order RANDOM 

its creating a mount entry by taking -order as target path

Summary:  RBF: validation check was not done for adding the multiple 
destination to an existing mount entry.  (was:  RBF: Adding the mount entry 
without having the destination path, its getting  added into the mount table by 
taking the other parameters order as destination path.)

>  RBF: validation check was not done for adding the multiple destination to an 
> existing mount entry.
> ---
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0, 2.9.1
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Minor
> Attachments: HDFS-13810-002.patch, HDFS-13810.patch
>
>
> In RBF, when we try to update an existing mount entry by using the add 
> command, it creates a mount entry by taking -order as the target path.
> command : ./hdfs dfsrouteradmin -add /aaa ns1 /tmp (record added to the 
> mount table)
> Now when we use the below command on the same mount entry:
> Command : hdfs dfsrouteradmin -add /aaa ns1 -order RANDOM (it does not 
> perform the validation check the second time).
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13835) RBF: Unable to Add files after changing the order

2018-08-22 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13835 started by venkata ram kumar ch.
---
> RBF: Unable to Add files after changing the order
> -
>
> Key: HDFS-13835
> URL: https://issues.apache.org/jira/browse/HDFS-13835
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Critical
>
> When a mount point is pointing to multiple subclusters, the order is HASH 
> by default.
> But after changing the order from HASH to RANDOM, I am unable to add files 
> to that mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13810) RBF: Adding the mount entry without having the destination path, its getting added into the mount table by taking the other parameters order as destination path.

2018-08-22 Thread venkata ram kumar ch (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16588822#comment-16588822
 ] 

venkata ram kumar ch commented on HDFS-13810:
-

Uploaded the patch.

>  RBF: Adding the mount entry without having the destination path, its getting 
>  added into the mount table by taking the other parameters order as 
> destination path.
> ---
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0, 2.9.1
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Minor
> Attachments: HDFS-13810-002.patch, HDFS-13810.patch
>
>
> In Router-based federation, when we try to add a mount entry without a 
> destination path, it gets added into the mount table by taking the next 
> parameter (-order) as the destination path.
> Command : hdfs dfsrouteradmin -add /aaa ns1 -order RANDOM
> It creates a mount entry by taking -order as the target path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13810) RBF: Adding the mount entry without having the destination path, its getting added into the mount table by taking the other parameters order as destination path.

2018-08-22 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13810:

Attachment: HDFS-13810-002.patch

>  RBF: Adding the mount entry without having the destination path, its getting 
>  added into the mount table by taking the other parameters order as 
> destination path.
> ---
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0, 2.9.1
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Minor
> Attachments: HDFS-13810-002.patch, HDFS-13810.patch
>
>
> In Router-based federation, when we try to add a mount entry without a 
> destination path, it gets added into the mount table by taking the next 
> parameter (-order) as the destination path.
> Command : hdfs dfsrouteradmin -add /aaa ns1 -order RANDOM
> It creates a mount entry by taking -order as the target path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13835) RBF: Unable to Add files after changing the order

2018-08-21 Thread venkata ram kumar ch (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16588399#comment-16588399
 ] 

venkata ram kumar ch commented on HDFS-13835:
-

Thanks [~elgoiri],
 
 Command : ./hdfs dfs -put file1 /apps/file2 (I am getting the below error 
message)
2018-08-22 12:50:43,530 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
put: `/apps/file2': No such file or directory: `hdfs://ns-fed/apps/file2'
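
A hedged check based on the commands discussed in this thread (mount /apps ->
ns1:/tmp1 and ns2:/tmp2, order RANDOM): a RANDOM pick can land on a subcluster
where the destination directory is missing, which would produce exactly the
"No such file or directory" error shown above. Verifying both destinations
exist first isolates that case:

{noformat}
./hdfs dfs -ls hdfs://ns1/tmp1
./hdfs dfs -ls hdfs://ns2/tmp2
./hdfs dfs -put file1 /apps/file2
{noformat}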

> RBF: Unable to Add files after changing the order
> -
>
> Key: HDFS-13835
> URL: https://issues.apache.org/jira/browse/HDFS-13835
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Critical
>
> When a mount point is pointing to multiple subclusters, the order is HASH 
> by default.
> But after changing the order from HASH to RANDOM, I am unable to add files 
> to that mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13810) RBF: Adding the mount entry without having the destination path, its getting added into the mount table by taking the other parameters order as destination path.

2018-08-21 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13810:

Affects Version/s: 2.9.1
   Status: Patch Available  (was: Open)

>  RBF: Adding the mount entry without having the destination path, its getting 
>  added into the mount table by taking the other parameters order as 
> destination path.
> ---
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 2.9.1, 3.0.0
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Minor
> Attachments: HDFS-13810.patch
>
>
> In Router-based federation, when we try to add a mount entry without a 
> destination path, it gets added into the mount table by taking the next 
> parameter (-order) as the destination path.
> Command : hdfs dfsrouteradmin -add /aaa ns1 -order RANDOM
> It creates a mount entry by taking -order as the target path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13810) RBF: Adding the mount entry without having the destination path, its getting added into the mount table by taking the other parameters order as destination path.

2018-08-21 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13810:

Attachment: HDFS-13810.patch

>  RBF: Adding the mount entry without having the destination path, its getting 
>  added into the mount table by taking the other parameters order as 
> destination path.
> ---
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0, 2.9.1
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Minor
> Attachments: HDFS-13810.patch
>
>
> In Router-based federation, when we try to add a mount entry without a 
> destination path, it gets added into the mount table by taking the next 
> parameter (-order) as the destination path.
> Command : hdfs dfsrouteradmin -add /aaa ns1 -order RANDOM
> It creates a mount entry by taking -order as the target path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13835) Unable to Add files after changing the order in RBF

2018-08-21 Thread venkata ram kumar ch (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16586932#comment-16586932
 ] 

venkata ram kumar ch edited comment on HDFS-13835 at 8/21/18 7:01 AM:
--

Hi,

Commands : ./hdfs dfsrouteradmin -add /apps ns1 /tmp1 (/apps pointing to the 
/tmp1 destination path)

Now I add one more destination path (/tmp2) using the command

./hdfs dfsrouteradmin -add /apps ns2 /tmp2 (here, by default, it takes the 
order as HASH)

Now, if I update the above mount entry using the command

./hdfs dfsrouteradmin -update /apps ns1,ns2 /tmp1,/tmp2 -order RANDOM (and if 
I try to add any files to the mount entry /apps, it gives errors).

 


was (Author: ramkumar):
Hi,

Commands : ./hdfs dfsrouteradmin -add /apps ns1 /tmp1 ( /apps pointing to  
/tmp1 destination path)

Now i will add  one more destination path (/tmp2) using the command

./hdfs dfsrouteradmin   -add  /apps ns1 /tmp2 (here by default its taking the 
order as  HASH)

Now if i update the above mount entry using the command

./hdfs dfsrouteradmin -update /apps ns1 /tmp1,/tmp2 -order RANDOM  (And if i 
try to add any files to the mount entry /apps its giving errors).

 

> Unable to Add files after changing the order in RBF
> ---
>
> Key: HDFS-13835
> URL: https://issues.apache.org/jira/browse/HDFS-13835
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Critical
>
> When a mount point is pointing to multiple subclusters, the order is HASH 
> by default.
> But after changing the order from HASH to RANDOM, I am unable to add files 
> to that mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13835) Unable to Add files after changing the order in RBF

2018-08-20 Thread venkata ram kumar ch (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16586932#comment-16586932
 ] 

venkata ram kumar ch edited comment on HDFS-13835 at 8/21/18 4:57 AM:
--

Hi,

Commands : ./hdfs dfsrouteradmin -add /apps ns1 /tmp1 (/apps pointing to the 
/tmp1 destination path)

Now I add one more destination path (/tmp2) using the command

./hdfs dfsrouteradmin -add /apps ns1 /tmp2 (here, by default, it takes the 
order as HASH)

Now, if I update the above mount entry using the command

./hdfs dfsrouteradmin -update /apps ns1 /tmp1,/tmp2 -order RANDOM (and if I 
try to add any files to the mount entry /apps, it gives errors).

 


was (Author: ramkumar):
Hi,

Commands : ./hdfs dfsrouteradmin -add /apps ns1 /tmp1 ( /apps pointing to  
/tmp1 destination path)

Now i will add  one more destination path (/tmp2) using the command

./hdfs dfsrouteradmin   -add  /apps ns1 /tmp2 (here by default its taking the 
order as  HASH)

Now if i update the above mount entry using the command

./hdfs dfsrouteradmin -update /apps ns1 /tmp1,/tmp2 -order RANDOM  (now if i 
try to add any files to the mount entry /apps its giving errors).

 

> Unable to Add files after changing the order in RBF
> ---
>
> Key: HDFS-13835
> URL: https://issues.apache.org/jira/browse/HDFS-13835
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Critical
>
> When a mount point is pointing to multiple subclusters, the order is HASH 
> by default.
> But after changing the order from HASH to RANDOM, I am unable to add files 
> to that mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13835) Unable to Add files after changing the order in RBF

2018-08-20 Thread venkata ram kumar ch (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16586932#comment-16586932
 ] 

venkata ram kumar ch commented on HDFS-13835:
-

Hi,

Commands: ./hdfs dfsrouteradmin -add /apps ns1 /tmp1 (/apps pointing to the 
destination path /tmp1)

Now I add one more destination path (/tmp2) using the command

./hdfs dfsrouteradmin -add /apps ns1 /tmp2 (by default it takes the order as 
HASH)

Now I update the above mount entry using the command

./hdfs dfsrouteradmin -update /apps ns1 /tmp1,/tmp2 -order RANDOM (now if I 
try to add any files to the mount entry /apps, it gives errors).

 

> Unable to Add files after changing the order in RBF
> ---
>
> Key: HDFS-13835
> URL: https://issues.apache.org/jira/browse/HDFS-13835
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ram kumar ch
>Assignee: venkata ram kumar ch
>Priority: Critical
>
> When a mount point is pointing to multiple subclusters, by default the order 
> is HASH.
> But after changing the order from HASH to RANDOM, I am unable to add files to 
> that mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13835) Unable to Add files after changing the order in RBF

2018-08-20 Thread venkata ram kumar ch (JIRA)
venkata ram kumar ch created HDFS-13835:
---

 Summary: Unable to Add files after changing the order in RBF
 Key: HDFS-13835
 URL: https://issues.apache.org/jira/browse/HDFS-13835
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: venkata ram kumar ch
Assignee: venkata ram kumar ch


When a mount point is pointing to multiple subclusters, by default the order is 
HASH.

But after changing the order from HASH to RANDOM, I am unable to add files to 
that mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13810) Adding the mount entry without having the destination path, its getting added into the mount table by taking the other parameters order as destination path.

2018-08-09 Thread venkata ram kumar ch (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16574719#comment-16574719
 ] 

venkata ram kumar ch edited comment on HDFS-13810 at 8/9/18 1:03 PM:
-

Hi,

I would like to work on this issue. Can someone please assign this issue to 
me?


was (Author: ramkumar):
Hi,

I would like to work on this issue. Can you please assign this issue to me?

>  Adding the mount entry without having the destination path, its getting  
> added into the mount table by taking the other parameters order as 
> destination path.
> --
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0
>Reporter: venkata ram kumar ch
>Priority: Minor
>
> In Router-based federation, when we try to add a mount entry without a 
> destination path, it gets added into the mount table by taking the following 
> parameter (here, -order) as the destination path.
> Command: hdfs dfsrouteradmin -add /aaa ns1  -order RANDOM 
> It creates a mount entry by taking -order as the target path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13810) Adding the mount entry without having the destination path, its getting added into the mount table by taking the other parameters order as destination path.

2018-08-09 Thread venkata ram kumar ch (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16574719#comment-16574719
 ] 

venkata ram kumar ch commented on HDFS-13810:
-

Hi,

I would like to work on this issue. Can you please assign this issue to me?

>  Adding the mount entry without having the destination path, its getting  
> added into the mount table by taking the other parameters order as 
> destination path.
> --
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0
>Reporter: venkata ram kumar ch
>Priority: Minor
>
> In Router-based federation, when we try to add a mount entry without a 
> destination path, it gets added into the mount table by taking the following 
> parameter (here, -order) as the destination path.
> Command: hdfs dfsrouteradmin -add /aaa ns1  -order RANDOM 
> It creates a mount entry by taking -order as the target path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13810) when we try to add the mount entry without having the destination path, its getting added into the mount table by taking the other parameters order as destination path.

2018-08-09 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13810:

Summary: when we try to add the mount entry without having the destination 
path, its getting  added into the mount table by taking the other parameters 
order as destination path.  (was: In Router based federation when we  try to 
add the mount entry without having the destination path, its getting  added 
into the mount table by taking the other parameters order as destination path.)

> when we try to add the mount entry without having the destination path, its 
> getting  added into the mount table by taking the other parameters order as 
> destination path.
> -
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0
>Reporter: venkata ram kumar ch
>Priority: Minor
>
> In Router-based federation, when we try to add a mount entry without a 
> destination path, it gets added into the mount table by taking the following 
> parameter (here, -order) as the destination path.
> Command: hdfs dfsrouteradmin -add /aaa ns1  -order RANDOM 
> It creates a mount entry by taking -order as the target path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13810) Adding the mount entry without having the destination path, its getting added into the mount table by taking the other parameters order as destination path.

2018-08-09 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13810:

Summary:  Adding the mount entry without having the destination path, its 
getting  added into the mount table by taking the other parameters order as 
destination path.  (was: when we try to add the mount entry without having the 
destination path, its getting  added into the mount table by taking the other 
parameters order as destination path.)

>  Adding the mount entry without having the destination path, its getting  
> added into the mount table by taking the other parameters order as 
> destination path.
> --
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0
>Reporter: venkata ram kumar ch
>Priority: Minor
>
> In Router-based federation, when we try to add a mount entry without a 
> destination path, it gets added into the mount table by taking the following 
> parameter (here, -order) as the destination path.
> Command: hdfs dfsrouteradmin -add /aaa ns1  -order RANDOM 
> It creates a mount entry by taking -order as the target path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13810) In Router based federation when we try to add the mount entry without having the destination path, its getting added into the mount table by taking the other parameters

2018-08-09 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13810:

Attachment: (was: Capture1.PNG)

> In Router based federation when we  try to add the mount entry without having 
> the destination path, its getting  added into the mount table by taking the 
> other parameters order as destination path.
> -
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0
>Reporter: venkata ram kumar ch
>Priority: Minor
>
> In Router-based federation, when we try to add a mount entry without a 
> destination path, it gets added into the mount table by taking the following 
> parameter (here, -order) as the destination path.
> Command: hdfs dfsrouteradmin -add /aaa ns1  -order RANDOM 
> It creates a mount entry by taking -order as the target path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13810) In Router based federation when we try to add the mount entry without having the destination path, its getting added into the mount table by taking the other parameters

2018-08-09 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13810:

Attachment: (was: Capture.PNG)

> In Router based federation when we  try to add the mount entry without having 
> the destination path, its getting  added into the mount table by taking the 
> other parameters order as destination path.
> -
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0
>Reporter: venkata ram kumar ch
>Priority: Minor
> Attachments: Capture1.PNG
>
>
> In Router-based federation, when we try to add a mount entry without a 
> destination path, it gets added into the mount table by taking the following 
> parameter (here, -order) as the destination path.
> Command: hdfs dfsrouteradmin -add /aaa ns1  -order RANDOM 
> It creates a mount entry by taking -order as the target path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13810) In Router based federation when we try to add the mount entry without having the destination path, its getting added into the mount table by taking the other parameters

2018-08-09 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13810:

Priority: Minor  (was: Major)

> In Router based federation when we  try to add the mount entry without having 
> the destination path, its getting  added into the mount table by taking the 
> other parameters order as destination path.
> -
>
> Key: HDFS-13810
> URL: https://issues.apache.org/jira/browse/HDFS-13810
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0
>Reporter: venkata ram kumar ch
>Priority: Minor
> Attachments: Capture.PNG, Capture1.PNG
>
>
> In Router-based federation, when we try to add a mount entry without a 
> destination path, it gets added into the mount table by taking the following 
> parameter (here, -order) as the destination path.
> Command: hdfs dfsrouteradmin -add /aaa ns1  -order RANDOM 
> It creates a mount entry by taking -order as the target path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13810) In Router based federation when we try to add the mount entry without having the destination path, its getting added into the mount table by taking the other parameters

2018-08-09 Thread venkata ram kumar ch (JIRA)
venkata ram kumar ch created HDFS-13810:
---

 Summary: In Router based federation when we  try to add the mount 
entry without having the destination path, its getting  added into the mount 
table by taking the other parameters order as destination path.
 Key: HDFS-13810
 URL: https://issues.apache.org/jira/browse/HDFS-13810
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: federation
Affects Versions: 3.0.0
Reporter: venkata ram kumar ch
 Attachments: Capture.PNG, Capture1.PNG

In Router-based federation, when we try to add a mount entry without a 
destination path, it gets added into the mount table by taking the following 
parameter (here, -order) as the destination path.

Command: hdfs dfsrouteradmin -add /aaa ns1  -order RANDOM 

It creates a mount entry by taking -order as the target path.
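
A minimal sketch of the misparse and the intended invocation, assuming a 
nameservice ns1 and a running router (the -ls comment describes the observed 
effect; the actual output layout may differ):

{noformat}
# destination path omitted: the next token, -order, is consumed as the target path
hdfs dfsrouteradmin -add /aaa ns1 -order RANDOM
hdfs dfsrouteradmin -ls /aaa    # the mount entry shows '-order' as its target path

# intended invocation, with an explicit destination path
hdfs dfsrouteradmin -add /aaa ns1 /aaa -order RANDOM
{noformat}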



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2018-06-28 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch updated HDFS-13596:

Comment: was deleted

(was: ROhith Sharma k s)

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Priority: Blocker
>
> After rollingUpgrade NN from 2.x to 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at 

[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2018-06-28 Thread venkata ram kumar ch (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527186#comment-16527186
 ] 

venkata ram kumar ch commented on HDFS-13596:
-

ROhith Sharma k s

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Priority: Blocker
>
> After rollingUpgrade NN from 2.x to 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at