[jira] [Created] (HDFS-11226) Storagepolicy command is not working with "-fs" option

2016-12-08 Thread Archana T (JIRA)
Archana T created HDFS-11226:


 Summary: Storagepolicy command is not working with "-fs" option
 Key: HDFS-11226
 URL: https://issues.apache.org/jira/browse/HDFS-11226
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Priority: Minor


When the StoragePolicy cmd is used with the -fs option -- 
the following error is thrown --

{color:red} hdfs storagepolicies -fs hdfs://hacluster -listPolicies
Can't understand command '-fs' {color}
Usage: bin/hdfs storagepolicies 
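The likely cause is that the storagepolicies subcommand parses its own arguments and never consumes the generic -fs option the way ToolRunner/GenericOptionsParser-based tools do. A minimal self-contained sketch of the idea (hypothetical class and method names, not the Hadoop source):

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: pre-parse the generic "-fs <uri>" pair out of argv before the
// subcommand parser runs, so it never sees -fs and cannot reject it with
// "Can't understand command '-fs'". Names here are illustrative only.
public class GenericFsOption {

    // Consume "-fs <uri>" from args; append the URI to fsUriOut and return the rest.
    static String[] stripFsOption(String[] args, StringBuilder fsUriOut) {
        List<String> rest = new ArrayList<>();
        for (int i = 0; i < args.length; i++) {
            if ("-fs".equals(args[i]) && i + 1 < args.length) {
                fsUriOut.append(args[++i]);   // the URI operand is consumed too
            } else {
                rest.add(args[i]);
            }
        }
        return rest.toArray(new String[0]);
    }

    public static void main(String[] argv) {
        StringBuilder fsUri = new StringBuilder();
        String[] rest = stripFsOption(
            new String[] {"-fs", "hdfs://hacluster", "-listPolicies"}, fsUri);
        // The subcommand parser now only sees "-listPolicies".
        System.out.println(fsUri + " " + String.join(" ", rest));
    }
}
```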




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11141) [viewfs] Listfile gives complete Realm as User

2016-11-15 Thread Archana T (JIRA)
Archana T created HDFS-11141:


 Summary: [viewfs] Listfile gives complete Realm as User
 Key: HDFS-11141
 URL: https://issues.apache.org/jira/browse/HDFS-11141
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: federation
Reporter: Archana T
Priority: Minor



When defaultFS is configured as viewfs --

<property>
  <name>fs.defaultFS</name>
  <value>viewfs://CLUSTER/</value>
</property>

Listing files shows the complete Realm as the User --
hdfs dfs -ls /
Found 2 items
-r-xr-xr-x   - {color:red} h...@hadoop.com {color} hadoop  0 2016-11-07 
15:31 /Dir1
-r-xr-xr-x   - {color:red} h...@hadoop.com {color} hadoop  0 2016-11-07 
15:31 /Dir2
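The owner shown in the listing above is the full Kerberos principal (user@REALM) rather than the short user name a plain hdfs:// listing shows. A minimal sketch of the expected mapping, in the spirit of a DEFAULT auth_to_local rule, using a hypothetical principal (the one in the report is elided):

```java
// Hedged sketch: strip the realm from a Kerberos principal to get the short
// name that should be displayed as the file owner. Real Hadoop resolves this
// through configurable auth_to_local rules; this shows only the default case.
public class ShortNameSketch {
    static String toShortName(String principal) {
        int at = principal.indexOf('@');
        return at < 0 ? principal : principal.substring(0, at);
    }

    public static void main(String[] argv) {
        // Hypothetical principal, for illustration only.
        System.out.println(toShortName("hdfs@HADOOP.COM"));  // prints "hdfs"
    }
}
```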






[jira] [Created] (HDFS-9885) In distcp cmd output, Display name should be given for org.apache.hadoop.tools.mapred.CopyMapper$Counter.

2016-03-02 Thread Archana T (JIRA)
Archana T created HDFS-9885:
---

 Summary: In distcp cmd output, Display name should be given for 
org.apache.hadoop.tools.mapred.CopyMapper$Counter.
 Key: HDFS-9885
 URL: https://issues.apache.org/jira/browse/HDFS-9885
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: distcp
Reporter: Archana T
Priority: Minor


In distcp cmd output,

hadoop distcp hdfs://NN1:port/file1 hdfs://NN2:port/file2

16/02/29 07:05:55 INFO tools.DistCp: DistCp job-id: job_1456729398560_0002
16/02/29 07:05:55 INFO mapreduce.Job: Running job: job_1456729398560_0002
16/02/29 07:06:01 INFO mapreduce.Job: Job job_1456729398560_0002 running in 
uber mode : false
16/02/29 07:06:01 INFO mapreduce.Job: map 0% reduce 0%
16/02/29 07:06:06 INFO mapreduce.Job: map 100% reduce 0%
16/02/29 07:06:07 INFO mapreduce.Job: Job job_1456729398560_0002 completed 
successfully
...
...
File Input Format Counters
Bytes Read=212
File Output Format Counters
Bytes Written=0{color:red} 
org.apache.hadoop.tools.mapred.CopyMapper$Counter
{color}
BANDWIDTH_IN_BYTES=12418
BYTESCOPIED=12418
BYTESEXPECTED=12418
COPY=1

Expected:
Display Name can be given instead of 
{color:red}"org.apache.hadoop.tools.mapred.CopyMapper$Counter"{color}
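A minimal self-contained sketch of the idea (hypothetical names; "DistCp Counters" is an assumed display name, and in MapReduce the display name conventionally comes from a ResourceBundle placed next to the counter enum rather than a map):

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: a counter group prints a registered display name when one
// exists; otherwise the fully qualified enum class name leaks into the output,
// as with org.apache.hadoop.tools.mapred.CopyMapper$Counter above.
public class CounterGroupNameSketch {
    // Registered display names; the real lookup is a properties ResourceBundle.
    static final Map<String, String> DISPLAY_NAMES = new HashMap<>();
    static {
        DISPLAY_NAMES.put("org.apache.hadoop.tools.mapred.CopyMapper$Counter",
                          "DistCp Counters");   // assumed display name
    }

    static String groupHeader(String enumClassName) {
        // Fall back to the raw class name only when nothing is registered.
        return DISPLAY_NAMES.getOrDefault(enumClassName, enumClassName);
    }

    public static void main(String[] argv) {
        System.out.println(
            groupHeader("org.apache.hadoop.tools.mapred.CopyMapper$Counter"));
    }
}
```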






[jira] [Updated] (HDFS-9455) In distcp, Invalid Argument Error thrown in case of filesystem operation failure

2016-01-08 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T updated HDFS-9455:

Assignee: Daisuke Kobayashi  (was: Archana T)

> In distcp, Invalid Argument Error thrown in case of filesystem operation 
> failure
> 
>
> Key: HDFS-9455
> URL: https://issues.apache.org/jira/browse/HDFS-9455
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, security
>Reporter: Archana T
>Assignee: Daisuke Kobayashi
>Priority: Minor
>
> When a Filesystem Operation failure happens during distcp, 
> the wrong exception (Invalid Argument) is thrown along with the distcp command Usage.
> {color:red} 
> hadoop distcp webhdfs://IP:25003/test/testfile webhdfs://IP:25003/myp
> Invalid arguments: Unexpected end of file from server
> usage: distcp OPTIONS [source_path...] 
>   OPTIONS
>  -append   Reuse existing data in target files and
>append new data to them if possible
>  -asyncShould distcp execution be blocking
>  -atomic   Commit all changes or none
>  -bandwidth   Specify bandwidth per map in MB
>  -delete   Delete from target, files missing in source
>  -diffUse snapshot diff report to identify the
>difference between source and target
>  -f   List of files that need to be copied
>  -filelimit   (Deprecated!) Limit number of files copied
>to <= n
>  -iIgnore failures during copy
> .
> {color} 
> Instead, the proper exception should be thrown.





[jira] [Commented] (HDFS-9455) In distcp, Invalid Argument Error thrown in case of filesystem operation failure

2016-01-08 Thread Archana T (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088929#comment-15088929
 ] 

Archana T commented on HDFS-9455:
-

Hi [~daisuke.kobayashi]
I agree with the above proposal.

Assigning this Jira to you.

> In distcp, Invalid Argument Error thrown in case of filesystem operation 
> failure
> 
>
> Key: HDFS-9455
> URL: https://issues.apache.org/jira/browse/HDFS-9455
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, security
>Reporter: Archana T
>Assignee: Archana T
>Priority: Minor
>
> When a Filesystem Operation failure happens during distcp, 
> the wrong exception (Invalid Argument) is thrown along with the distcp command Usage.
> {color:red} 
> hadoop distcp webhdfs://IP:25003/test/testfile webhdfs://IP:25003/myp
> Invalid arguments: Unexpected end of file from server
> usage: distcp OPTIONS [source_path...] 
>   OPTIONS
>  -append   Reuse existing data in target files and
>append new data to them if possible
>  -asyncShould distcp execution be blocking
>  -atomic   Commit all changes or none
>  -bandwidth   Specify bandwidth per map in MB
>  -delete   Delete from target, files missing in source
>  -diffUse snapshot diff report to identify the
>difference between source and target
>  -f   List of files that need to be copied
>  -filelimit   (Deprecated!) Limit number of files copied
>to <= n
>  -iIgnore failures during copy
> .
> {color} 
> Instead, the proper exception should be thrown.





[jira] [Commented] (HDFS-9455) In distcp, Invalid Argument Error thrown in case of filesystem operation failure

2016-01-05 Thread Archana T (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082670#comment-15082670
 ] 

Archana T commented on HDFS-9455:
-

Hi [~daisuke.kobayashi]
I got the issue on open-source 2.7.1.

Scenario --
In a secured cluster with SSL enabled, I was using "webhdfs" instead of 
"swebhdfs" in the below cmd --
hadoop distcp webhdfs://IP:25003/test/testfile webhdfs://IP:25003/myp

Reference-
HDFS-9483

> In distcp, Invalid Argument Error thrown in case of filesystem operation 
> failure
> 
>
> Key: HDFS-9455
> URL: https://issues.apache.org/jira/browse/HDFS-9455
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Reporter: Archana T
>Assignee: Archana T
>Priority: Minor
>
> When a Filesystem Operation failure happens during distcp, 
> the wrong exception (Invalid Argument) is thrown along with the distcp command Usage.
> {color:red} 
> hadoop distcp webhdfs://IP:25003/test/testfile webhdfs://IP:25003/myp
> Invalid arguments: Unexpected end of file from server
> usage: distcp OPTIONS [source_path...] 
>   OPTIONS
>  -append   Reuse existing data in target files and
>append new data to them if possible
>  -asyncShould distcp execution be blocking
>  -atomic   Commit all changes or none
>  -bandwidth   Specify bandwidth per map in MB
>  -delete   Delete from target, files missing in source
>  -diffUse snapshot diff report to identify the
>difference between source and target
>  -f   List of files that need to be copied
>  -filelimit   (Deprecated!) Limit number of files copied
>to <= n
>  -iIgnore failures during copy
> .
> {color} 
> Instead, the proper exception should be thrown.





[jira] [Commented] (HDFS-9605) Add links to failed volumes to explorer.html in HDFS Web UI

2016-01-05 Thread Archana T (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15084960#comment-15084960
 ] 

Archana T commented on HDFS-9605:
-

Hi [~wheat9]
Thanks for the commit.

> Add links to failed volumes to explorer.html in HDFS Web UI
> ---
>
> Key: HDFS-9605
> URL: https://issues.apache.org/jira/browse/HDFS-9605
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Archana T
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9605.patch
>
>
> In NameNode UI ,
> "tab-datanode-volume-failures" is missing from explorer.html





[jira] [Created] (HDFS-9605) "tab-datanode-volume-failures" is missing from explorer.html

2015-12-30 Thread Archana T (JIRA)
Archana T created HDFS-9605:
---

 Summary: "tab-datanode-volume-failures" is missing from 
explorer.html
 Key: HDFS-9605
 URL: https://issues.apache.org/jira/browse/HDFS-9605
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Archana T
Priority: Minor


In NameNode UI ,
"tab-datanode-volume-failures" is missing from explorer.html

Attached snapshot for the same.





[jira] [Updated] (HDFS-9605) "tab-datanode-volume-failures" is missing from explorer.html

2015-12-30 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T updated HDFS-9605:

Attachment: HDFS-9605.patch

> "tab-datanode-volume-failures" is missing from explorer.html
> 
>
> Key: HDFS-9605
> URL: https://issues.apache.org/jira/browse/HDFS-9605
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Archana T
>Priority: Minor
> Attachments: HDFS-9605.patch
>
>
> In NameNode UI ,
> "tab-datanode-volume-failures" is missing from explorer.html
> Attached snapshot for the same.





[jira] [Commented] (HDFS-9605) "tab-datanode-volume-failures" is missing from explorer.html

2015-12-30 Thread Archana T (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15074943#comment-15074943
 ] 

Archana T commented on HDFS-9605:
-

Attached patch for the same.
Kindly review.

> "tab-datanode-volume-failures" is missing from explorer.html
> 
>
> Key: HDFS-9605
> URL: https://issues.apache.org/jira/browse/HDFS-9605
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Archana T
>Priority: Minor
> Attachments: HDFS-9605.patch
>
>
> In NameNode UI ,
> "tab-datanode-volume-failures" is missing from explorer.html





[jira] [Updated] (HDFS-9605) "tab-datanode-volume-failures" is missing from explorer.html

2015-12-30 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T updated HDFS-9605:

Status: Patch Available  (was: Open)

> "tab-datanode-volume-failures" is missing from explorer.html
> 
>
> Key: HDFS-9605
> URL: https://issues.apache.org/jira/browse/HDFS-9605
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Archana T
>Priority: Minor
> Attachments: HDFS-9605.patch
>
>
> In NameNode UI ,
> "tab-datanode-volume-failures" is missing from explorer.html





[jira] [Updated] (HDFS-9605) "tab-datanode-volume-failures" is missing from explorer.html

2015-12-30 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T updated HDFS-9605:

Description: 
In NameNode UI ,
"tab-datanode-volume-failures" is missing from explorer.html


  was:
In NameNode UI ,
"tab-datanode-volume-failures" is missing from explorer.html

Attached snapshot for the same.


> "tab-datanode-volume-failures" is missing from explorer.html
> 
>
> Key: HDFS-9605
> URL: https://issues.apache.org/jira/browse/HDFS-9605
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Archana T
>Priority: Minor
> Attachments: HDFS-9605.patch
>
>
> In NameNode UI ,
> "tab-datanode-volume-failures" is missing from explorer.html





[jira] [Created] (HDFS-9478) Reason for failing ipc.FairCallQueue construction should be thrown

2015-11-30 Thread Archana T (JIRA)
Archana T created HDFS-9478:
---

 Summary: Reason for failing ipc.FairCallQueue construction should 
be thrown
 Key: HDFS-9478
 URL: https://issues.apache.org/jira/browse/HDFS-9478
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Ajith S
Priority: Minor


When FairCallQueue construction fails, the NN fails to start, throwing a 
RuntimeException without giving any reason for the failure.

2015-11-30 17:45:26,661 INFO org.apache.hadoop.ipc.FairCallQueue: FairCallQueue 
is in use with 4 queues.
2015-11-30 17:45:26,665 DEBUG org.apache.hadoop.metrics2.util.MBeans: 
Registered Hadoop:service=ipc.65110,name=DecayRpcScheduler
2015-11-30 17:45:26,666 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: 
Failed to start namenode.
java.lang.RuntimeException: org.apache.hadoop.ipc.FairCallQueue could not be 
constructed.
at org.apache.hadoop.ipc.CallQueueManager.createCallQueueInstance(CallQueueManager.java:96)
at org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:55)
at org.apache.hadoop.ipc.Server.<init>(Server.java:2241)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:942)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:784)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:346)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:750)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:687)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:889)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:872)


Example: the reason for the above failure could have been --
1. the weights were not equal to the number of queues configured, or
2. decay-scheduler.thresholds was not in sync with the number of queues.
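Either reason above could be surfaced by chaining the underlying cause when the reflective construction fails. A self-contained sketch (hypothetical names, not the Hadoop CallQueueManager source):

```java
// Hedged sketch: include the reflective-construction cause in the
// RuntimeException, so the NameNode log says *why* FairCallQueue could not be
// constructed instead of only "could not be constructed".
public class CallQueueSketch {

    static Object createCallQueueInstance(Class<?> clazz) {
        try {
            return clazz.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            // newInstance wraps the constructor's exception; unwrap it.
            Throwable cause = (e.getCause() != null) ? e.getCause() : e;
            throw new RuntimeException(clazz.getName()
                + " could not be constructed: " + cause.getMessage(), cause);
        }
    }

    // Stand-in for a queue whose configuration validation fails.
    static class BadQueue {
        BadQueue() {
            throw new IllegalArgumentException(
                "number of weights must match number of queues");
        }
    }

    public static void main(String[] argv) {
        try {
            createCallQueueInstance(BadQueue.class);
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());   // reason is now visible
        }
    }
}
```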





[jira] [Created] (HDFS-9455) Invalid Argument Error thrown in case of filesystem operation failure

2015-11-24 Thread Archana T (JIRA)
Archana T created HDFS-9455:
---

 Summary: Invalid Argument Error thrown in case of filesystem 
operation failure
 Key: HDFS-9455
 URL: https://issues.apache.org/jira/browse/HDFS-9455
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: distcp
Reporter: Archana T
Assignee: Archana T
Priority: Minor


When a Filesystem Operation failure happens during distcp, 
the wrong exception (Invalid Argument) is thrown along with the distcp command Usage.

{color:red} 
hadoop distcp webhdfs://IP:25003/test/testfile webhdfs://IP:25003/myp
Invalid arguments: Unexpected end of file from server
usage: distcp OPTIONS [source_path...] 
  OPTIONS
 -append   Reuse existing data in target files and
   append new data to them if possible
 -asyncShould distcp execution be blocking
 -atomic   Commit all changes or none
 -bandwidth   Specify bandwidth per map in MB
 -delete   Delete from target, files missing in source
 -diffUse snapshot diff report to identify the
   difference between source and target
 -f   List of files that need to be copied
 -filelimit   (Deprecated!) Limit number of files copied
   to <= n
 -iIgnore failures during copy
.
{color} 

Instead, the proper exception should be thrown.
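The misleading message suggests the driver treats every failure as an argument error. A self-contained sketch of the distinction (hypothetical method, not the DistCp source):

```java
import java.io.IOException;

// Hedged sketch: only argument-parsing errors should produce
// "Invalid arguments" plus the usage text; a transport/filesystem failure
// (here, HTTP spoken against an SSL-only port) should keep its real identity.
public class DistCpErrorSketch {

    static String describe(Exception e) {
        if (e instanceof IllegalArgumentException) {
            return "Invalid arguments: " + e.getMessage();   // then print usage
        }
        // Filesystem failures surface with their own type and message.
        return e.getClass().getSimpleName() + ": " + e.getMessage();
    }

    public static void main(String[] argv) {
        System.out.println(
            describe(new IOException("Unexpected end of file from server")));
    }
}
```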





[jira] [Updated] (HDFS-9455) In distcp, Invalid Argument Error thrown in case of filesystem operation failure

2015-11-24 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T updated HDFS-9455:

Summary: In distcp, Invalid Argument Error thrown in case of filesystem 
operation failure  (was: Invalid Argument Error thrown in case of filesystem 
operation failure)

> In distcp, Invalid Argument Error thrown in case of filesystem operation 
> failure
> 
>
> Key: HDFS-9455
> URL: https://issues.apache.org/jira/browse/HDFS-9455
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Reporter: Archana T
>Assignee: Archana T
>Priority: Minor
>
> When a Filesystem Operation failure happens during distcp, 
> the wrong exception (Invalid Argument) is thrown along with the distcp command Usage.
> {color:red} 
> hadoop distcp webhdfs://IP:25003/test/testfile webhdfs://IP:25003/myp
> Invalid arguments: Unexpected end of file from server
> usage: distcp OPTIONS [source_path...] 
>   OPTIONS
>  -append   Reuse existing data in target files and
>append new data to them if possible
>  -asyncShould distcp execution be blocking
>  -atomic   Commit all changes or none
>  -bandwidth   Specify bandwidth per map in MB
>  -delete   Delete from target, files missing in source
>  -diffUse snapshot diff report to identify the
>difference between source and target
>  -f   List of files that need to be copied
>  -filelimit   (Deprecated!) Limit number of files copied
>to <= n
>  -iIgnore failures during copy
> .
> {color} 
> Instead, the proper exception should be thrown.





[jira] [Updated] (HDFS-9357) NN UI is not showing which DN is "Decommissioned" and "Decommissioned & dead".

2015-11-02 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T updated HDFS-9357:

Attachment: decommisioned_n_dead_.png

> NN UI is not showing which DN is "Decommissioned" and "Decommissioned & dead".
> --
>
> Key: HDFS-9357
> URL: https://issues.apache.org/jira/browse/HDFS-9357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Attachments: decommisioned_n_dead_.png, decommissioned_.png
>
>
> NN UI is not showing which DN is "Decommissioned" and "Decommissioned & dead"
> Root Cause --
> "Decommissioned" and "Decommissioned & dead" icon not reflected on NN UI
> When DN is in Decommissioned status or in "Decommissioned & dead" status, 
> same status is not reflected on NN UI 
> DN status is as below --
> hdfs dfsadmin -report
> Name: 10.xx.xx.xx1:50076 (host-xx1)
> Hostname: host-xx
> Decommission Status : Decommissioned
> Configured Capacity: 230501634048 (214.67 GB)
> DFS Used: 36864 (36 KB)
> Dead datanodes (1):
> Name: 10.xx.xx.xx2:50076 (host-xx2)
> Hostname: host-xx
> Decommission Status : Decommissioned
> Same is not reflected on NN UI.
> Attached NN UI snapshots for the same.





[jira] [Updated] (HDFS-9357) NN UI is not showing which DN is "Decommissioned" and "Decommissioned & dead".

2015-11-02 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T updated HDFS-9357:

Attachment: decommissioned_.png

> NN UI is not showing which DN is "Decommissioned" and "Decommissioned & dead".
> --
>
> Key: HDFS-9357
> URL: https://issues.apache.org/jira/browse/HDFS-9357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Attachments: decommisioned_n_dead_.png, decommissioned_.png
>
>
> NN UI is not showing which DN is "Decommissioned" and "Decommissioned & dead"
> Root Cause --
> "Decommissioned" and "Decommissioned & dead" icon not reflected on NN UI
> When DN is in Decommissioned status or in "Decommissioned & dead" status, 
> same status is not reflected on NN UI 
> DN status is as below --
> hdfs dfsadmin -report
> Name: 10.xx.xx.xx1:50076 (host-xx1)
> Hostname: host-xx
> Decommission Status : Decommissioned
> Configured Capacity: 230501634048 (214.67 GB)
> DFS Used: 36864 (36 KB)
> Dead datanodes (1):
> Name: 10.xx.xx.xx2:50076 (host-xx2)
> Hostname: host-xx
> Decommission Status : Decommissioned
> Same is not reflected on NN UI.
> Attached NN UI snapshots for the same.





[jira] [Created] (HDFS-9357) NN UI is not showing which DN is "Decommissioned" and "Decommissioned & dead".

2015-11-01 Thread Archana T (JIRA)
Archana T created HDFS-9357:
---

 Summary: NN UI is not showing which DN is "Decommissioned" and 
"Decommissioned & dead".
 Key: HDFS-9357
 URL: https://issues.apache.org/jira/browse/HDFS-9357
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Surendra Singh Lilhore
Priority: Critical


NN UI is not showing which DN is "Decommissioned" and "Decommissioned & dead"

Root Cause --
"Decommissioned" and "Decommissioned & dead" icon not reflected on NN UI

When DN is in Decommissioned status or in "Decommissioned & dead" status, same 
status is not reflected on NN UI 

DN status is as below --

hdfs dfsadmin -report

Name: 10.xx.xx.xx1:50076 (host-xx1)
Hostname: host-xx
Decommission Status : Decommissioned
Configured Capacity: 230501634048 (214.67 GB)
DFS Used: 36864 (36 KB)


Dead datanodes (1):
Name: 10.xx.xx.xx2:50076 (host-xx2)
Hostname: host-xx
Decommission Status : Decommissioned

Same is not reflected on NN UI.

Attached NN UI snapshots for the same.






[jira] [Created] (HDFS-9356) Last Contact value is empty in Datanode Info tab while Decommissioning

2015-11-01 Thread Archana T (JIRA)
Archana T created HDFS-9356:
---

 Summary: Last Contact value is empty in Datanode Info tab while 
Decommissioning 
 Key: HDFS-9356
 URL: https://issues.apache.org/jira/browse/HDFS-9356
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Surendra Singh Lilhore


While DN is in decommissioning state, the Last contact value is empty in the 
Datanode Information tab of Namenode UI.

Attaching the snapshot of the same.







[jira] [Updated] (HDFS-9356) Last Contact value is empty in Datanode Info tab while Decommissioning

2015-11-01 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T updated HDFS-9356:

Attachment: decomm.png

> Last Contact value is empty in Datanode Info tab while Decommissioning 
> ---
>
> Key: HDFS-9356
> URL: https://issues.apache.org/jira/browse/HDFS-9356
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
> Attachments: decomm.png
>
>
> While DN is in decommissioning state, the Last contact value is empty in the 
> Datanode Information tab of Namenode UI.
> Attaching the snapshot of the same.





[jira] [Commented] (HDFS-9171) [OIV] : ArrayIndexOutOfBoundsException thrown when step is more than maxsize in FileDistribution processor

2015-09-28 Thread Archana T (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934617#comment-14934617
 ] 

Archana T commented on HDFS-9171:
-

Assigning to Nijel as he is already looking into the OIV improvement task.

> [OIV] : ArrayIndexOutOfBoundsException thrown when step is more than maxsize 
> in FileDistribution processor
> --
>
> Key: HDFS-9171
> URL: https://issues.apache.org/jira/browse/HDFS-9171
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Archana T
>Assignee: nijel
>Priority: Minor
>
> When step size is more than maxsize in File Distribution processor --
> hdfs oiv -i /NAME_DIR/fsimage_0007854 -o out --processor 
> FileDistribution {color:red} -maxSize 1000 -step 5000 {color} ; cat out
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionCalculator.run(FileDistributionCalculator.java:131)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionCalculator.visit(FileDistributionCalculator.java:108)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:165)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:124)
> Processed 0 inodes.





[jira] [Updated] (HDFS-9171) [OIV] : ArrayIndexOutOfBoundsException thrown when step is more than maxsize in FileDistribution processor

2015-09-28 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T updated HDFS-9171:

Description: 
When step size is more than maxsize in File Distribution processor --

hdfs oiv -i /NAME_DIR/fsimage_0007854 -o out --processor 
FileDistribution {color:red} -maxSize 1000 -step 5000 {color} ; cat out
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 1
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionCalculator.run(FileDistributionCalculator.java:131)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionCalculator.visit(FileDistributionCalculator.java:108)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:165)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:124)
Processed 0 inodes.
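The exception follows from the histogram arithmetic: the distribution array has maxSize/step + 1 buckets, so with -maxSize 1000 -step 5000 there is a single bucket and any non-empty file indexes past the end. A self-contained sketch of the arithmetic and one possible guard (clamping; hypothetical method, not the FileDistributionCalculator source — the tool could equally reject step > maxSize up front):

```java
// Hedged sketch: reproduce the bucket-index computation and clamp the index
// to the last bucket instead of letting it run past the array.
public class FileDistSketch {

    static int bucket(long fileSize, long maxSize, long step) {
        int buckets = (int) (maxSize / step) + 1;           // array length
        int idx = (int) Math.ceil((double) fileSize / step); // raw index
        return Math.min(idx, buckets - 1);                   // clamp, no AIOOBE
    }

    public static void main(String[] argv) {
        // maxSize=1000, step=5000 -> one bucket; a 212-byte file previously
        // computed index 1 and threw ArrayIndexOutOfBoundsException: 1.
        System.out.println(bucket(212, 1000, 5000));   // prints 0
    }
}
```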

> [OIV] : ArrayIndexOutOfBoundsException thrown when step is more than maxsize 
> in FileDistribution processor
> --
>
> Key: HDFS-9171
> URL: https://issues.apache.org/jira/browse/HDFS-9171
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Archana T
>Assignee: nijel
>Priority: Minor
>
> When step size is more than maxsize in File Distribution processor --
> hdfs oiv -i /NAME_DIR/fsimage_0007854 -o out --processor 
> FileDistribution {color:red} -maxSize 1000 -step 5000 {color} ; cat out
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionCalculator.run(FileDistributionCalculator.java:131)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionCalculator.visit(FileDistributionCalculator.java:108)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:165)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:124)
> Processed 0 inodes.





[jira] [Created] (HDFS-9171) [OIV] : ArrayIndexOutOfBoundsException thrown when step is more than maxsize in FileDistribution processor

2015-09-28 Thread Archana T (JIRA)
Archana T created HDFS-9171:
---

 Summary: [OIV] : ArrayIndexOutOfBoundsException thrown when step 
is more than maxsize in FileDistribution processor
 Key: HDFS-9171
 URL: https://issues.apache.org/jira/browse/HDFS-9171
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Archana T
Assignee: nijel
Priority: Minor








[jira] [Created] (HDFS-9151) Mover should print the exit status/reason on console like balancer tool.

2015-09-27 Thread Archana T (JIRA)
Archana T created HDFS-9151:
---

 Summary: Mover should print the exit status/reason on console like 
balancer tool.
 Key: HDFS-9151
 URL: https://issues.apache.org/jira/browse/HDFS-9151
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover
Reporter: Archana T
Assignee: Surendra Singh Lilhore
Priority: Minor


Mover should print exit reason on console --

In cases where there are no blocks to move, storages are unavailable, or similar, 
the Mover tool gives no information about the exit reason on the console --
{code}
# ./hdfs mover
...
Sep 28, 2015 12:31:25 PM Mover took 10sec
# echo $?
0

# ./hdfs mover
...
Sep 28, 2015 12:33:10 PM Mover took 1sec
# echo $?
254
{code}

Unlike the Mover, the Balancer prints the exit reason, 
for example --
#./hdfs balancer
...
{color:red}The cluster is balanced. Exiting...{color}
Sep 28, 2015 12:18:02 PM Balancing took 1.744 seconds
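The Balancer behavior above could be mirrored by mapping the Mover's exit status to a console message before exiting. A self-contained sketch (enum names and message wording are assumed, not the Mover source; shell code 254 is the unsigned view of exit value -2):

```java
// Hedged sketch: attach a human-readable reason to each exit status and print
// it on exit, the way the Balancer prints "The cluster is balanced. Exiting...".
public class MoverExitSketch {

    enum ExitStatus {
        SUCCESS(0, "Mover completed successfully."),
        NO_MOVE_BLOCK(-2, "No block can be moved to satisfy its storage policy.");

        final int code;
        final String reason;
        ExitStatus(int code, String reason) { this.code = code; this.reason = reason; }
    }

    static String exitMessage(ExitStatus status) {
        return status.reason + " Exiting...";
    }

    public static void main(String[] argv) {
        System.out.println(exitMessage(ExitStatus.NO_MOVE_BLOCK));
        // A shell reports (-2 & 0xFF) = 254, matching "echo $?" above.
        System.out.println(-2 & 0xFF);
    }
}
```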






[jira] [Created] (HDFS-9120) Metric logging values are truncated in NN Metrics log.

2015-09-22 Thread Archana T (JIRA)
Archana T created HDFS-9120:
---

 Summary: Metric logging values are truncated in NN Metrics log.
 Key: HDFS-9120
 URL: https://issues.apache.org/jira/browse/HDFS-9120
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: logging
Reporter: Archana T
Assignee: Kanaka Kumar Avvaru


In namenode-metrics.log, when a metric name-value pair is longer than 128 
characters, it is truncated as below --

Example for the LiveNodes information --
vi namenode-metrics.log
{color:red}
2015-09-22 10:34:37,891 
NameNodeInfo:LiveNodes={"host-10-xx-xxx-88:50076":{"infoAddr":"10.xx.xxx.88:0","infoSecureAddr":"10.xx.xxx.88:52100","xferaddr":"10.xx.xxx.88:50076","l...
{color}

Here the complete metric value is not logged; the trailing information is 
displayed as "...".
Similarly for other metric values in NN metrics.

The DN metric log, by contrast, logs the complete metric values.
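The truncation above can be sketched as a fixed-length cap applied per value before logging (self-contained illustration, not the metrics-logger source; the 128-character cap is taken from the report):

```java
// Hedged sketch: cap each metric value at a fixed length and append "...",
// which is what loses the tail of long JSON values such as
// NameNodeInfo:LiveNodes. Logging the full value means dropping this cap.
public class MetricTruncateSketch {
    static final int MAX_LEN = 128;   // cap from the report

    static String render(String value) {
        return value.length() <= MAX_LEN
            ? value
            : value.substring(0, MAX_LEN) + "...";
    }

    public static void main(String[] argv) {
        String longJson = "x".repeat(200);          // stand-in for LiveNodes JSON
        System.out.println(render(longJson).length());   // 128 + 3 = 131
    }
}
```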








[jira] [Created] (HDFS-9089) Balancer and Mover should use ".system" as reserved inode name instead of "system"

2015-09-16 Thread Archana T (JIRA)
Archana T created HDFS-9089:
---

 Summary: Balancer and Mover should use ".system" as reserved inode 
name instead of "system"
 Key: HDFS-9089
 URL: https://issues.apache.org/jira/browse/HDFS-9089
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover
Reporter: Archana T
Assignee: Surendra Singh Lilhore


Currently Balancer and Mover create "/system" for placing mover.id and 
balancer.id

hdfs dfs -ls /

drwxr-xr-x   - root hadoop  0 2015-09-16 12:49 {color:red}/system{color}

This folder is not deleted once the mover or balancer work is completed, so the 
user cannot create a dir named "system".

It's better to make ".system" a reserved inode for the balancer and mover 
instead of "system".






[jira] [Created] (HDFS-9034) When we remove one storage type from all the DNs, still NN UI shows entry of those storage types.

2015-09-07 Thread Archana T (JIRA)
Archana T created HDFS-9034:
---

 Summary: When we remove one storage type from all the DNs, still 
NN UI shows entry of those storage types.
 Key: HDFS-9034
 URL: https://issues.apache.org/jira/browse/HDFS-9034
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Surendra Singh Lilhore


When we remove one storage type from all the DNs, the NN UI still shows an entry 
for that storage type --

Ex:for ARCHIVE
Steps--
1. ARCHIVE Storage type was added for all DNs
2. Stop DNs
3. Removed ARCHIVE Storages from all DNs
4. Restarted DNs

NN UI shows below --

DFS Storage Types
Storage Type Configured Capacity Capacity Used Capacity Remaining 
{color:red}ARCHIVE ()   
() {color}






[jira] [Created] (HDFS-9033) In a metasave file, "NaN" is getting printed for cacheused%

2015-09-07 Thread Archana T (JIRA)
Archana T created HDFS-9033:
---

 Summary: In a metasave file, "NaN" is getting printed for 
cacheused%
 Key: HDFS-9033
 URL: https://issues.apache.org/jira/browse/HDFS-9033
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Brahma Reddy Battula
Priority: Minor


In the metasave file, "NaN" is printed for cacheused% --

For metasave file --
hdfs dfsadmin -metasave fnew

vi fnew
Metasave: Number of datanodes: 3
DN1:50076 IN 211378954240(196.86 GB) 2457942(2.34 MB) 0.00% 185318637568(172.59 
GB) 0(0 B) 0(0 B) {color:red}NaN% {color}0(0 B) Mon Sep 07 17:22:42

In DN report, Cache is  -
hdfs dfsadmin -report
Decommission Status : Normal
Configured Capacity: 211378954240 (196.86 GB)
DFS Used: 3121152 (2.98 MB)
Non DFS Used: 16376107008 (15.25 GB)
DFS Remaining: 194999726080 (181.61 GB)
DFS Used%: 0.00%
DFS Remaining%: 92.25%
{color:red}
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
{color}
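The mismatch above (NaN in the metasave file vs "Cache Used%: 100.00%" in the -report output) is consistent with one code path doing a plain floating-point division and the other special-casing zero capacity. A minimal Java sketch of the two computations (method and class names here are illustrative, not the actual Hadoop code):

```java
public class CachePercent {
    // Direct division: 0 * 100.0 / 0 is 0.0 / 0.0 in floating point,
    // which yields NaN -- the value seen in the metasave file.
    static double rawPercent(long used, long capacity) {
        return used * 100.0 / capacity;
    }

    // Guarded version: treat zero capacity as fully used, matching the
    // "Cache Used%: 100.00%" line in the dfsadmin -report output above.
    static double guardedPercent(long used, long capacity) {
        return capacity <= 0 ? 100.0 : used * 100.0 / capacity;
    }

    public static void main(String[] args) {
        System.out.println(rawPercent(0, 0));      // prints NaN
        System.out.println(guardedPercent(0, 0));  // prints 100.0
    }
}
```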




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9034) When we remove one storage type from all the DNs, still NN UI shows entry of those storage types.

2015-09-07 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T updated HDFS-9034:

Attachment: dfsStorage_NN_UI2.png

NN UI snapshot attached

> When we remove one storage type from all the DNs, still NN UI shows entry of 
> those storage types.
> -
>
> Key: HDFS-9034
> URL: https://issues.apache.org/jira/browse/HDFS-9034
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
> Attachments: dfsStorage_NN_UI2.png
>
>
> When we remove one storage type from all the DNs, still NN UI shows entry of 
> those storage type --
> Ex:for ARCHIVE
> Steps--
> 1. ARCHIVE Storage type was added for all DNs
> 2. Stop DNs
> 3. Removed ARCHIVE Storages from all DNs
> 4. Restarted DNs
> NN UI shows below --
> DFS Storage Types
> Storage Type Configured Capacity Capacity Used Capacity Remaining 
> {color:red}ARCHIVE () 
>   () {color}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8741) Proper error msg to be printed when invalid operation type is given to WebHDFS operations.

2015-07-08 Thread Archana T (JIRA)
Archana T created HDFS-8741:
---

 Summary: Proper error msg to be printed when invalid operation 
type is given to WebHDFS operations.
 Key: HDFS-8741
 URL: https://issues.apache.org/jira/browse/HDFS-8741
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Reporter: Archana T
Priority: Minor


When a wrong operation type is given to WebHDFS operations, the following error 
message is printed --

For ex: CREATE is called with GET instead of PUT--

HTTP/1.1 400 Bad Request
..
{"RemoteException":{"exception":"IllegalArgumentException","javaClassName":"java.lang.IllegalArgumentException","message":"Invalid
 value for webhdfs parameter \"op\": {color:red}No enum constant 
org.apache.hadoop.hdfs.web.resources.PutOpParam.Op.CREATE"}}{color}

Expected--
Valid Error message to be printed
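One way to get a clearer message is to catch the IllegalArgumentException that Enum.valueOf throws and rephrase it; a minimal sketch (the enum below is a stand-in for illustration, not the real WebHDFS parameter class):

```java
public class OpParamCheck {
    // Stand-in for the set of valid PUT operations.
    enum PutOp { CREATE, MKDIRS, RENAME, SETPERMISSION }

    // Parse the raw "op" query parameter, turning the opaque
    // "No enum constant" failure into a readable error string.
    static String parse(String op) {
        try {
            return "op=" + PutOp.valueOf(op);
        } catch (IllegalArgumentException e) {
            return op + " is not a valid PUT operation.";
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("CREATE")); // prints "op=CREATE"
        System.out.println(parse("OPEN"));   // a GET op sent as PUT: readable error
    }
}
```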




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8625) count with -h option displays namespace quota in human readable format

2015-06-21 Thread Archana T (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14595399#comment-14595399
 ] 

Archana T commented on HDFS-8625:
-

Thanks [~aw] for looking into this issue.
{quote}
If there's a bug here, it's that one billion, it should be displaying B and 
instead of G, but that's pretty minor.
{quote}

When I test with the below values, I observe that it displays G instead of B --
{noformat}
 ./hdfs dfsadmin -setQuota 1099511000 /dir123

 ./hdfs dfs -count -v -h -q /dir123
   QUOTA   REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTADIR_COUNT   
FILE_COUNT   CONTENT_SIZE PATHNAME
   1.0 G   1.0 Gnone inf1   
 0  0 /dir123

 ./hdfs dfsadmin -setQuota 1099511627776 /dir123

 ./hdfs dfs -count -v -h -q /dir123
   QUOTA   REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTADIR_COUNT   
FILE_COUNT   CONTENT_SIZE PATHNAME
 1 T   1.0 Tnone inf1   
 0  0 /dir123

{noformat}
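For reference, the outputs above line up with 1024-based formatting: 1,099,511,000 is just over 1024^3 (1,073,741,824), so it renders as "1.0 G", and 1,099,511,627,776 is exactly 1024^4, so it renders as "1 T". A small sketch of such a formatter (illustrative only, not Hadoop's actual StringUtils implementation):

```java
import java.util.Locale;

public class HumanReadable {
    static final String[] PREFIX = {"", "K", "M", "G", "T", "P", "E"};

    // Walk up 1024-based units; exact multiples print without a decimal
    // ("1 M"), everything else keeps one decimal place ("1.0 G").
    static String format(long value) {
        int i = 0;
        long unit = 1;
        while (i + 1 < PREFIX.length && value / unit >= 1024) {
            unit *= 1024;
            i++;
        }
        if (value % unit == 0) {
            return value / unit + (i == 0 ? "" : " " + PREFIX[i]);
        }
        return String.format(Locale.ROOT, "%.1f %s", (double) value / unit, PREFIX[i]);
    }

    public static void main(String[] args) {
        System.out.println(format(1048576L));       // prints "1 M"  (the HDFS-8625 quota)
        System.out.println(format(1099511000L));    // prints "1.0 G" (just above 1024^3)
        System.out.println(format(1099511627776L)); // prints "1 T"  (exactly 1024^4)
    }
}
```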

 count with -h option displays namespace quota in human readable format
 --

 Key: HDFS-8625
 URL: https://issues.apache.org/jira/browse/HDFS-8625
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Archana T
Assignee: Surendra Singh Lilhore
 Attachments: HDFS-8625.patch


 When 'count' command is executed with '-h' option , namespace quota is 
 displayed in human readable format --
 Example :
 hdfs dfsadmin -setQuota {color:red}1048576{color} /test
 hdfs dfs -count -q -h -v /test
{color:red}QUOTA   REM_QUOTA{color} SPACE_QUOTA 
 REM_SPACE_QUOTADIR_COUNT   FILE_COUNT   CONTENT_SIZE PATHNAME
  {color:red}1 M   1.0 M{color}none 
 inf10  0 /test
 QUOTA and REM_QUOTA shows 1 M (human readable format) which actually should 
 give count value 1048576



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8625) count with -h option displays namespace quota in human readable format

2015-06-18 Thread Archana T (JIRA)
Archana T created HDFS-8625:
---

 Summary: count with -h option displays namespace quota in human 
readable format
 Key: HDFS-8625
 URL: https://issues.apache.org/jira/browse/HDFS-8625
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Archana T
Assignee: surendra singh lilhore


When the 'count' command is executed with the '-h' option, the namespace quota 
is displayed in human readable format --

Example :

hdfs dfsadmin -setQuota {color:red}1048576{color} /test

hdfs dfs -count -q -h -v /test
   {color:red}QUOTA   REM_QUOTA{color} SPACE_QUOTA REM_SPACE_QUOTA  
  DIR_COUNT   FILE_COUNT   CONTENT_SIZE PATHNAME
 {color:red}1 M   1.0 M{color}none inf  
  10  0 /test

QUOTA and REM_QUOTA show 1 M (human readable format) when they should actually 
show the raw count value 1048576



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8505) Truncate should not be success when Truncate Size and Current Size are equal.

2015-06-01 Thread Archana T (JIRA)
Archana T created HDFS-8505:
---

 Summary: Truncate should not be success when Truncate Size and 
Current Size are equal.
 Key: HDFS-8505
 URL: https://issues.apache.org/jira/browse/HDFS-8505
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Priority: Minor



Truncate should not succeed when the truncate size and current size are equal.

$ ./hdfs dfs -cat /file
abcdefgh

$ ./hdfs dfs -truncate -w 2 /file
Waiting for /file ...
Truncated /file to length: 2

$ ./hdfs dfs -cat /file
ab

{color:red}
$ ./hdfs dfs -truncate -w 2 /file
Truncated /file to length: 2
{color}

$ ./hdfs dfs -cat /file
ab

Expected: Truncate should throw an error:
-truncate: Cannot truncate to a larger file size. Current size: 2, truncate 
size: 2
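The expected behaviour can be sketched as a simple pre-check that rejects a truncate to a length greater than or equal to the current length (a hypothetical helper for illustration, not HDFS internals):

```java
public class TruncateCheck {
    // Returns null when the truncate is allowed, otherwise the error
    // message the reporter expects, mirroring the -truncate output above.
    static String validate(long currentSize, long truncateSize) {
        if (truncateSize >= currentSize) {
            return "Cannot truncate to a larger file size. Current size: "
                    + currentSize + ", truncate size: " + truncateSize;
        }
        return null; // strictly smaller: ok to truncate
    }

    public static void main(String[] args) {
        System.out.println(validate(8, 2)); // prints null: 8-byte file to 2 bytes is fine
        System.out.println(validate(2, 2)); // prints the error, per the expectation above
    }
}
```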



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8490) Typo in trace enabled log in WebHDFS exception handler

2015-05-28 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T updated HDFS-8490:

Status: Patch Available  (was: Open)

 Typo in trace enabled log in WebHDFS exception handler
 --

 Key: HDFS-8490
 URL: https://issues.apache.org/jira/browse/HDFS-8490
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Reporter: Jakob Homan
Assignee: Archana T
Priority: Trivial
  Labels: newbie
 Attachments: HDFS-8490.patch


 /hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java:
 {code}  static DefaultFullHttpResponse exceptionCaught(Throwable cause) {
 Exception e = cause instanceof Exception ? (Exception) cause : new 
 Exception(cause);
 if (LOG.isTraceEnabled()) {
   LOG.trace("GOT EXCEPITION", e);
 }{code}
 EXCEPITION is a typo.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8490) Typo in trace enabled log in WebHDFS exception handler

2015-05-28 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T updated HDFS-8490:

Attachment: HDFS-8490.patch

Kindly review

 Typo in trace enabled log in WebHDFS exception handler
 --

 Key: HDFS-8490
 URL: https://issues.apache.org/jira/browse/HDFS-8490
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Reporter: Jakob Homan
Assignee: Archana T
Priority: Trivial
  Labels: newbie
 Attachments: HDFS-8490.patch


 /hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java:
 {code}  static DefaultFullHttpResponse exceptionCaught(Throwable cause) {
 Exception e = cause instanceof Exception ? (Exception) cause : new 
 Exception(cause);
 if (LOG.isTraceEnabled()) {
   LOG.trace("GOT EXCEPITION", e);
 }{code}
 EXCEPITION is a typo.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8490) Typo in trace enabled log in WebHDFS exception handler

2015-05-27 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T reassigned HDFS-8490:
---

Assignee: Archana T

 Typo in trace enabled log in WebHDFS exception handler
 --

 Key: HDFS-8490
 URL: https://issues.apache.org/jira/browse/HDFS-8490
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Reporter: Jakob Homan
Assignee: Archana T
Priority: Trivial
  Labels: newbie

 /hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java:
 {code}  static DefaultFullHttpResponse exceptionCaught(Throwable cause) {
 Exception e = cause instanceof Exception ? (Exception) cause : new 
 Exception(cause);
 if (LOG.isTraceEnabled()) {
   LOG.trace("GOT EXCEPITION", e);
 }{code}
 EXCEPITION is a typo.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8465) Mover is success even when space exceeds storage quota.

2015-05-22 Thread Archana T (JIRA)
Archana T created HDFS-8465:
---

 Summary: Mover is success even when space exceeds storage quota.
 Key: HDFS-8465
 URL: https://issues.apache.org/jira/browse/HDFS-8465
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Reporter: Archana T
Assignee: surendra singh lilhore



*Steps :*
1. Create directory /dir 
2. Set its storage policy to HOT --
hdfs storagepolicies -setStoragePolicy -path /dir -policy HOT

3. Insert files of total size 10,000B  into /dir.
4. Set above path /dir ARCHIVE type quota to 5,000B --
hdfs dfsadmin -setSpaceQuota 5000 -storageType ARCHIVE /dir
{code}
hdfs dfs -count -v -q -h -t  /dir
   DISK_QUOTAREM_DISK_QUOTA SSD_QUOTA REM_SSD_QUOTA ARCHIVE_QUOTA 
REM_ARCHIVE_QUOTA PATHNAME
 none   inf  none   inf 4.9 K   
  4.9 K /dir
{code}
5. Now change policy of '/dir' to COLD
6. Execute Mover command

*Observations:*
1. Mover is successful moving all 10,000B to ARCHIVE datapath.

2. Count command displays negative value '-59.4K'--
{code}
hdfs dfs -count -v -q -h -t  /dir
   DISK_QUOTAREM_DISK_QUOTA SSD_QUOTA REM_SSD_QUOTA ARCHIVE_QUOTA 
REM_ARCHIVE_QUOTA PATHNAME
 none   inf  none   inf 4.9 K   
-59.4 K /dir
{code}
*Expected:*
Mover should not succeed, as the ARCHIVE quota is only 5,000B.
A negative value should not be displayed in the quota output.
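The expected check can be sketched as follows; this is an illustrative model of per-storage-type quota enforcement, not the actual Mover code:

```java
public class ArchiveQuotaCheck {
    final long quota; // configured ARCHIVE space quota, e.g. 5000B
    long used;        // bytes already placed on ARCHIVE storage

    ArchiveQuotaCheck(long quota) {
        this.quota = quota;
    }

    // Only allow a move if it keeps usage within quota; otherwise the
    // remaining quota (quota - used) goes negative, as in the
    // "-59.4 K" REM_ARCHIVE_QUOTA output above.
    boolean tryMove(long blockSize) {
        if (used + blockSize > quota) {
            return false; // reject: would exceed the storage-type quota
        }
        used += blockSize;
        return true;
    }

    public static void main(String[] args) {
        ArchiveQuotaCheck q = new ArchiveQuotaCheck(5000);
        System.out.println(q.tryMove(4000)); // prints true
        System.out.println(q.tryMove(6000)); // prints false: 10,000B exceeds 5,000B
    }
}
```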



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8389) Unhandled exception thrown when hadoop.rpc.protection is privacy in hdfs and in hbase it is authentication

2015-05-13 Thread Archana T (JIRA)
Archana T created HDFS-8389:
---

 Summary: Unhandled exception thrown when hadoop.rpc.protection 
is privacy in hdfs and in hbase it is authentication
 Key: HDFS-8389
 URL: https://issues.apache.org/jira/browse/HDFS-8389
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Archana T
Assignee: surendra singh lilhore
Priority: Minor


Unhandled exception thrown when hadoop.rpc.protection is "privacy" in HDFS 
and "authentication" in HBase


2015-05-13 22:40:18,772 | FATAL | master:51-196-28-1:21300 | Master server 
abort: loaded coprocessors are: [org.apache.hadoop.hbase.JMXListener] | 
org.apache.hadoop.hbase.master.HMaster.abort(HMaster.java:2279)
2015-05-13 22:40:18,773 | FATAL | master:51-196-28-1:21300 | Unhandled 
exception. Starting shutdown. | 
org.apache.hadoop.hbase.master.HMaster.abort(HMaster.java:2284)
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.sortLocatedBlocks(DatanodeManager.java:375)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1631)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:500)
at



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8388) Time and Date format need to be in sync in Namenode UI page

2015-05-13 Thread Archana T (JIRA)
Archana T created HDFS-8388:
---

 Summary: Time and Date format need to be in sync in Namenode UI 
page
 Key: HDFS-8388
 URL: https://issues.apache.org/jira/browse/HDFS-8388
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: surendra singh lilhore
Priority: Minor


In the NameNode UI page, the date and time formats displayed are currently not 
in sync.

Started:Wed May 13 12:28:02 IST 2015

Compiled:23 Apr 2015 12:22:59 

Block Deletion Start Time   13 May 2015 12:28:02

We can keep a common format in all the above places.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8173) NPE thrown at DataNode startup.

2015-04-17 Thread Archana T (JIRA)
Archana T created HDFS-8173:
---

 Summary: NPE thrown at DataNode startup.
 Key: HDFS-8173
 URL: https://issues.apache.org/jira/browse/HDFS-8173
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Archana T
Assignee: surendra singh lilhore
Priority: Minor


NPE thrown at Datanode startup --

{code}
2015-04-17 17:37:01,069 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Exception shutting down DataNode HttpServer
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:1703)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.&lt;init&gt;(DataNode.java:433)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2392)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2279)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2326)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2503)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2527)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8148) NPE thrown at Namenode startup.

2015-04-15 Thread Archana T (JIRA)
Archana T created HDFS-8148:
---

 Summary:  NPE thrown at Namenode startup.
 Key: HDFS-8148
 URL: https://issues.apache.org/jira/browse/HDFS-8148
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Archana T
Assignee: surendra singh lilhore
Priority: Minor


At Namenode startup, NPE thrown when unsupported config parameter configured in 
hdfs-site.xml 

{code}
2015-04-15 10:43:59,880 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: 
Failed to start namenode.
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1219)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.close(FSNamesystem.java:1540)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.&lt;init&gt;(FSNamesystem.java:841)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:669)

{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8092) dfs -count -q should not consider snapshots under REM_QUOTA

2015-04-14 Thread Archana T (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493693#comment-14493693
 ] 

Archana T commented on HDFS-8092:
-

Hi [~aw]
The first two columns of the hdfs count cmd, {{QUOTA}} and {{REM_QUOTA}}, 
refer to the name quota.
The issue I observed is with the name quota, not the space quota. AFAIK, 
{{REM_QUOTA}} should not go negative.
I think the name quota is not considered for snapshot creation; according to 
HDFS-4091, the max number of snapshots created for a folder is 65K.


 dfs -count -q should not consider snapshots under REM_QUOTA
 ---

 Key: HDFS-8092
 URL: https://issues.apache.org/jira/browse/HDFS-8092
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots, tools
Reporter: Archana T
Assignee: Rakesh R
Priority: Minor

 dfs -count -q should not consider snapshots under Remaining quota
 List of Operations performed-
 1. hdfs dfs -mkdir /Dir1
 2. hdfs dfsadmin -setQuota 2 /Dir1
 3. hadoop fs -count -q -h -v /Dir1
  
QUOTA   {color:red} REM_QUOTA{color}  SPACE_QUOTA 
 REM_SPACE_QUOTADIR_COUNT   FILE_COUNT   CONTENT_SIZE PATHNAME
2   {color:red} 1 {color}none 
 inf10  0 /Dir1
 4. hdfs dfs -put hdfs /Dir1/f1
 5. hadoop fs -count -q -h -v /Dir1
  QUOTA   {color:red} REM_QUOTA{color}  SPACE_QUOTA 
 REM_SPACE_QUOTADIR_COUNT   FILE_COUNT   CONTENT_SIZE PATHNAME
2  {color:red}  0{color} none 
 inf11 11.4 K /Dir1
 6. hdfs dfsadmin -allowSnapshot /Dir1
 7. hdfs dfs -createSnapshot /Dir1
 8. hadoop fs -count -q -h -v /Dir1
  QUOTA   {color:red} REM_QUOTA{color}  SPACE_QUOTA REM_SPACE_QUOTA
 DIR_COUNT   FILE_COUNT   CONTENT_SIZE PATHNAME
2 {color:red}  -1 {color}none 
 inf21 11.4 K /Dir1
 Whenever snapshots created the value of REM_QUOTA gets decremented.
 When creation of snaphots are not considered under quota of that respective 
 dir then dfs -count should not decrement REM_QUOTA value



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8111) NPE thrown when invalid FSImage filename given for hdfs oiv_legacy cmd

2015-04-09 Thread Archana T (JIRA)
Archana T created HDFS-8111:
---

 Summary: NPE thrown when invalid FSImage filename given for hdfs 
oiv_legacy cmd
 Key: HDFS-8111
 URL: https://issues.apache.org/jira/browse/HDFS-8111
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Reporter: Archana T
Assignee: surendra singh lilhore
Priority: Minor


NPE thrown when an invalid filename is given as the argument for the hdfs 
oiv_legacy command

{code}
./hdfs oiv_legacy -i 
/home/hadoop/hadoop/hadoop-3.0.0/dfs/name/current/fsimage_00042 -o 
fsimage.txt 
Exception in thread main java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer.go(OfflineImageViewer.java:140)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer.main(OfflineImageViewer.java:260)
{code}
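A hedged sketch of the kind of argument validation that would avoid the NPE by reporting a missing input file up front (class and method names here are hypothetical, not the real OfflineImageViewer code):

```java
import java.io.File;

public class ImageArgCheck {
    // Validate the -i argument before processing; return an error
    // message for a missing or non-regular file, null when usable.
    static String check(String inputPath) {
        File f = new File(inputPath);
        if (!f.exists() || !f.isFile()) {
            return "Input file " + inputPath + " not found.";
        }
        return null; // file exists and is a regular file
    }

    public static void main(String[] args) {
        // A path that does not exist yields a readable message, not an NPE.
        System.out.println(check("/no/such/dir/fsimage_00042"));
    }
}
```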



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8092) dfs -count -q should not consider snapshots under REM_QUOTA

2015-04-08 Thread Archana T (JIRA)
Archana T created HDFS-8092:
---

 Summary: dfs -count -q should not consider snapshots under 
REM_QUOTA
 Key: HDFS-8092
 URL: https://issues.apache.org/jira/browse/HDFS-8092
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots, tools
Reporter: Archana T
Priority: Minor


dfs -count -q should not consider snapshots under Remaining quota

List of Operations performed-
1. hdfs dfs -mkdir /Dir1
2. hdfs dfsadmin -setQuota 2 /Dir1
3. hadoop fs -count -q -h -v /Dir1
 
   QUOTA   {color:red} REM_QUOTA{color}  SPACE_QUOTA 
REM_SPACE_QUOTADIR_COUNT   FILE_COUNT   CONTENT_SIZE PATHNAME
   2   {color:red} 1 {color}none 
inf10  0 /Dir1

4. hdfs dfs -put hdfs /Dir1/f1
5. hadoop fs -count -q -h -v /Dir1
 QUOTA   {color:red} REM_QUOTA{color}  SPACE_QUOTA REM_SPACE_QUOTA  
  DIR_COUNT   FILE_COUNT   CONTENT_SIZE PATHNAME
   2  {color:red}  0{color} none 
inf11 11.4 K /Dir1
6. hdfs dfsadmin -allowSnapshot /Dir1
7. hdfs dfs -createSnapshot /Dir1
8. hadoop fs -count -q -h -v /Dir1

 QUOTA   {color:red} REM_QUOTA{color}  SPACE_QUOTA REM_SPACE_QUOTA
DIR_COUNT   FILE_COUNT   CONTENT_SIZE PATHNAME
   2 {color:red}  -1 {color}none 
inf21 11.4 K /Dir1

Whenever a snapshot is created, the value of REM_QUOTA gets decremented.

When snapshot creation is not counted against the quota of the respective dir, 
dfs -count should not decrement the REM_QUOTA value
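The REM_QUOTA values in steps 3, 5, and 8 follow from treating the snapshot as one more counted directory; a minimal sketch of that arithmetic (illustrative only, not the NameNode's quota code):

```java
public class NameQuota {
    // REM_QUOTA as derivable from the count output:
    // name quota minus counted directories and files.
    static long remQuota(long quota, long dirCount, long fileCount) {
        return quota - dirCount - fileCount;
    }

    public static void main(String[] args) {
        System.out.println(remQuota(2, 1, 0)); // step 3: prints 1
        System.out.println(remQuota(2, 1, 1)); // step 5: prints 0
        System.out.println(remQuota(2, 2, 1)); // step 8: prints -1 after createSnapshot
    }
}
```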



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7736) [HDFS]Few Command print incorrect command usage

2015-02-04 Thread Archana T (JIRA)
Archana T created HDFS-7736:
---

 Summary: [HDFS]Few Command print incorrect command usage
 Key: HDFS-7736
 URL: https://issues.apache.org/jira/browse/HDFS-7736
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.6.0
Reporter: Archana T
Assignee: Brahma Reddy Battula
Priority: Trivial


Scenario --
Try the following hdfs commands --

1. 
# ./hdfs dfsadmin -getStoragePolicy
Usage:*{color:red} java DFSAdmin {color}*[-getStoragePolicy path]

Expected- 
Usage:*{color:green} hdfs dfsadmin {color}*[-getStoragePolicy path]

2.
# ./hdfs dfsadmin -setStoragePolicy
Usage:*{color:red} java DFSAdmin {color}*[-setStoragePolicy path policyName]

Expected- 
Usage:*{color:green} hdfs dfsadmin {color}*[-setStoragePolicy path policyName]

3.
# ./hdfs fsck
Usage:*{color:red} DFSck &lt;path&gt; {color}*[-list-corruptfileblocks | [-move | 
-delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]

Expected- 
Usage:*{color:green} hdfs fsck &lt;path&gt; {color}*[-list-corruptfileblocks | [-move 
| -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]

4.
# ./hdfs snapshotDiff
Usage:
*{color:red}SnapshotDiff{color}* &lt;snapshotDir&gt; &lt;from&gt; &lt;to&gt;:

Expected- 
Usage:
*{color:green}snapshotDiff{color}* &lt;snapshotDir&gt; &lt;from&gt; &lt;to&gt;:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5687) Problem in accessing NN JSP page

2013-12-22 Thread Archana T (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13855437#comment-13855437
 ] 

Archana T commented on HDFS-5687:
-

A similar issue exists while browsing tail.jsp, browseBlock.jsp, and 
block_info_xml.jsp from the NN UI

 Problem in accessing NN JSP page
 

 Key: HDFS-5687
 URL: https://issues.apache.org/jira/browse/HDFS-5687
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.3.0
Reporter: sathish
Priority: Minor

 In NN UI page After clicking the browse File System page,from that page,if 
 you click GO Back TO DFS HOME ICon it is not accessing the dfshealth.jsp page
 NN http URL is http://nnaddr///nninfoaddr/dfshealth.jsp,it is coming like 
 this,due to this i think it is not browsing that page
 It should be http://nninfoaddr/dfshealth.jsp/ like this



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)