[jira] [Created] (HDFS-7805) NameNode recovery prompt should be printed on console

2015-02-16 Thread surendra singh lilhore (JIRA)
surendra singh lilhore created HDFS-7805:


 Summary: NameNode recovery prompt should be printed on console
 Key: HDFS-7805
 URL: https://issues.apache.org/jira/browse/HDFS-7805
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: surendra singh lilhore
Assignee: surendra singh lilhore


In my cluster the root logger is not the console, so when I run the namenode 
recovery tool, the MetaRecoveryContext.java prompt message is written to the 
log file. It should instead be displayed on the console.

Currently it is like this

{code}
LOG.info(prompt);
{code}
It should be 
{code}
System.err.print(prompt);
{code}
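The behavioral difference can be sketched as follows; the class and method names here are illustrative stand-ins, not the actual MetaRecoveryContext code:

```java
// Hypothetical sketch: a prompt routed through the logger goes wherever the
// root logger is bound (a file, in the reporter's cluster), while a direct
// write to stderr always reaches the operator's console.
public class RecoveryPrompt {
    /** Writes the prompt to stderr and returns it, mimicking the proposed fix. */
    static String prompt(String message) {
        System.err.print(message);  // visible on the console regardless of log4j config
        return message;
    }

    public static void main(String[] args) {
        prompt("Continue? (Y/N) ");
    }
}
```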

NameNode recovery prompt should be printed on console



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6662) [ UI ] Not able to open file from UI if file path contains %

2015-02-16 Thread Gerson Carlos (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323016#comment-14323016
 ] 

Gerson Carlos commented on HDFS-6662:
-

I've uploaded the third version of the patch with [~ajisakaa]'s suggestions.

Just let me know if you have any more tips.

 [ UI ] Not able to open file from UI if file path contains %
 --

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name is like 1%2%3%4.
 2. Browse the file using the NameNode UI.
 The following exception is thrown:
 Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS
 HBase writes its WAL file data in HDFS with % characters in the file names, 
 e.g.: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
  
 The above file cannot be opened in the UI.
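The failure mode above can be illustrated with a small sketch; this is hypothetical helper code, not the actual NameNode UI (which is JavaScript), but it shows the percent-encoding a path component needs before being embedded in a WebHDFS URL:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: a '%' in a file name must itself be percent-encoded
// ("%25") before the name goes into a WebHDFS URL, otherwise a name like
// "1%2%3%4" is misinterpreted as a sequence of URL escape codes.
public class PathEncoding {
    static String encodeComponent(String name) {
        // URLEncoder targets form encoding (spaces become '+'), which is close
        // enough here to demonstrate the '%' -> "%25" translation.
        return URLEncoder.encode(name, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(encodeComponent("1%2%3%4"));  // 1%252%253%254
    }
}
```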





[jira] [Updated] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-02-16 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7804:
---
Attachment: HDFS-7804-002.patch

 haadmin command usage #HDFSHighAvailabilityWithQJM.html
 ---

 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7804-002.patch, HDFS-7804.patch


  *Currently it's given like the following:* 
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
  *Expected:* 
  *{color:green}hdfs haadmin{color}* 





[jira] [Updated] (HDFS-7805) NameNode recovery prompt should be printed on console

2015-02-16 Thread surendra singh lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

surendra singh lilhore updated HDFS-7805:
-
Attachment: HDFS-7805.patch

 NameNode recovery prompt should be printed on console
 -

 Key: HDFS-7805
 URL: https://issues.apache.org/jira/browse/HDFS-7805
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: surendra singh lilhore
Assignee: surendra singh lilhore
 Attachments: HDFS-7805.patch


 In my cluster the root logger is not the console, so when I run the namenode 
 recovery tool, the MetaRecoveryContext.java prompt message is written to the 
 log file. It should instead be displayed on the console.
 Currently it is like this
 {code}
 LOG.info(prompt);
 {code}
 It should be 
 {code}
 System.err.print(prompt);
 {code}
 NameNode recovery prompt should be printed on console





[jira] [Commented] (HDFS-5356) MiniDFSCluster should close all open FileSystems when shutdown()

2015-02-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14322909#comment-14322909
 ] 

Hadoop QA commented on HDFS-5356:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699087/HDFS-5356-2.patch
  against trunk revision ab0b958.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
org.apache.hadoop.hdfs.server.namenode.TestParallelImageWrite

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9589//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9589//console

This message is automatically generated.

 MiniDFSCluster should close all open FileSystems when shutdown()
 ---

 Key: HDFS-5356
 URL: https://issues.apache.org/jira/browse/HDFS-5356
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.2.0
Reporter: haosdent
Assignee: Rakesh R
Priority: Critical
 Attachments: HDFS-5356-1.patch, HDFS-5356-2.patch, HDFS-5356.patch


 After adding some metrics functions to DFSClient, I found that some unit 
 tests related to metrics failed. Because MiniDFSCluster never closes its open 
 FileSystems, DFSClients remain alive after MiniDFSCluster shutdown(). The 
 metrics of those DFSClients still exist in DefaultMetricsSystem, and this 
 makes other unit tests fail.





[jira] [Commented] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-02-16 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323059#comment-14323059
 ] 

Brahma Reddy Battula commented on HDFS-7804:


Hi [~umamaheswararao],

Thanks a lot for taking a look at this issue.

{quote}
why do we need to add hdfs also?
{quote}

As the command is invoked as "hdfs haadmin", I had written it like that... but 
"hdfs" might not be required here. Thanks again for correcting.

 haadmin command usage #HDFSHighAvailabilityWithQJM.html
 ---

 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7804-002.patch, HDFS-7804.patch


  *Currently it's given like the following:* 
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
  *Expected:* 
  *{color:green}hdfs haadmin{color}* 





[jira] [Commented] (HDFS-7731) Can not start HA namenode with security enabled

2015-02-16 Thread surendra singh lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323066#comment-14323066
 ] 

surendra singh lilhore commented on HDFS-7731:
--

Hi, 

If you are getting an UNKNOWN_SERVER exception, it means that one of the hdfs 
principals is not available in your Kerberos database.

Please check that all the principals are available in your Kerberos database.

One more thing: the SPNEGO principal should be HTTP/_h...@bgdt.dev.hrb

 Can not start HA namenode with security enabled
 ---

 Key: HDFS-7731
 URL: https://issues.apache.org/jira/browse/HDFS-7731
 Project: Hadoop HDFS
  Issue Type: Task
  Components: ha, journal-node, namenode, security
Affects Versions: 2.5.2
 Environment: Redhat6.2 Hadoop2.5.2
Reporter: donhoff_h
  Labels: hadoop, security

 I am converting a secure non-HA cluster into a secure HA cluster. After 
 finishing the configuration and starting all the journalnodes, I executed the 
 following commands on the original NameNode:
 1. hdfs namenode -initializeSharedEdits   #this step succeeded
 2. hadoop-daemon.sh start namenode  # this step failed.
 So the namenode cannot be started. I verified that my principals are right, 
 and if I change back to the secure non-HA mode, the namenode can be started.
 The namenode log only reported the following errors, and I could not find the 
 cause from this log:
 2015-02-03 17:42:06,020 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
 Start loading edits file 
 http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3,
  
 http://bgdt01.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3
 2015-02-03 17:42:06,024 INFO 
 org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
 stream 
 'http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3,
  
 http://bgdt01.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3'
  to transaction ID 68994
 2015-02-03 17:42:06,024 INFO 
 org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
 stream 
 'http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3'
  to transaction ID 68994
 2015-02-03 17:42:06,154 ERROR 
 org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: caught exception 
 initializing 
 http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3
 java.io.IOException: 
 org.apache.hadoop.security.authentication.client.AuthenticationException: 
 GSSException: No valid credentials provided (Mechanism level: Server not 
 found in Kerberos database (7) - UNKNOWN_SERVER)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:464)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:456)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
   at 
 org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:444)
   at 
 org.apache.hadoop.security.SecurityUtil.doAsCurrentUser(SecurityUtil.java:438)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog.getInputStream(EditLogFileInputStream.java:455)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.init(EditLogFileInputStream.java:141)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:192)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:250)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
   at 
 org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
   at 
 

[jira] [Commented] (HDFS-7797) Add audit log for setQuota operation

2015-02-16 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323005#comment-14323005
 ] 

Uma Maheswara Rao G commented on HDFS-7797:
---

{quote}
 } finally {
+  logAuditEvent(success, "setQuota", src);
   writeUnlock();
 }
 getEditLog().logSync();
{quote}
I think we should say success only after logSync.
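The suggestion above can be sketched as follows; all names here are stand-ins for the real FSNamesystem collaborators, not the actual patch:

```java
// Hypothetical sketch of the review comment: report success in the audit log
// only after the edit has been durably synced, so the audit entry never claims
// success for an operation whose edit-log record was lost before logSync().
public class SetQuotaAudit {
    static String logAuditEvent(boolean success, String cmd, String src) {
        return "allowed=" + success + " cmd=" + cmd + " src=" + src;
    }

    static void logSync() { /* sync the edit log durably; a no-op in this sketch */ }

    static String setQuota(String src) {
        boolean success = false;
        try {
            // ... writeLock(), apply the quota, write the edit-log op ...
            success = true;
        } finally {
            // writeUnlock();  // release the lock here, but do not audit yet
        }
        logSync();                                      // persist the edit first
        return logAuditEvent(success, "setQuota", src); // only then record the audit event
    }

    public static void main(String[] args) {
        System.out.println(setQuota("/dir"));
    }
}
```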

 Add audit log for setQuota operation
 

 Key: HDFS-7797
 URL: https://issues.apache.org/jira/browse/HDFS-7797
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: 001-HDFS-7797.patch


 SetQuota operation should be included in audit log.





[jira] [Commented] (HDFS-7805) NameNode recovery prompt should be printed on console

2015-02-16 Thread surendra singh lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323026#comment-14323026
 ] 

surendra singh lilhore commented on HDFS-7805:
--

I have attached a patch; could someone please review it?
Thanks in advance. 


 NameNode recovery prompt should be printed on console
 -

 Key: HDFS-7805
 URL: https://issues.apache.org/jira/browse/HDFS-7805
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: surendra singh lilhore
Assignee: surendra singh lilhore
 Attachments: HDFS-7805.patch


 In my cluster the root logger is not the console, so when I run the namenode 
 recovery tool, the MetaRecoveryContext.java prompt message is written to the 
 log file. It should instead be displayed on the console.
 Currently it is like this
 {code}
 LOG.info(prompt);
 {code}
 It should be 
 {code}
 System.err.print(prompt);
 {code}
 NameNode recovery prompt should be printed on console





[jira] [Updated] (HDFS-7324) haadmin command usage prints incorrect command name

2015-02-16 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-7324:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Marked as resolved.

 haadmin command usage prints incorrect command name
 ---

 Key: HDFS-7324
 URL: https://issues.apache.org/jira/browse/HDFS-7324
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, tools
Affects Versions: 2.5.1
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 2.7.0

 Attachments: HDFS-7324.patch


 Scenario:
 ===
 Try the help command for haadmin like the following.
 Here the usage is shown as "DFSHAAdmin -ns"; ideally this is not an available 
 command, which we can check with the following command.
 [root@linux156 bin]#  *{color:red}./hdfs haadmin{color}* 
 No GC_PROFILE is given. Defaults to medium.
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId [--forceactive]]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
 Generic options supported are
 -conf <configuration file>     specify an application configuration file
 -D <property=value>            use value for given property
 -fs <local|namenode:port>      specify a namenode
 -jt <local|jobtracker:port>    specify a job tracker
 -files <comma separated list of files>    specify comma separated files to 
 be copied to the map reduce cluster
 -libjars <comma separated list of jars>    specify comma separated jar 
 files to include in the classpath.
 -archives <comma separated list of archives>    specify comma separated 
 archives to be unarchived on the compute machines.
 The general command line syntax is
 bin/hadoop command [genericOptions] [commandOptions]
  *{color:blue}[root@linux156 bin]# ./hdfs DFSHAAdmin -ns 100{color}*  
 Error: Could not find or load main class DFSHAAdmin





[jira] [Updated] (HDFS-7805) NameNode recovery prompt should be printed on console

2015-02-16 Thread surendra singh lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

surendra singh lilhore updated HDFS-7805:
-
Status: Patch Available  (was: Open)

 NameNode recovery prompt should be printed on console
 -

 Key: HDFS-7805
 URL: https://issues.apache.org/jira/browse/HDFS-7805
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: surendra singh lilhore
Assignee: surendra singh lilhore
 Attachments: HDFS-7805.patch


 In my cluster the root logger is not the console, so when I run the namenode 
 recovery tool, the MetaRecoveryContext.java prompt message is written to the 
 log file. It should instead be displayed on the console.
 Currently it is like this
 {code}
 LOG.info(prompt);
 {code}
 It should be 
 {code}
 System.err.print(prompt);
 {code}
 NameNode recovery prompt should be printed on console





[jira] [Commented] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-02-16 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323043#comment-14323043
 ] 

Uma Maheswara Rao G commented on HDFS-7804:
---

The exact usage information will come out like below:

Usage: haadmin .

Why do we need to add hdfs also?

 haadmin command usage #HDFSHighAvailabilityWithQJM.html
 ---

 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7804.patch


  *Currently it's given like the following:* 
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
  *Expected:* 
  *{color:green}hdfs haadmin{color}* 





[jira] [Updated] (HDFS-6662) [ UI ] Not able to open file from UI if file path contains %

2015-02-16 Thread Gerson Carlos (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gerson Carlos updated HDFS-6662:

Attachment: hdfs-6662.003.patch

 [ UI ] Not able to open file from UI if file path contains %
 --

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name is like 1%2%3%4.
 2. Browse the file using the NameNode UI.
 The following exception is thrown:
 Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS
 HBase writes its WAL file data in HDFS with % characters in the file names, 
 e.g.: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
  
 The above file cannot be opened in the UI.





[jira] [Commented] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-02-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323097#comment-14323097
 ] 

Hadoop QA commented on HDFS-7804:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699141/HDFS-7804-002.patch
  against trunk revision 814afa4.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9592//console

This message is automatically generated.

 haadmin command usage #HDFSHighAvailabilityWithQJM.html
 ---

 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7804-002.patch, HDFS-7804.patch


  *Currently it's given like the following:* 
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
  *Expected:* 
  *{color:green}hdfs haadmin{color}* 





[jira] [Commented] (HDFS-7797) Add audit log for setQuota operation

2015-02-16 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323095#comment-14323095
 ] 

Rakesh R commented on HDFS-7797:


Thanks [~umamaheswararao] for the review. Yes, that's correct; I'll update the 
patch soon.

 Add audit log for setQuota operation
 

 Key: HDFS-7797
 URL: https://issues.apache.org/jira/browse/HDFS-7797
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: 001-HDFS-7797.patch


 SetQuota operation should be included in audit log.





[jira] [Updated] (HDFS-7797) Add audit log for setQuota operation

2015-02-16 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-7797:
---
Attachment: 002-HDFS-7797.patch

 Add audit log for setQuota operation
 

 Key: HDFS-7797
 URL: https://issues.apache.org/jira/browse/HDFS-7797
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: 001-HDFS-7797.patch, 002-HDFS-7797.patch


 SetQuota operation should be included in audit log.





[jira] [Commented] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-02-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323122#comment-14323122
 ] 

Hadoop QA commented on HDFS-7804:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699110/HDFS-7804.patch
  against trunk revision 447bd7b.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer
  
org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9590//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9590//console

This message is automatically generated.

 haadmin command usage #HDFSHighAvailabilityWithQJM.html
 ---

 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7804-002.patch, HDFS-7804.patch


  *Currently it's given like the following:* 
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
  *Expected:* 
  *{color:green}hdfs haadmin{color}* 





[jira] [Commented] (HDFS-7805) NameNode recovery prompt should be printed on console

2015-02-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323133#comment-14323133
 ] 

Hadoop QA commented on HDFS-7805:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699135/HDFS-7805.patch
  against trunk revision 814afa4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9593//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9593//console

This message is automatically generated.

 NameNode recovery prompt should be printed on console
 -

 Key: HDFS-7805
 URL: https://issues.apache.org/jira/browse/HDFS-7805
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: surendra singh lilhore
Assignee: surendra singh lilhore
 Attachments: HDFS-7805.patch


 In my cluster the root logger is not the console, so when I run the namenode 
 recovery tool, the MetaRecoveryContext.java prompt message is written to the 
 log file. It should instead be displayed on the console.
 Currently it is like this
 {code}
 LOG.info(prompt);
 {code}
 It should be 
 {code}
 System.err.print(prompt);
 {code}
 NameNode recovery prompt should be printed on console





[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2015-02-16 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323120#comment-14323120
 ] 

Zhe Zhang commented on HDFS-7285:
-

Per the discussion above, let's officially switch to the HDFS-7285 branch now. 
We have a nightly Jenkins job to monitor all incoming changes: 
https://builds.apache.org/job/Hadoop-HDFS-7285-nightly/

 Erasure Coding Support inside HDFS
 --

 Key: HDFS-7285
 URL: https://issues.apache.org/jira/browse/HDFS-7285
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Weihua Jiang
Assignee: Zhe Zhang
 Attachments: ECAnalyzer.py, ECParser.py, 
 HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
 HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, 
 fsimage-analysis-20150105.pdf


 Erasure Coding (EC) can greatly reduce the storage overhead without 
 sacrificing data reliability, compared to the existing HDFS 3-replica 
 approach. For example, if we use a 10+4 Reed-Solomon coding, we can tolerate 
 the loss of 4 blocks with a storage overhead of only 40%. This makes EC a 
 quite attractive alternative for big data storage, particularly for cold 
 data. 
 Facebook had a related open source project called HDFS-RAID. It used to be 
 one of the contrib packages in HDFS but was removed as of Hadoop 2.0 for 
 maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and depends 
 on MapReduce to do encoding and decoding tasks; 2) it can only be used for 
 cold files that are not intended to be appended anymore; 3) its pure Java EC 
 coding implementation is extremely slow in practical use. Due to these, it 
 might not be a good idea to just bring HDFS-RAID back.
 We (Intel and Cloudera) are working on a design to build EC into HDFS that 
 gets rid of any external dependencies, making it self-contained and 
 independently maintainable. This design lays the EC feature on top of the 
 storage type support and aims to be compatible with existing HDFS features 
 like caching, snapshots, encryption, and high availability. The design will 
 also support different EC coding schemes, implementations, and policies for 
 different deployment scenarios. By utilizing advanced libraries (e.g. the 
 Intel ISA-L library), an implementation can greatly improve the performance 
 of EC encoding/decoding and make the EC solution even more attractive. We 
 will post the design document soon. 
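A quick arithmetic check of the overhead figures quoted above (a hypothetical helper, not part of the design):

```java
// Back-of-the-envelope check: a (10, 4) Reed-Solomon scheme stores 14 blocks
// for every 10 blocks of data, so the overhead is 4/10 = 40% while any 4 lost
// blocks remain recoverable, versus 200% overhead (2 extra copies) for the
// existing 3-replica approach.
public class EcOverhead {
    /** Extra storage as a fraction of the raw data size. */
    static double overhead(int dataBlocks, int parityBlocks) {
        return (double) parityBlocks / dataBlocks;
    }

    public static void main(String[] args) {
        System.out.println(overhead(10, 4));  // 0.4  (40% for RS 10+4)
        System.out.println(overhead(1, 2));   // 2.0  (200% for 3-way replication)
    }
}
```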





[jira] [Commented] (HDFS-6662) [ UI ] Not able to open file from UI if file path contains %

2015-02-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323241#comment-14323241
 ] 

Hadoop QA commented on HDFS-6662:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699134/hdfs-6662.003.patch
  against trunk revision 556386a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9591//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9591//console

This message is automatically generated.

 [ UI ] Not able to open file from UI if file path contains %
 --

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name contains %, e.g. 1%2%3%4
 2. Browse to the file using the NameNode UI; it throws the following exception:
 Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS
 HBase writes its WAL files to HDFS with % characters in the file names,
 e.g.: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
  
 The above file does not open in the UI.
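
The symptom is consistent with the literal % in the file name being treated as the start of a percent-escape when the UI builds the WebHDFS URL; escaping each % as %25 avoids that. A minimal illustration of the encoding rule using Python's standard library (the actual fix is in the NameNode UI's JavaScript; this sketch only shows the rule, not the patch):

```python
from urllib.parse import quote, unquote

name = "1%2%3%4"

# Each literal '%' must become '%25' before the name is placed in a URL.
encoded = quote(name, safe="")
assert encoded == "1%252%253%254"

# Decoding on the server side recovers the original file name.
assert unquote(encoded) == name
```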





[jira] [Commented] (HDFS-7797) Add audit log for setQuota operation

2015-02-16 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323109#comment-14323109
 ] 

Rakesh R commented on HDFS-7797:


Attached a new patch addressing [~umamaheswararao]'s comments. Please review.

 Add audit log for setQuota operation
 

 Key: HDFS-7797
 URL: https://issues.apache.org/jira/browse/HDFS-7797
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: 001-HDFS-7797.patch, 002-HDFS-7797.patch


 SetQuota operation should be included in audit log.





[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2015-02-16 Thread Vincent.Wei (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323141#comment-14323141
 ] 

Vincent.Wei commented on HDFS-7285:
---

I will be out of office for Chinese New Year from 2.15 to 2.26 and may reply 
to e-mail slowly; please call me 13764370648 for urgent matters.


 Erasure Coding Support inside HDFS
 --

 Key: HDFS-7285
 URL: https://issues.apache.org/jira/browse/HDFS-7285
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Weihua Jiang
Assignee: Zhe Zhang
 Attachments: ECAnalyzer.py, ECParser.py, 
 HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
 HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, 
 fsimage-analysis-20150105.pdf


 Erasure Coding (EC) can greatly reduce the storage overhead without 
 sacrificing data reliability, compared to the existing HDFS 3-replica 
 approach. For example, if we use a 10+4 Reed-Solomon coding, we can tolerate 
 the loss of any 4 blocks with a storage overhead of only 40%. This makes EC 
 a quite attractive alternative for big data storage, particularly for cold 
 data. 
 Facebook had a related open source project called HDFS-RAID. It used to be 
 one of the contributed packages in HDFS but was removed in Hadoop 2.0 
 for maintenance reasons. Its drawbacks were: 1) it sat on top of HDFS and 
 depended on MapReduce for encoding and decoding tasks; 2) it could only be 
 used for cold files that would not be appended to anymore; 3) its pure-Java 
 EC implementation was extremely slow in practice. For these reasons, it 
 might not be a good idea to simply bring HDFS-RAID back.
 We (Intel and Cloudera) are working on a design to build EC into HDFS that 
 gets rid of external dependencies and makes the feature self-contained and 
 independently maintained. The design layers EC on top of the storage-type 
 support and keeps it compatible with existing HDFS features such as caching, 
 snapshots, encryption, and high availability. It will also support different 
 EC coding schemes, implementations, and policies for different deployment 
 scenarios. By using optimized libraries (e.g. the Intel ISA-L library), an 
 implementation can greatly improve EC encoding/decoding performance and make 
 the EC solution even more attractive. We will post the design document soon. 
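
The overhead figures quoted in the description follow directly from the parity-to-data ratio; a minimal sketch of the arithmetic (plain Python, not Hadoop code):

```python
def ec_overhead(data_blocks, parity_blocks):
    """Storage overhead of a data+parity erasure coding scheme,
    expressed as extra bytes stored per byte of user data."""
    return parity_blocks / data_blocks

def replication_overhead(replicas):
    """Overhead of plain n-way replication."""
    return replicas - 1

# 10+4 Reed-Solomon: tolerates the loss of any 4 blocks at 40% overhead.
assert ec_overhead(10, 4) == 0.4
# 3-replica HDFS: tolerates the loss of 2 copies at 200% overhead.
assert replication_overhead(3) == 2
```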





[jira] [Updated] (HDFS-7604) Track and display failed DataNode storage locations in NameNode.

2015-02-16 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-7604:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Jitendra, thank you again for the reviews.  I have committed this to trunk and 
branch-2.

 Track and display failed DataNode storage locations in NameNode.
 

 Key: HDFS-7604
 URL: https://issues.apache.org/jira/browse/HDFS-7604
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.7.0

 Attachments: HDFS-7604-screenshot-1.png, HDFS-7604-screenshot-2.png, 
 HDFS-7604-screenshot-3.png, HDFS-7604-screenshot-4.png, 
 HDFS-7604-screenshot-5.png, HDFS-7604-screenshot-6.png, 
 HDFS-7604-screenshot-7.png, HDFS-7604.001.patch, HDFS-7604.002.patch, 
 HDFS-7604.004.patch, HDFS-7604.005.patch, HDFS-7604.006.patch, 
 HDFS-7604.prototype.patch


 During heartbeats, the DataNode can report a list of its storage locations 
 that have been taken out of service due to failure (such as due to a bad disk 
 or a permissions problem).  The NameNode can track these failed storage 
 locations and then report them in JMX and the NameNode web UI.





[jira] [Commented] (HDFS-7604) Track and display failed DataNode storage locations in NameNode.

2015-02-16 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323429#comment-14323429
 ] 

Chris Nauroth commented on HDFS-7604:
-

The test failures in the last run appear to be unrelated.  I could not repro 
locally.

 Track and display failed DataNode storage locations in NameNode.
 

 Key: HDFS-7604
 URL: https://issues.apache.org/jira/browse/HDFS-7604
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.7.0

 Attachments: HDFS-7604-screenshot-1.png, HDFS-7604-screenshot-2.png, 
 HDFS-7604-screenshot-3.png, HDFS-7604-screenshot-4.png, 
 HDFS-7604-screenshot-5.png, HDFS-7604-screenshot-6.png, 
 HDFS-7604-screenshot-7.png, HDFS-7604.001.patch, HDFS-7604.002.patch, 
 HDFS-7604.004.patch, HDFS-7604.005.patch, HDFS-7604.006.patch, 
 HDFS-7604.prototype.patch


 During heartbeats, the DataNode can report a list of its storage locations 
 that have been taken out of service due to failure (such as due to a bad disk 
 or a permissions problem).  The NameNode can track these failed storage 
 locations and then report them in JMX and the NameNode web UI.





[jira] [Updated] (HDFS-7740) Test truncate with DataNodes restarting

2015-02-16 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7740:
-
Attachment: HDFS-7740.001.patch

 Test truncate with DataNodes restarting
 ---

 Key: HDFS-7740
 URL: https://issues.apache.org/jira/browse/HDFS-7740
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7740.001.patch


 Add a test case, which ensures replica consistency when DNs are failing and 
 restarting.





[jira] [Updated] (HDFS-7740) Test truncate with DataNodes restarting

2015-02-16 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7740:
-
Status: Patch Available  (was: Open)

 Test truncate with DataNodes restarting
 ---

 Key: HDFS-7740
 URL: https://issues.apache.org/jira/browse/HDFS-7740
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7740.001.patch


 Add a test case, which ensures replica consistency when DNs are failing and 
 restarting.





[jira] [Commented] (HDFS-7740) Test truncate with DataNodes restarting

2015-02-16 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323712#comment-14323712
 ] 

Yi Liu commented on HDFS-7740:
--

Sorry for the late update.
I added tests for the above 4 scenarios. To let these tests freely control 
the number of DataNodes without affecting other tests, I use a separate 
MiniDFSCluster for them.

Some explanations of the 4 tests:
{quote}
Create file with 3 DNs up. Kill DN(0). Truncate file. Restart DN(0), make sure 
the old replica is disregarded and replaced with the truncated one.
{quote}
For non-copy-on-truncate, the new (truncated) block keeps the same block id, 
but the GS (GenerationStamp) increases. In the test, I trigger a block report 
for dn0 after it restarts; since the GS of dn0's replica of the last block is 
old, the reported last block from dn0 is marked corrupt on the NN, the 
replica count of the last block decreases by 1 on the NN, and the truncated 
block is then re-replicated to dn0. In the test, I check that the old replica 
(the block file and block metadata file) is removed and replaced with the new 
(truncated) one.

{quote}
Kill DN(1). Truncate within the same last block with copy-on-truncate. Restart 
DN(1), verify replica consistency.
{quote}
For copy-on-truncate, a new block is created with a new block id and a new 
GS. In the test, I trigger a block report for dn1 after it restarts. The new 
block has 2 replicas and is then replicated to dn1. In the test, I check that 
the new block file is replicated to dn1, and that the old replica still 
exists because there is a snapshot.

{quote}
Create a single block file with 3 replicas. Truncate mid of block and then 
immediately restart 2 of the DNs. Check the files
{quote}
In the test, I restart dn0 and dn1 immediately after the truncate and check 
that the old replica is removed and replaced with the truncated one on dn0 
and dn1.

{quote}
Same as before except completely shutting down 3 of the DNs but not restarting 
them.
{quote}
In the test, I check that the truncated block remains under construction 
after the 3 DataNodes shut down.
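
The generation-stamp reasoning in the first scenario can be sketched as a toy model (hypothetical Python, not the actual NameNode code): a reported replica carrying a stale GS for a known block id is classified as corrupt.

```python
def classify_replica(nn_block, reported):
    """Toy model of the NameNode's check on a DataNode-reported replica.
    nn_block and reported are (block_id, generation_stamp) tuples."""
    block_id, nn_gs = nn_block
    rep_id, rep_gs = reported
    if rep_id != block_id:
        return "unknown"     # different block id, e.g. copy-on-truncate
    return "corrupt" if rep_gs < nn_gs else "valid"

# Non-copy-on-truncate: same block id, NN bumped the GS, dn0 reports the old GS.
assert classify_replica((1001, 2), (1001, 1)) == "corrupt"
# After re-replication the DataNode holds the truncated replica with the new GS.
assert classify_replica((1001, 2), (1001, 2)) == "valid"
```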

 Test truncate with DataNodes restarting
 ---

 Key: HDFS-7740
 URL: https://issues.apache.org/jira/browse/HDFS-7740
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7740.001.patch


 Add a test case, which ensures replica consistency when DNs are failing and 
 restarting.





[jira] [Comment Edited] (HDFS-7740) Test truncate with DataNodes restarting

2015-02-16 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323712#comment-14323712
 ] 

Yi Liu edited comment on HDFS-7740 at 2/17/15 6:32 AM:
---

Sorry for the late update.
I added tests for the above 4 scenarios. To let these tests freely control 
the number of DataNodes without affecting other tests, I use a separate 
MiniDFSCluster for them.

Some explanations of the 4 tests:
{quote}
Create file with 3 DNs up. Kill DN(0). Truncate file. Restart DN(0), make sure 
the old replica is disregarded and replaced with the truncated one.
{quote}
For non-copy-on-truncate, the new (truncated) block keeps the same block id, 
but the GS (GenerationStamp) increases. In the test, I trigger a block report 
for dn0 after it restarts; since the GS of dn0's replica of the last block is 
old, the reported last block from dn0 is marked corrupt on the NN, the 
replica count of the last block decreases by 1 on the NN, and the truncated 
block is then re-replicated to dn0. In the test, I check that the old replica 
(the block file and block metadata file) is removed and replaced with the new 
(truncated) one.

{quote}
Kill DN(1). Truncate within the same last block with copy-on-truncate. Restart 
DN(1), verify replica consistency.
{quote}
For copy-on-truncate, a new block is created with a new block id and a new 
GS. In the test, I trigger a block report for dn1 after it restarts. The new 
block has 2 replicas and is then replicated to dn1. In the test, I check that 
the new block file is replicated to dn1, and that the old replica still 
exists because there is a snapshot.

{quote}
Create a single block file with 3 replicas. Truncate mid of block and then 
immediately restart 2 of the DNs. Check the files
{quote}
In the test, I restart dn0 and dn1 immediately after the truncate and check 
that the old replica is removed and replaced with the truncated one on dn0 
and dn1.

{quote}
Same as before except completely shutting down 3 of the DNs but not restarting 
them.
{quote}
In the test, I check that the truncated block remains under construction 
after the 3 DataNodes shut down.



 Test truncate with DataNodes restarting
 ---

 Key: HDFS-7740
 URL: https://issues.apache.org/jira/browse/HDFS-7740
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7740.001.patch


 Add a test case, which ensures replica consistency when DNs are failing and 
 restarting.





[jira] [Commented] (HDFS-7797) Add audit log for setQuota operation

2015-02-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323339#comment-14323339
 ] 

Hadoop QA commented on HDFS-7797:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699146/002-HDFS-7797.patch
  against trunk revision 814afa4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9594//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9594//console

This message is automatically generated.

 Add audit log for setQuota operation
 

 Key: HDFS-7797
 URL: https://issues.apache.org/jira/browse/HDFS-7797
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: 001-HDFS-7797.patch, 002-HDFS-7797.patch


 SetQuota operation should be included in audit log.





[jira] [Commented] (HDFS-5356) MiniDFSCluster should close all open FileSystems when shutdown()

2015-02-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323782#comment-14323782
 ] 

Hadoop QA commented on HDFS-5356:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699203/HDFS-5356-3.patch
  against trunk revision 9729b24.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestFSImageWithAcl

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9596//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9596//console

This message is automatically generated.

 MiniDFSCluster should close all open FileSystems when shutdown()
 ---

 Key: HDFS-5356
 URL: https://issues.apache.org/jira/browse/HDFS-5356
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.2.0
Reporter: haosdent
Assignee: Rakesh R
Priority: Critical
 Attachments: HDFS-5356-1.patch, HDFS-5356-2.patch, HDFS-5356-3.patch, 
 HDFS-5356.patch


 After adding some metrics functions to DFSClient, I found that some 
 metrics-related unit tests failed. Because MiniDFSCluster never closes its 
 open FileSystems, DFSClients remain alive after MiniDFSCluster.shutdown(). 
 The metrics of those DFSClients still exist in DefaultMetricsSystem, and 
 this makes other unit tests fail.





[jira] [Commented] (HDFS-7604) Track and display failed DataNode storage locations in NameNode.

2015-02-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323435#comment-14323435
 ] 

Hudson commented on HDFS-7604:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7122 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7122/])
HDFS-7604. Track and display failed DataNode storage locations in NameNode. 
Contributed by Chris Nauroth. (cnauroth: rev 
9729b244de50322c2cc889c97c2ffb2b4675cf77)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/VolumeFailureSummary.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/VolumeFailureInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/FSDatasetMBean.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStorageReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/FSNamesystemMBean.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyConsiderLoad.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java


 Track and display failed DataNode storage locations in NameNode.
 

[jira] [Commented] (HDFS-6662) [ UI ] Not able to open file from UI if file path contains %

2015-02-16 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323447#comment-14323447
 ] 

Akira AJISAKA commented on HDFS-6662:
-

+1 pending Jenkins. https://builds.apache.org/job/PreCommit-HDFS-Build/9595/

 [ UI ] Not able to open file from UI if file path contains %
 --

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name contains %, e.g. 1%2%3%4
 2. Browse to the file using the NameNode UI; it throws the following exception:
 Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS
 HBase writes its WAL files to HDFS with % characters in the file names,
 e.g.: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
  
 The above file does not open in the UI.





[jira] [Commented] (HDFS-6662) [ UI ] Not able to open file from UI if file path contains %

2015-02-16 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323446#comment-14323446
 ] 

Akira AJISAKA commented on HDFS-6662:
-

The test failure looks unrelated to the patch.

 [ UI ] Not able to open file from UI if file path contains %
 --

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name contains %, e.g. 1%2%3%4
 2. Browse to the file using the NameNode UI; it throws the following exception:
 Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS
 HBase writes its WAL files to HDFS with % characters in the file names,
 e.g.: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
  
 The above file does not open in the UI.





[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2015-02-16 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323526#comment-14323526
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7285:
---

[~zhz], thanks for setting the Jenkins job!

 Erasure Coding Support inside HDFS
 --

 Key: HDFS-7285
 URL: https://issues.apache.org/jira/browse/HDFS-7285
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Weihua Jiang
Assignee: Zhe Zhang
 Attachments: ECAnalyzer.py, ECParser.py, 
 HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
 HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, 
 fsimage-analysis-20150105.pdf


 Erasure Coding (EC) can greatly reduce the storage overhead without 
 sacrificing data reliability, compared to the existing HDFS 3-replica 
 approach. For example, if we use a 10+4 Reed-Solomon coding, we can tolerate 
 the loss of any 4 blocks with a storage overhead of only 40%. This makes EC 
 a quite attractive alternative for big data storage, particularly for cold 
 data. 
 Facebook had a related open source project called HDFS-RAID. It used to be 
 one of the contributed packages in HDFS but was removed in Hadoop 2.0 
 for maintenance reasons. Its drawbacks were: 1) it sat on top of HDFS and 
 depended on MapReduce for encoding and decoding tasks; 2) it could only be 
 used for cold files that would not be appended to anymore; 3) its pure-Java 
 EC implementation was extremely slow in practice. For these reasons, it 
 might not be a good idea to simply bring HDFS-RAID back.
 We (Intel and Cloudera) are working on a design to build EC into HDFS that 
 gets rid of external dependencies and makes the feature self-contained and 
 independently maintained. The design layers EC on top of the storage-type 
 support and keeps it compatible with existing HDFS features such as caching, 
 snapshots, encryption, and high availability. It will also support different 
 EC coding schemes, implementations, and policies for different deployment 
 scenarios. By using optimized libraries (e.g. the Intel ISA-L library), an 
 implementation can greatly improve EC encoding/decoding performance and make 
 the EC solution even more attractive. We will post the design document soon. 





[jira] [Created] (HDFS-7803) Wrong command mentioned in HDFSHighAvailabilityWithQJM documentation

2015-02-16 Thread Arshad Mohammad (JIRA)
Arshad Mohammad created HDFS-7803:
-

 Summary: Wrong command mentioned in HDFSHighAvailabilityWithQJM 
documentation
 Key: HDFS-7803
 URL: https://issues.apache.org/jira/browse/HDFS-7803
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
Priority: Minor


The command in the following section is mentioned incorrectly. It should be 
hdfs namenode -initializeSharedEdits.

HDFSHighAvailabilityWithQJM.html, Deployment details section:
{code}
If you are converting a non-HA NameNode to be HA, you should run the command 
hdfs -initializeSharedEdits, which will initialize the JournalNodes with the 
edits data from the local NameNode edits directories.
{code}





[jira] [Commented] (HDFS-7800) Improve documentation for FileSystem.concat()

2015-02-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14322715#comment-14322715
 ] 

Steve Loughran commented on HDFS-7800:
--

I think this is all covered in the filesystem spec, 
[http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html]; 
if there's something missing, can it go there? 

 Improve documentation for FileSystem.concat()
 -

 Key: HDFS-7800
 URL: https://issues.apache.org/jira/browse/HDFS-7800
 Project: Hadoop HDFS
  Issue Type: Task
Affects Versions: 2.2.0, 2.6.0
Reporter: Steve Armstrong
 Attachments: HDFS-7800-1.patch


 This is a documentation request.
 [FileSystem.concat()|https://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileSystem.html]
  says it will Concat existing files together. It seems to be a 
 Namenode-only operation though, mapping the data blocks into a single file. 
 This means:
 # The destination must exist
 # The destination must be non-empty
 # The destination must have its last block exactly full
 # All but the last of the source files must have their last block full
 # All the source files will be deleted by this operation
 HDFS-6641 brought up some of these limitations, but was closed as not a 
 problem. I think the javadoc should be improved so it's clear this function 
 was never intended to work the same as a general purpose file concatenation.
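Since concat is pure block-list surgery on the NameNode, eligibility reduces to block-boundary arithmetic. A hedged sketch (hypothetical helper, not Hadoop code) of the check for non-final sources:

```java
// Hypothetical pre-flight check mirroring the constraints listed above:
// every source except the last must end on an exact block boundary,
// because concat only re-links block lists and cannot rewrite a
// partially filled block in the middle of the result.
public class ConcatCheck {
    public static boolean canConcat(long[] sourceLengths, long blockSize) {
        for (int i = 0; i < sourceLengths.length - 1; i++) {
            if (sourceLengths[i] % blockSize != 0) {
                return false; // a partial block mid-file would leave a hole
            }
        }
        return true; // only the final source may end mid-block
    }
}
```

A javadoc improvement could state this rule explicitly so callers know concat is not a general-purpose file concatenation.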





[jira] [Commented] (HDFS-7803) Wrong command mentioned in HDFSHighAvailabilityWithQJM documentation

2015-02-16 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14322722#comment-14322722
 ] 

Brahma Reddy Battula commented on HDFS-7803:


Thanks for reporting this jira! Please attach a patch for the same.

 Wrong command mentioned in HDFSHighAvailabilityWithQJM documentation
 

 Key: HDFS-7803
 URL: https://issues.apache.org/jira/browse/HDFS-7803
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
Priority: Minor

 The command in the following section is wrong. It should be hdfs 
 namenode -initializeSharedEdits
 HDFSHighAvailabilityWithQJM.html  Deployment details
 {code}
 If you are converting a non-HA NameNode to be HA, you should run the command 
 hdfs -initializeSharedEdits, which will initialize the JournalNodes with 
 the edits data from the local NameNode edits directories.
 {code}





[jira] [Updated] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-02-16 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7804:
---
Description: 
 *Currently it's given like following* 
 *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
[-transitionToActive serviceId]
[-transitionToStandby serviceId]
[-failover [--forcefence] [--forceactive] serviceId serviceId]
[-getServiceState serviceId]
[-checkHealth serviceId]
[-help command]

 *Expected:* 

 *{color:green}hdfs hadmin{color}* 




  was:
 *Currently it's given like following* 
 *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
[-transitionToActive serviceId]
[-transitionToStandby serviceId]
[-failover [--forcefence] [--forceactive] serviceId serviceId]
[-getServiceState serviceId]
[-checkHealth serviceId]
[-help command]

Expected:

 *{color:green}hdfs hadmin{color}* 





 haadmin command usage #HDFSHighAvailabilityWithQJM.html
 ---

 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula

  *Currently it's given like following* 
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
  *Expected:* 
  *{color:green}hdfs hadmin{color}* 





[jira] [Created] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-02-16 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-7804:
--

 Summary: haadmin command usage #HDFSHighAvailabilityWithQJM.html
 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


 *Currently it's given like following* 
 *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
[-transitionToActive serviceId]
[-transitionToStandby serviceId]
[-failover [--forcefence] [--forceactive] serviceId serviceId]
[-getServiceState serviceId]
[-checkHealth serviceId]
[-help command]

Expected:

 *{color:green}hdfs hadmin{color}* 








[jira] [Commented] (HDFS-6662) [ UI ] Not able to open file from UI if file path contains %

2015-02-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323592#comment-14323592
 ] 

Hadoop QA commented on HDFS-6662:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699134/hdfs-6662.003.patch
  against trunk revision 9729b24.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.web.TestWebHdfsTokens

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9595//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9595//console

This message is automatically generated.

 [ UI ] Not able to open file from UI if file path contains %
 --

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name is like 1%2%3%4
 2. Browse the file using the NameNode UI
 It throws the following exception:
 Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS
 HBase writes its WAL file data into HDFS with % sequences in the file names
 eg: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
  
 the above file does not open in the UI.
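The underlying failure is double-decoding: a literal %2C in an HBase WAL file name is indistinguishable from an encoded comma once the UI decodes the path. A stand-alone JDK demonstration (not the actual WebHDFS code path):

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Demonstrates why a literal "%2C" in a file name breaks a UI that
// decodes the path: the decoder turns it into ",", producing a path
// that does not exist on HDFS. Encoding first round-trips safely.
public class PercentDemo {
    // One decode pass corrupts a name containing a literal percent escape.
    public static String decodeOnce(String s) {
        return URLDecoder.decode(s, StandardCharsets.UTF_8);
    }

    // Encoding before decoding preserves the original: "%" becomes "%25".
    public static String roundTrip(String s) {
        return decodeOnce(URLEncoder.encode(s, StandardCharsets.UTF_8));
    }
}
```

The fix direction in the attached patches is essentially to encode the raw file name before it is placed into a URL that will later be decoded.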





[jira] [Commented] (HDFS-5356) MiniDFSCluster shoud close all open FileSystems when shutdown()

2015-02-16 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323628#comment-14323628
 ] 

Rakesh R commented on HDFS-5356:


I can see these two test cases passing locally. Reattaching the previous 
patch to get a QA report again.

 MiniDFSCluster shoud close all open FileSystems when shutdown()
 ---

 Key: HDFS-5356
 URL: https://issues.apache.org/jira/browse/HDFS-5356
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.2.0
Reporter: haosdent
Assignee: Rakesh R
Priority: Critical
 Attachments: HDFS-5356-1.patch, HDFS-5356-2.patch, HDFS-5356-3.patch, 
 HDFS-5356.patch


 After adding some metrics functions to DFSClient, I found that some unit 
 tests related to metrics fail. Because MiniDFSCluster never closes the open 
 FileSystems, DFSClients stay alive after MiniDFSCluster.shutdown(). The 
 metrics of those DFSClients still exist in DefaultMetricsSystem, and this 
 makes other unit tests fail.
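The cleanup described amounts to having the cluster remember every resource it hands out and close them all on shutdown. A hedged sketch of that registry pattern in plain Java (hypothetical names, not the actual MiniDFSCluster API):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Registry pattern: track every resource opened on the cluster's behalf
// and close them all on shutdown, so no client (and none of its metrics)
// outlives the cluster.
public class ResourceRegistry {
    private final List<Closeable> open = new ArrayList<>();

    public <T extends Closeable> T register(T resource) {
        open.add(resource);
        return resource;
    }

    // Best-effort close of everything registered; returns how many closed.
    public int shutdown() {
        int closed = 0;
        for (Closeable c : open) {
            try {
                c.close();
                closed++;
            } catch (IOException e) {
                // keep closing the remaining resources
            }
        }
        open.clear();
        return closed;
    }
}
```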





[jira] [Updated] (HDFS-5356) MiniDFSCluster shoud close all open FileSystems when shutdown()

2015-02-16 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-5356:
---
Attachment: HDFS-5356-3.patch

 MiniDFSCluster shoud close all open FileSystems when shutdown()
 ---

 Key: HDFS-5356
 URL: https://issues.apache.org/jira/browse/HDFS-5356
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.2.0
Reporter: haosdent
Assignee: Rakesh R
Priority: Critical
 Attachments: HDFS-5356-1.patch, HDFS-5356-2.patch, HDFS-5356-3.patch, 
 HDFS-5356.patch


 After adding some metrics functions to DFSClient, I found that some unit 
 tests related to metrics fail. Because MiniDFSCluster never closes the open 
 FileSystems, DFSClients stay alive after MiniDFSCluster.shutdown(). The 
 metrics of those DFSClients still exist in DefaultMetricsSystem, and this 
 makes other unit tests fail.





[jira] [Updated] (HDFS-7780) Update use of Iterator to Iterable

2015-02-16 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HDFS-7780:
-
Attachment: HDFS-7780.003.patch

Remove unused import

 Update use of Iterator to Iterable
 --

 Key: HDFS-7780
 URL: https://issues.apache.org/jira/browse/HDFS-7780
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Attachments: HDFS-7780.001.patch, HDFS-7780.002.patch, 
 HDFS-7780.003.patch


 Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.
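For context on the change: returning Iterable instead of Iterator lets callers use the enhanced for loop. A generic adapter illustrating the pattern (an illustrative sketch, not the patch itself):

```java
import java.util.Iterator;
import java.util.function.Supplier;

// Wraps an Iterator-producing source as an Iterable so callers can use
// the enhanced for loop. Iterable implies a fresh Iterator per call to
// iterator(), so the supplier must create a new one each time.
public class IterableAdapter {
    public static <T> Iterable<T> asIterable(Supplier<Iterator<T>> source) {
        return source::get;
    }
}
```

Callers can then write `for (T item : asIterable(...))` instead of an explicit while-hasNext loop.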





[jira] [Updated] (HDFS-7780) Update use of Iterator to Iterable

2015-02-16 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HDFS-7780:
-
Status: Patch Available  (was: Open)

 Update use of Iterator to Iterable
 --

 Key: HDFS-7780
 URL: https://issues.apache.org/jira/browse/HDFS-7780
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Attachments: HDFS-7780.001.patch, HDFS-7780.002.patch, 
 HDFS-7780.003.patch


 Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.





[jira] [Commented] (HDFS-7324) haadmin command usage prints incorrect command name

2015-02-16 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323757#comment-14323757
 ] 

Brahma Reddy Battula commented on HDFS-7324:


Thanks a lot [~umamaheswararao]

 haadmin command usage prints incorrect command name
 ---

 Key: HDFS-7324
 URL: https://issues.apache.org/jira/browse/HDFS-7324
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, tools
Affects Versions: 2.5.1
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 2.7.0

 Attachments: HDFS-7324.patch


 Scenario:
 ===
 Try the help command for haadmin as follows.
 Here the usage is printed as DFSHAAdmin -ns, but ideally this is not 
 available, as the following command shows.
 [root@linux156 bin]#  *{color:red}./hdfs haadmin{color}* 
 No GC_PROFILE is given. Defaults to medium.
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId [--forceactive]]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
 Generic options supported are
 -conf configuration file specify an application configuration file
 -D property=valueuse value for given property
 -fs local|namenode:port  specify a namenode
 -jt local|jobtracker:portspecify a job tracker
 -files comma separated list of filesspecify comma separated files to be 
 copied to the map reduce cluster
 -libjars comma separated list of jarsspecify comma separated jar files 
 to include in the classpath.
 -archives comma separated list of archivesspecify comma separated 
 archives to be unarchived on the compute machines.
 The general command line syntax is
 bin/hadoop command [genericOptions] [commandOptions]
  *{color:blue}[root@linux156 bin]# ./hdfs DFSHAAdmin -ns 100{color}*  
 Error: Could not find or load main class DFSHAAdmin





[jira] [Updated] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-02-16 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7804:
---
Attachment: HDFS-7804.patch

 haadmin command usage #HDFSHighAvailabilityWithQJM.html
 ---

 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7804.patch


  *Currently it's given like following* 
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
  *Expected:* 
  *{color:green}hdfs hadmin{color}* 





[jira] [Updated] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-02-16 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7804:
---
Status: Patch Available  (was: Open)

 haadmin command usage #HDFSHighAvailabilityWithQJM.html
 ---

 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7804.patch


  *Currently it's given like following* 
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
  *Expected:* 
  *{color:green}hdfs hadmin{color}* 





[jira] [Commented] (HDFS-6662) [ UI ] Not able to open file from UI if file path contains %

2015-02-16 Thread Gerson Carlos (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14322849#comment-14322849
 ] 

Gerson Carlos commented on HDFS-6662:
-

Thanks for noticing it. I'll update the patch.

 [ UI ] Not able to open file from UI if file path contains %
 --

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name is like 1%2%3%4
 2. Browse the file using the NameNode UI
 It throws the following exception:
 Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS
 HBase writes its WAL file data into HDFS with % sequences in the file names
 eg: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
  
 the above file does not open in the UI.





[jira] [Commented] (HDFS-6662) [ UI ] Not able to open file from UI if file path contains %

2015-02-16 Thread Gerson Carlos (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14322846#comment-14322846
 ] 

Gerson Carlos commented on HDFS-6662:
-

Hi, yeah, I'll upload a new version of the patch with the changes soon.

 [ UI ] Not able to open file from UI if file path contains %
 --

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name is like 1%2%3%4
 2. Browse the file using the NameNode UI
 It throws the following exception:
 Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS
 HBase writes its WAL file data into HDFS with % sequences in the file names
 eg: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
  
 the above file does not open in the UI.





[jira] [Commented] (HDFS-7648) Verify the datanode directory layout

2015-02-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14322530#comment-14322530
 ] 

Hadoop QA commented on HDFS-7648:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699031/HDFS-7648-4.patch
  against trunk revision ab0b958.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9588//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9588//console

This message is automatically generated.

 Verify the datanode directory layout
 

 Key: HDFS-7648
 URL: https://issues.apache.org/jira/browse/HDFS-7648
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Rakesh R
 Attachments: HDFS-7648-3.patch, HDFS-7648-4.patch, HDFS-7648.patch, 
 HDFS-7648.patch


 HDFS-6482 changed datanode layout to use block ID to determine the directory 
 to store the block.  We should have some mechanism to verify it.  Either 
 DirectoryScanner or block report generation could do the check.
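For reference, the HDFS-6482 layout derives a two-level subdirectory from bits of the block ID, so a verifier could recompute the expected directory for each block file it scans and flag mismatches. This sketch mirrors what DatanodeUtil#idToBlockDir does in Hadoop 2.6 as far as I know; treat the exact shifts and masks as an assumption to check against your version:

```java
// Maps a block ID to its storage subdirectory: bits 16-20 pick the
// first level and bits 8-12 the second, giving a 32x32 directory tree.
// A verification pass (e.g. in DirectoryScanner) could recompute this
// for every block file found on disk and report files in wrong subdirs.
public class BlockDirLayout {
    static final int MASK = 0x1F; // 5 bits -> 32 subdirs per level

    public static String idToBlockDir(long blockId) {
        int d1 = (int) ((blockId >> 16) & MASK);
        int d2 = (int) ((blockId >> 8) & MASK);
        return "subdir" + d1 + "/subdir" + d2;
    }
}
```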





[jira] [Assigned] (HDFS-7397) The conf key dfs.client.read.shortcircuit.streams.cache.size is misleading

2015-02-16 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HDFS-7397:
--

Assignee: Brahma Reddy Battula

 The conf key dfs.client.read.shortcircuit.streams.cache.size is misleading
 

 Key: HDFS-7397
 URL: https://issues.apache.org/jira/browse/HDFS-7397
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Brahma Reddy Battula
Priority: Minor

 For dfs.client.read.shortcircuit.streams.cache.size, is it in MB or KB?  
 Interestingly, it is neither in MB nor KB.  It is the number of shortcircuit 
 streams.





[jira] [Commented] (HDFS-7648) Verify the datanode directory layout

2015-02-16 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14322540#comment-14322540
 ] 

Rakesh R commented on HDFS-7648:


[~cmccabe] kindly review the patch. Thanks!
I can see that an existing test case covers the newly added log statement, so 
I have not included any specific tests.

 Verify the datanode directory layout
 

 Key: HDFS-7648
 URL: https://issues.apache.org/jira/browse/HDFS-7648
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Rakesh R
 Attachments: HDFS-7648-3.patch, HDFS-7648-4.patch, HDFS-7648.patch, 
 HDFS-7648.patch


 HDFS-6482 changed datanode layout to use block ID to determine the directory 
 to store the block.  We should have some mechanism to verify it.  Either 
 DirectoryScanner or block report generation could do the check.





[jira] [Commented] (HDFS-7397) The conf key dfs.client.read.shortcircuit.streams.cache.size is misleading

2015-02-16 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14322548#comment-14322548
 ] 

Brahma Reddy Battula commented on HDFS-7397:


Hi [~szetszwo],

Can we change it to something like 
dfs.client.read.shortcircuit.streams.cache.num/count?

I feel the following can also be checked; please correct me if I am wrong.
{code}
public static final String  DFS_CLIENT_SOCKET_CACHE_CAPACITY_KEY = 
dfs.client.socketcache.capacity;
public static final int DFS_CLIENT_SOCKET_CACHE_CAPACITY_DEFAULT = 16;
{code}
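To make the count-vs-bytes semantics concrete, here is a minimal LRU cache whose capacity is a number of entries, which is the meaning "size" has in this key. A plain-JDK sketch, unrelated to the real ShortCircuitCache implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A cache whose "size" is a count of entries, not bytes -- the same
// semantics as dfs.client.read.shortcircuit.streams.cache.size. When a
// new entry would exceed the capacity, the least recently used entry
// is evicted.
public class CountCapacityCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public CountCapacityCache(int capacity) {
        super(16, 0.75f, true); // access-order gives LRU iteration order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict once the entry count is exceeded
    }
}
```

Naming the key with "num" or "count" instead of "size" would make this unambiguous.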

 The conf key dfs.client.read.shortcircuit.streams.cache.size is misleading
 

 Key: HDFS-7397
 URL: https://issues.apache.org/jira/browse/HDFS-7397
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Brahma Reddy Battula
Priority: Minor

 For dfs.client.read.shortcircuit.streams.cache.size, is it in MB or KB?  
 Interestingly, it is neither in MB nor KB.  It is the number of shortcircuit 
 streams.





[jira] [Commented] (HDFS-6962) ACLs inheritance conflict with umaskmode

2015-02-16 Thread Srikanth Upputuri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14322555#comment-14322555
 ] 

Srikanth Upputuri commented on HDFS-6962:
-

[~cnauroth], after reading your comment above I have studied the relevant code 
and this is what I think. 

The umask should be loaded and applied on the server, depending on whether the 
parent directory has default acls or not. Only if default acls do not exist, 
umask will be applied to the mode. For mode, client will either pass the source 
permissions(cp, put, copyFromLocal) or the default permissions if no source 
permissions exist(create, mkdir etc). Currently the client code wrongly applies 
the mask to the permissions before making RPC calls. This happens at several 
places and this needs to be changed. 

For the copyFromLocal command, I have compared the behavior with 'cp' on Linux 
local file system. The resultant permissions of the destination file are 
determined by the parent directory's default permissions and the source file's 
permissions (mode). The umask is used only when the parent directory doesn't 
have default permissions. This is just like create api, except that in case of 
'create', the mode takes default value (0666).
The second RPC to 'setPermission' is only used when 'preserve attributes' 
option -p is used and permissions/ACLs are expected to be retained and in this 
case umask is not required. So, the only change 'copyFromLocal' may require is 
pass the the source file's permissions as mode, without masking.

Compatibility: Older clients applying the mask before passing the mode to 
server will retain their existing behavior if the parent directory has default 
permissions. In case the parent directory does not have default permissions, 
the mask gets applied one more time on the server without causing any change to 
the permissions. So, effectively the clients see the same behavior as existing.

I am attaching a prototype patch, please take a look. I will add tests later 
once the approach is validated.
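The proposed server-side rule reduces to a small piece of bit arithmetic; this hypothetical helper (not the FSDirectory code) captures it:

```java
// Sketch of the server-side rule described above: the client sends the
// raw requested mode; the server subtracts the umask only when the
// parent directory has no default ACL. When a default ACL exists, its
// entries (not the umask) govern the child's permissions.
public class UmaskRule {
    public static int effectiveMode(int requestedMode, int umask,
                                    boolean parentHasDefaultAcl) {
        if (parentHasDefaultAcl) {
            return requestedMode; // default ACL entries take over
        }
        return requestedMode & ~umask; // classic POSIX masking
    }
}
```

With umask 027, a requested mode of 0777 becomes 0750 under plain masking, but stays 0777 when the parent carries default ACLs, matching the Linux local-filesystem behavior described in the comment.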

 ACLs inheritance conflict with umaskmode
 

 Key: HDFS-6962
 URL: https://issues.apache.org/jira/browse/HDFS-6962
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.1
 Environment: CentOS release 6.5 (Final)
Reporter: LINTE
Assignee: Srikanth Upputuri
  Labels: hadoop, security

 In hdfs-site.xml 
 property
 namedfs.umaskmode/name
 value027/value
 /property
 1/ Create a directory as superuser
 bash# hdfs dfs -mkdir  /tmp/ACLS
 2/ set default ACLs on this directory rwx access for group readwrite and user 
 toto
 bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
 bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
 3/ check ACLs /tmp/ACLS/
 bash# hdfs dfs -getfacl /tmp/ACLS/
 # file: /tmp/ACLS
 # owner: hdfs
 # group: hadoop
 user::rwx
 group::r-x
 other::---
 default:user::rwx
 default:user:toto:rwx
 default:group::r-x
 default:group:readwrite:rwx
 default:mask::rwx
 default:other::---
 user::rwx | group::r-x | other::--- matches the umaskmode defined in 
 hdfs-site.xml, everything ok!
 default:group:readwrite:rwx allows the readwrite group rwx access for 
 inheritance.
 default:user:toto:rwx allows the toto user rwx access for inheritance.
 default:mask::rwx the inheritance mask is rwx, so no masking
 4/ Create a subdir to test inheritance of ACL
 bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
 5/ check ACLs /tmp/ACLS/hdfs
 bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
 # file: /tmp/ACLS/hdfs
 # owner: hdfs
 # group: hadoop
 user::rwx
 user:toto:rwx   #effective:r-x
 group::r-x
 group:readwrite:rwx #effective:r-x
 mask::r-x
 other::---
 default:user::rwx
 default:user:toto:rwx
 default:group::r-x
 default:group:readwrite:rwx
 default:mask::rwx
 default:other::---
 Here we can see that the readwrite group has an rwx ACL but only r-x is 
 effective, because the mask is r-x (mask::r-x) even though the default mask 
 for inheritance is set to default:mask::rwx on /tmp/ACLS/
 6/ Modify hdfs-site.xml and restart the namenode
 property
 namedfs.umaskmode/name
 value010/value
 /property
 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
 bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
 8/ Check ACL on /tmp/ACLS/hdfs2
 bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
 # file: /tmp/ACLS/hdfs2
 # owner: hdfs
 # group: hadoop
 user::rwx
 user:toto:rwx   #effective:rw-
 group::r-x  #effective:r--
 group:readwrite:rwx #effective:rw-
 mask::rw-
 other::---
 default:user::rwx
 default:user:toto:rwx
 default:group::r-x
 default:group:readwrite:rwx
 default:mask::rwx
 default:other::---
 So HDFS masks the ACL values (user, group and other -- except the POSIX 
 owner -- ) with the group mask of the dfs.umaskmode property when creating 

[jira] [Updated] (HDFS-6962) ACLs inheritance conflict with umaskmode

2015-02-16 Thread Srikanth Upputuri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Upputuri updated HDFS-6962:

Attachment: HDFS-6962.1.patch

 ACLs inheritance conflict with umaskmode
 

 Key: HDFS-6962
 URL: https://issues.apache.org/jira/browse/HDFS-6962
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.1
 Environment: CentOS release 6.5 (Final)
Reporter: LINTE
Assignee: Srikanth Upputuri
  Labels: hadoop, security
 Attachments: HDFS-6962.1.patch


 In hdfs-site.xml 
 property
 namedfs.umaskmode/name
 value027/value
 /property
 1/ Create a directory as superuser
 bash# hdfs dfs -mkdir  /tmp/ACLS
 2/ set default ACLs on this directory rwx access for group readwrite and user 
 toto
 bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
 bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
 3/ check ACLs /tmp/ACLS/
 bash# hdfs dfs -getfacl /tmp/ACLS/
 # file: /tmp/ACLS
 # owner: hdfs
 # group: hadoop
 user::rwx
 group::r-x
 other::---
 default:user::rwx
 default:user:toto:rwx
 default:group::r-x
 default:group:readwrite:rwx
 default:mask::rwx
 default:other::---
 user::rwx | group::r-x | other::--- matches the umaskmode defined in 
 hdfs-site.xml, everything ok!
 default:group:readwrite:rwx allows the readwrite group rwx access for 
 inheritance.
 default:user:toto:rwx allows the toto user rwx access for inheritance.
 default:mask::rwx the inheritance mask is rwx, so no masking
 4/ Create a subdir to test inheritance of ACL
 bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
 5/ check ACLs /tmp/ACLS/hdfs
 bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
 # file: /tmp/ACLS/hdfs
 # owner: hdfs
 # group: hadoop
 user::rwx
 user:toto:rwx   #effective:r-x
 group::r-x
 group:readwrite:rwx #effective:r-x
 mask::r-x
 other::---
 default:user::rwx
 default:user:toto:rwx
 default:group::r-x
 default:group:readwrite:rwx
 default:mask::rwx
 default:other::---
 Here we can see that the readwrite group has an rwx ACL but only r-x is 
 effective, because the mask is r-x (mask::r-x) even though the default mask 
 for inheritance is set to default:mask::rwx on /tmp/ACLS/
 6/ Modify hdfs-site.xml and restart the namenode
 property
 namedfs.umaskmode/name
 value010/value
 /property
 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
 bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
 8/ Check ACL on /tmp/ACLS/hdfs2
 bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
 # file: /tmp/ACLS/hdfs2
 # owner: hdfs
 # group: hadoop
 user::rwx
 user:toto:rwx   #effective:rw-
 group::r-x  #effective:r--
 group:readwrite:rwx #effective:rw-
 mask::rw-
 other::---
 default:user::rwx
 default:user:toto:rwx
 default:group::r-x
 default:group:readwrite:rwx
 default:mask::rwx
 default:other::---
 So HDFS masks the ACL values (user, group and other -- except the POSIX 
 owner -- ) with the group mask of the dfs.umaskmode property when creating a 
 directory with inherited ACLs.





[jira] [Created] (HDFS-7802) Reach xceiver limit once the watcherThread die

2015-02-16 Thread Liang Xie (JIRA)
Liang Xie created HDFS-7802:
---

 Summary: Reach xceiver limit once the watcherThread die
 Key: HDFS-7802
 URL: https://issues.apache.org/jira/browse/HDFS-7802
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Critical


Our production cluster hit the xceiver limit even with HADOOP-10404 and 
HADOOP-11333; I found it was caused by the DomainSocketWatcher.watcherThread 
dying. Attached is a possible fix; please review. Thanks.
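The failure mode reported here, a background watcher thread dying and leaking xceiver slots, can be illustrated with a hedged, self-contained sketch (not the actual DomainSocketWatcher code): catch Throwable per iteration so one unexpected error does not silently kill the whole loop.

```java
public class Main {
    static int completedIterations = 0;

    // Illustrative guard: a failing iteration is logged and skipped,
    // so the watcher keeps servicing later work instead of dying.
    static void guardedWatchLoop(int iterations) {
        for (int i = 0; i < iterations; i++) {
            try {
                watchOnce(i);
            } catch (Throwable t) {
                System.err.println("watcher iteration " + i + " failed: " + t);
            }
            completedIterations++;
        }
    }

    // Stand-in for the real watch work; fails once to simulate the bug.
    static void watchOnce(int i) {
        if (i == 1) {
            throw new IllegalStateException("injected failure");
        }
    }

    public static void main(String[] args) {
        guardedWatchLoop(3);
        System.out.println("completed " + completedIterations + " iterations");
    }
}
```

Without the try/catch, the injected failure would terminate the loop after the first iteration, which is the "thread gone" symptom described above.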





[jira] [Resolved] (HDFS-7802) Reach xceiver limit once the watcherThread die

2015-02-16 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie resolved HDFS-7802.
-
Resolution: Duplicate

Sorry, closing this JIRA; it seems it should be in the HADOOP category, not HDFS...

 Reach xceiver limit once the watcherThread die
 --

 Key: HDFS-7802
 URL: https://issues.apache.org/jira/browse/HDFS-7802
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Critical

 Our production cluster hit the xceiver limit even with HADOOP-10404 and 
 HADOOP-11333; I found it was caused by the DomainSocketWatcher.watcherThread 
 dying. Attached is a possible fix; please review. Thanks.





[jira] [Updated] (HDFS-7785) Add detailed message for HttpPutFailedException

2015-02-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-7785:
-
Status: Open  (was: Patch Available)

 Add detailed message for HttpPutFailedException
 ---

 Key: HDFS-7785
 URL: https://issues.apache.org/jira/browse/HDFS-7785
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Chengbing Liu
Assignee: Chengbing Liu
 Attachments: HDFS-7785.01.patch


 One of our namenode logs shows the following exception message.
 ...
 Caused by: 
 org.apache.hadoop.hdfs.server.namenode.TransferFsImage$HttpPutFailedException:
  org.apache.hadoop.security.authentication.util.SignerException: Invalid 
 signature
 at 
 org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImage(TransferFsImage.java:294)
 ...
 {{HttpPutFailedException}} should have its detailed information, such as 
 the status code and URL, included in the log to help debugging.
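A hedged sketch of what the patch asks for (class and field names here are illustrative, not Hadoop's actual code): carry the status code and URL inside the exception message, so any log of the exception is actionable on its own.

```java
public class Main {
    static class HttpPutFailedException extends RuntimeException {
        final int statusCode;
        final String url;

        HttpPutFailedException(String reason, int statusCode, String url) {
            // Bake the details into the message so whatever logs the
            // exception shows what failed and against which endpoint.
            super(reason + " (status=" + statusCode + ", url=" + url + ")");
            this.statusCode = statusCode;
            this.url = url;
        }
    }

    public static void main(String[] args) {
        HttpPutFailedException e = new HttpPutFailedException(
                "Invalid signature", 403, "http://namenode:50070/imagetransfer");
        System.out.println(e.getMessage());
    }
}
```

The host and servlet path in the example are hypothetical; the point is that the message now explains itself without needing the caller's context.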





[jira] [Updated] (HDFS-7785) Add detailed message for HttpPutFailedException

2015-02-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-7785:
-
Target Version/s: 2.7.0
  Status: Patch Available  (was: Open)

 Add detailed message for HttpPutFailedException
 ---

 Key: HDFS-7785
 URL: https://issues.apache.org/jira/browse/HDFS-7785
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Chengbing Liu
Assignee: Chengbing Liu
 Attachments: HDFS-7785.01.patch


 One of our namenode logs shows the following exception message.
 ...
 Caused by: 
 org.apache.hadoop.hdfs.server.namenode.TransferFsImage$HttpPutFailedException:
  org.apache.hadoop.security.authentication.util.SignerException: Invalid 
 signature
 at 
 org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImage(TransferFsImage.java:294)
 ...
 {{HttpPutFailedException}} should have its detailed information, such as 
 the status code and URL, included in the log to help debugging.





[jira] [Updated] (HDFS-5356) MiniDFSCluster shoud close all open FileSystems when shutdown()

2015-02-16 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-5356:
---
Attachment: HDFS-5356-2.patch

 MiniDFSCluster shoud close all open FileSystems when shutdown()
 ---

 Key: HDFS-5356
 URL: https://issues.apache.org/jira/browse/HDFS-5356
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.2.0
Reporter: haosdent
Assignee: Rakesh R
Priority: Critical
 Attachments: HDFS-5356-1.patch, HDFS-5356-2.patch, HDFS-5356.patch


 After adding some metrics functions to DFSClient, I found that some unit tests 
 related to metrics fail. Because MiniDFSCluster never closes open 
 FileSystems, DFSClients stay alive after MiniDFSCluster shutdown(). The 
 metrics of those DFSClients still exist in DefaultMetricsSystem, and this 
 makes other unit tests fail.
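The direction of the fix can be sketched, under the assumption that the cluster simply tracks the clients it hands out and closes them on shutdown (plain java.io.Closeable here stands in for Hadoop's FileSystem; this is not the actual MiniDFSCluster code):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class Main {
    static int closedCount = 0;

    // Illustrative mini-cluster: remembers every client it opened and
    // closes them all in shutdown(), so nothing that belongs to a
    // client (e.g. its metrics) outlives the cluster.
    static class MiniCluster {
        private final List<Closeable> openClients = new ArrayList<>();

        Closeable newClient() {
            Closeable client = () -> closedCount++;  // Closeable is functional
            openClients.add(client);
            return client;
        }

        void shutdown() throws IOException {
            for (Closeable c : openClients) {
                c.close();
            }
            openClients.clear();
        }
    }

    public static void main(String[] args) throws IOException {
        MiniCluster cluster = new MiniCluster();
        cluster.newClient();
        cluster.newClient();
        cluster.shutdown();
        System.out.println("closed " + closedCount + " clients");
    }
}
```

In real Hadoop tests, the static FileSystem.closeAll() serves a similar purpose for the FileSystem cache.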


