[jira] [Commented] (HDFS-7561) TestFetchImage should write fetched-image-dir under target.

2014-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255591#comment-14255591
 ] 

Hadoop QA commented on HDFS-7561:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12688615/HDFS-7561-001.txt
  against trunk revision ecf1469.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestSymlinkHdfsFileContext

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9106//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9106//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9106//console

This message is automatically generated.

 TestFetchImage should write fetched-image-dir under target.
 ---

 Key: HDFS-7561
 URL: https://issues.apache.org/jira/browse/HDFS-7561
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
 Attachments: HDFS-7561-001.txt


 {{TestFetchImage}} creates the directory {{fetched-image-dir}} under 
 hadoop-hdfs, which is then never cleaned up. The problem is that it uses the 
 build.test.dir property, which seems to be invalid. It should probably use 
 {{MiniDFSCluster.getBaseDirectory()}} instead.
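 A minimal sketch of that suggested direction (illustrative only, not the 
 attached patch); it assumes java.io.File and 
 org.apache.hadoop.hdfs.MiniDFSCluster:
 {code}
 // getBaseDirectory() resolves under the build's test data directory,
 // which the Maven build places under target/, so the test output is
 // removed by "mvn clean".
 File fetchedImageDir =
     new File(MiniDFSCluster.getBaseDirectory(), "fetched-image-dir");
 fetchedImageDir.mkdirs();
 {code}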



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7562) Fix Atoi.cc link error

2014-12-22 Thread Binglin Chang (JIRA)
Binglin Chang created HDFS-7562:
---

 Summary: Fix Atoi.cc link error
 Key: HDFS-7562
 URL: https://issues.apache.org/jira/browse/HDFS-7562
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Trivial


When compiling, the following error occurs:
{noformat}
Undefined symbols for architecture x86_64:
  "hdfs::internal::StrToInt32(char const*, int*)", referenced from:
      hdfs::Config::getInt32(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int*) const in Config.cc.o
      hdfs::Config::getInt32(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int*) const in Config.cc.o
  "hdfs::internal::StrToInt64(char const*, long long*)", referenced from:
      hdfs::Config::getInt64(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, long long*) const in Config.cc.o
      hdfs::Config::getInt64(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, long long, long long*) const in Config.cc.o
  "hdfs::internal::StrToDouble(char const*, double*)", referenced from:
      hdfs::Config::getDouble(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, double*) const in Config.cc.o
      hdfs::Config::getDouble(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, double, double*) const in Config.cc.o
  "hdfs::internal::StrToBool(char const*, bool*)", referenced from:
      hdfs::Config::getBool(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool*) const in Config.cc.o
      hdfs::Config::getBool(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, bool*) const in Config.cc.o
      hdfs::internal::XmlData::handleData(void*, char const*, int) in XmlConfigParser.cc.o
ld: symbol(s) not found for architecture x86_64
{noformat}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7562) Fix Atoi.cc link error

2014-12-22 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-7562:

Status: Patch Available  (was: Open)

 Fix Atoi.cc link error
 --

 Key: HDFS-7562
 URL: https://issues.apache.org/jira/browse/HDFS-7562
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Trivial

 When compiling, the following error occurs:
 {noformat}
 Undefined symbols for architecture x86_64:
   "hdfs::internal::StrToInt32(char const*, int*)", referenced from:
       hdfs::Config::getInt32(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int*) const in Config.cc.o
       hdfs::Config::getInt32(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int*) const in Config.cc.o
   "hdfs::internal::StrToInt64(char const*, long long*)", referenced from:
       hdfs::Config::getInt64(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, long long*) const in Config.cc.o
       hdfs::Config::getInt64(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, long long, long long*) const in Config.cc.o
   "hdfs::internal::StrToDouble(char const*, double*)", referenced from:
       hdfs::Config::getDouble(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, double*) const in Config.cc.o
       hdfs::Config::getDouble(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, double, double*) const in Config.cc.o
   "hdfs::internal::StrToBool(char const*, bool*)", referenced from:
       hdfs::Config::getBool(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool*) const in Config.cc.o
       hdfs::Config::getBool(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, bool*) const in Config.cc.o
       hdfs::internal::XmlData::handleData(void*, char const*, int) in XmlConfigParser.cc.o
 ld: symbol(s) not found for architecture x86_64
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7562) Fix Atoi.cc link error

2014-12-22 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-7562:

Attachment: HDFS-7562-pnatve.001.patch

 Fix Atoi.cc link error
 --

 Key: HDFS-7562
 URL: https://issues.apache.org/jira/browse/HDFS-7562
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Trivial
 Attachments: HDFS-7562-pnatve.001.patch


 When compiling, the following error occurs:
 {noformat}
 Undefined symbols for architecture x86_64:
   "hdfs::internal::StrToInt32(char const*, int*)", referenced from:
       hdfs::Config::getInt32(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int*) const in Config.cc.o
       hdfs::Config::getInt32(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int*) const in Config.cc.o
   "hdfs::internal::StrToInt64(char const*, long long*)", referenced from:
       hdfs::Config::getInt64(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, long long*) const in Config.cc.o
       hdfs::Config::getInt64(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, long long, long long*) const in Config.cc.o
   "hdfs::internal::StrToDouble(char const*, double*)", referenced from:
       hdfs::Config::getDouble(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, double*) const in Config.cc.o
       hdfs::Config::getDouble(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, double, double*) const in Config.cc.o
   "hdfs::internal::StrToBool(char const*, bool*)", referenced from:
       hdfs::Config::getBool(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool*) const in Config.cc.o
       hdfs::Config::getBool(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, bool*) const in Config.cc.o
       hdfs::internal::XmlData::handleData(void*, char const*, int) in XmlConfigParser.cc.o
 ld: symbol(s) not found for architecture x86_64
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7560) ACLs removed by removeDefaultAcl() will be back after NameNode restart/failover

2014-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255610#comment-14255610
 ] 

Hadoop QA commented on HDFS-7560:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12688620/HDFS-7560-002.patch
  against trunk revision ecf1469.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestSymlinkHdfsFileContext

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9107//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9107//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9107//console

This message is automatically generated.

 ACLs removed by removeDefaultAcl() will be back after NameNode 
 restart/failover
 ---

 Key: HDFS-7560
 URL: https://issues.apache.org/jira/browse/HDFS-7560
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.1
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Critical
 Attachments: HDFS-7560-001.patch, HDFS-7560-002.patch


 Default ACLs removed using {{removeDefaultAcl()}} will come back after 
 Namenode restart/switch.
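 A hedged repro sketch of the reported sequence (the cluster variable and 
 names are assumptions, not the attached patch); classes come from 
 org.apache.hadoop.fs and org.apache.hadoop.fs.permission:
 {code}
 // Assumes a running MiniDFSCluster in a variable named "cluster".
 FileSystem fs = cluster.getFileSystem();
 Path dir = new Path("/aclTest");
 fs.mkdirs(dir);
 fs.modifyAclEntries(dir, Collections.singletonList(
     new AclEntry.Builder().setScope(AclEntryScope.DEFAULT)
         .setType(AclEntryType.USER).setName("foo")
         .setPermission(FsAction.ALL).build()));
 fs.removeDefaultAcl(dir);          // default entries are gone at this point
 cluster.restartNameNode();         // per the report, they reappear after this
 System.out.println(fs.getAclStatus(dir));
 {code}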



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7562) Fix Atoi.cc link error

2014-12-22 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-7562:

Resolution: Duplicate
  Assignee: (was: Binglin Chang)
Status: Resolved  (was: Patch Available)

 Fix Atoi.cc link error
 --

 Key: HDFS-7562
 URL: https://issues.apache.org/jira/browse/HDFS-7562
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Binglin Chang
Priority: Trivial
 Attachments: HDFS-7562-pnatve.001.patch


 When compiling, the following error occurs:
 {noformat}
 Undefined symbols for architecture x86_64:
   "hdfs::internal::StrToInt32(char const*, int*)", referenced from:
       hdfs::Config::getInt32(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int*) const in Config.cc.o
       hdfs::Config::getInt32(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int*) const in Config.cc.o
   "hdfs::internal::StrToInt64(char const*, long long*)", referenced from:
       hdfs::Config::getInt64(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, long long*) const in Config.cc.o
       hdfs::Config::getInt64(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, long long, long long*) const in Config.cc.o
   "hdfs::internal::StrToDouble(char const*, double*)", referenced from:
       hdfs::Config::getDouble(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, double*) const in Config.cc.o
       hdfs::Config::getDouble(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, double, double*) const in Config.cc.o
   "hdfs::internal::StrToBool(char const*, bool*)", referenced from:
       hdfs::Config::getBool(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool*) const in Config.cc.o
       hdfs::Config::getBool(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, bool*) const in Config.cc.o
       hdfs::internal::XmlData::handleData(void*, char const*, int) in XmlConfigParser.cc.o
 ld: symbol(s) not found for architecture x86_64
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7555) Remove the support of unmanaged connectors in HttpServer2

2014-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255625#comment-14255625
 ] 

Hudson commented on HDFS-7555:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #49 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/49/])
HDFS-7555. Remove the support of unmanaged connectors in HttpServer2. 
Contributed by Haohui Mai. (wheat9: rev 
2860eeb14a958a8861b9ad3d6bd685df48da8cd3)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java


 Remove the support of unmanaged connectors in HttpServer2
 -

 Key: HDFS-7555
 URL: https://issues.apache.org/jira/browse/HDFS-7555
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.7.0

 Attachments: HDFS-7555.001.patch, HDFS-7555.002.patch, 
 HDFS-7555.003.patch


 After HDFS-7279 there is no need to support unmanaged connectors in 
 HttpServer2. This jira proposes to remove the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7557) Fix spacing for a few keys in DFSConfigKeys.java

2014-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255626#comment-14255626
 ] 

Hudson commented on HDFS-7557:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #49 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/49/])
HDFS-7557. Fix spacing for a few keys in DFSConfigKeys.java (Colin P. McCabe) 
(yliu: rev 7bc0a6d5c2358616da41971f6d8daab17a958f27)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Fix spacing for a few keys in DFSConfigKeys.java
 

 Key: HDFS-7557
 URL: https://issues.apache.org/jira/browse/HDFS-7557
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7557.001.patch


 A few configuration keys in DFSConfigKeys.java are spaced with 3 spaces 
 rather than 2; let's fix this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7557) Fix spacing for a few keys in DFSConfigKeys.java

2014-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255631#comment-14255631
 ] 

Hudson commented on HDFS-7557:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #783 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/783/])
HDFS-7557. Fix spacing for a few keys in DFSConfigKeys.java (Colin P. McCabe) 
(yliu: rev 7bc0a6d5c2358616da41971f6d8daab17a958f27)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Fix spacing for a few keys in DFSConfigKeys.java
 

 Key: HDFS-7557
 URL: https://issues.apache.org/jira/browse/HDFS-7557
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7557.001.patch


 A few configuration keys in DFSConfigKeys.java are spaced with 3 spaces 
 rather than 2; let's fix this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7555) Remove the support of unmanaged connectors in HttpServer2

2014-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255630#comment-14255630
 ] 

Hudson commented on HDFS-7555:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #783 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/783/])
HDFS-7555. Remove the support of unmanaged connectors in HttpServer2. 
Contributed by Haohui Mai. (wheat9: rev 
2860eeb14a958a8861b9ad3d6bd685df48da8cd3)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Remove the support of unmanaged connectors in HttpServer2
 -

 Key: HDFS-7555
 URL: https://issues.apache.org/jira/browse/HDFS-7555
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.7.0

 Attachments: HDFS-7555.001.patch, HDFS-7555.002.patch, 
 HDFS-7555.003.patch


 After HDFS-7279 there is no need to support unmanaged connectors in 
 HttpServer2. This jira proposes to remove the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6133) Make Balancer support exclude specified path

2014-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255641#comment-14255641
 ] 

Hadoop QA commented on HDFS-6133:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12688622/HDFS-6133-6.patch
  against trunk revision ecf1469.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestSymlinkHdfsFileContext

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9108//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9108//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9108//console

This message is automatically generated.

 Make Balancer support exclude specified path
 

 Key: HDFS-6133
 URL: https://issues.apache.org/jira/browse/HDFS-6133
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover, namenode
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Attachments: HDFS-6133-1.patch, HDFS-6133-2.patch, HDFS-6133-3.patch, 
 HDFS-6133-4.patch, HDFS-6133-5.patch, HDFS-6133-6.patch, HDFS-6133.patch


 Currently, running Balancer destroys the Regionserver's data locality.
 If getBlocks could exclude blocks belonging to files with a specific path 
 prefix, like /hbase, then we could run Balancer without destroying the 
 Regionserver's data locality.
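 An illustrative sketch of such a prefix filter (the method name and wiring 
 are assumptions, not the actual Balancer code):
 {code}
 // Returns true if blocks of this file should be skipped by getBlocks().
 static boolean excludedByPrefix(String filePath, List<String> prefixes) {
   for (String prefix : prefixes) {
     if (filePath.startsWith(prefix)) {
       return true;   // e.g. /hbase/... keeps its Regionserver locality
     }
   }
   return false;
 }
 {code}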



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7562) Fix Atoi.cc link error

2014-12-22 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255651#comment-14255651
 ] 

Binglin Chang commented on HDFS-7562:
-

HDFS-7018 already includes this fix; closing as duplicate.

 Fix Atoi.cc link error
 --

 Key: HDFS-7562
 URL: https://issues.apache.org/jira/browse/HDFS-7562
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Binglin Chang
Priority: Trivial
 Attachments: HDFS-7562-pnatve.001.patch


 When compiling, the following error occurs:
 {noformat}
 Undefined symbols for architecture x86_64:
   "hdfs::internal::StrToInt32(char const*, int*)", referenced from:
       hdfs::Config::getInt32(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int*) const in Config.cc.o
       hdfs::Config::getInt32(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int*) const in Config.cc.o
   "hdfs::internal::StrToInt64(char const*, long long*)", referenced from:
       hdfs::Config::getInt64(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, long long*) const in Config.cc.o
       hdfs::Config::getInt64(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, long long, long long*) const in Config.cc.o
   "hdfs::internal::StrToDouble(char const*, double*)", referenced from:
       hdfs::Config::getDouble(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, double*) const in Config.cc.o
       hdfs::Config::getDouble(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, double, double*) const in Config.cc.o
   "hdfs::internal::StrToBool(char const*, bool*)", referenced from:
       hdfs::Config::getBool(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool*) const in Config.cc.o
       hdfs::Config::getBool(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, bool*) const in Config.cc.o
       hdfs::internal::XmlData::handleData(void*, char const*, int) in XmlConfigParser.cc.o
 ld: symbol(s) not found for architecture x86_64
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6994) libhdfs3 - A native C/C++ HDFS client

2014-12-22 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255683#comment-14255683
 ] 

Binglin Chang commented on HDFS-6994:
-

Hi [~cmccabe] and [~wangzw],
I got some time to work on the current libhdfs3 code. It looks like all the 
code is under the hdfs namespace (including the code in common/network/rpc), 
and that code would be useful for a native YARN client too (which is in the 
scope of HADOOP-10388). It would be better to extract a common module that 
both hdfs and yarn can depend on, right?



 libhdfs3 - A native C/C++ HDFS client
 -

 Key: HDFS-6994
 URL: https://issues.apache.org/jira/browse/HDFS-6994
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client
Reporter: Zhanwei Wang
Assignee: Zhanwei Wang
 Attachments: HDFS-6994-rpc-8.patch, HDFS-6994.patch


 Hi All,
 I just got permission to open source libhdfs3, which is a native C/C++ HDFS 
 client based on the Hadoop RPC protocol and the HDFS Data Transfer Protocol.
 libhdfs3 provides the libhdfs-style C interface as well as a C++ interface. 
 It supports both Hadoop RPC versions 8 and 9, Namenode HA, and Kerberos 
 authentication.
 libhdfs3 is currently used by Pivotal's HAWQ.
 I'd like to integrate libhdfs3 into the HDFS source code to benefit others.
 You can find the libhdfs3 code on GitHub:
 https://github.com/PivotalRD/libhdfs3
 http://pivotalrd.github.io/libhdfs3/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7557) Fix spacing for a few keys in DFSConfigKeys.java

2014-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255760#comment-14255760
 ] 

Hudson commented on HDFS-7557:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1981 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1981/])
HDFS-7557. Fix spacing for a few keys in DFSConfigKeys.java (Colin P. McCabe) 
(yliu: rev 7bc0a6d5c2358616da41971f6d8daab17a958f27)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Fix spacing for a few keys in DFSConfigKeys.java
 

 Key: HDFS-7557
 URL: https://issues.apache.org/jira/browse/HDFS-7557
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7557.001.patch


 A few configuration keys in DFSConfigKeys.java are spaced with 3 spaces 
 rather than 2; let's fix this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7555) Remove the support of unmanaged connectors in HttpServer2

2014-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255759#comment-14255759
 ] 

Hudson commented on HDFS-7555:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1981 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1981/])
HDFS-7555. Remove the support of unmanaged connectors in HttpServer2. 
Contributed by Haohui Mai. (wheat9: rev 
2860eeb14a958a8861b9ad3d6bd685df48da8cd3)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Remove the support of unmanaged connectors in HttpServer2
 -

 Key: HDFS-7555
 URL: https://issues.apache.org/jira/browse/HDFS-7555
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.7.0

 Attachments: HDFS-7555.001.patch, HDFS-7555.002.patch, 
 HDFS-7555.003.patch


 After HDFS-7279 there is no need to support unmanaged connectors in 
 HttpServer2. This jira proposes to remove the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7557) Fix spacing for a few keys in DFSConfigKeys.java

2014-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255767#comment-14255767
 ] 

Hudson commented on HDFS-7557:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #46 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/46/])
HDFS-7557. Fix spacing for a few keys in DFSConfigKeys.java (Colin P. McCabe) 
(yliu: rev 7bc0a6d5c2358616da41971f6d8daab17a958f27)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java


 Fix spacing for a few keys in DFSConfigKeys.java
 

 Key: HDFS-7557
 URL: https://issues.apache.org/jira/browse/HDFS-7557
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7557.001.patch


 A few configuration keys in DFSConfigKeys.java are spaced with 3 spaces 
 rather than 2; let's fix this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7555) Remove the support of unmanaged connectors in HttpServer2

2014-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255766#comment-14255766
 ] 

Hudson commented on HDFS-7555:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #46 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/46/])
HDFS-7555. Remove the support of unmanaged connectors in HttpServer2. 
Contributed by Haohui Mai. (wheat9: rev 
2860eeb14a958a8861b9ad3d6bd685df48da8cd3)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Remove the support of unmanaged connectors in HttpServer2
 -

 Key: HDFS-7555
 URL: https://issues.apache.org/jira/browse/HDFS-7555
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.7.0

 Attachments: HDFS-7555.001.patch, HDFS-7555.002.patch, 
 HDFS-7555.003.patch


 After HDFS-7279 there is no need to support unmanaged connectors in 
 HttpServer2. This jira proposes to remove the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7557) Fix spacing for a few keys in DFSConfigKeys.java

2014-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255805#comment-14255805
 ] 

Hudson commented on HDFS-7557:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #50 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/50/])
HDFS-7557. Fix spacing for a few keys in DFSConfigKeys.java (Colin P. McCabe) 
(yliu: rev 7bc0a6d5c2358616da41971f6d8daab17a958f27)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java


 Fix spacing for a few keys in DFSConfigKeys.java
 

 Key: HDFS-7557
 URL: https://issues.apache.org/jira/browse/HDFS-7557
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7557.001.patch


 A few configuration keys in DFSConfigKeys.java are spaced with 3 spaces 
 rather than 2; let's fix this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7555) Remove the support of unmanaged connectors in HttpServer2

2014-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255804#comment-14255804
 ] 

Hudson commented on HDFS-7555:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #50 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/50/])
HDFS-7555. Remove the support of unmanaged connectors in HttpServer2. 
Contributed by Haohui Mai. (wheat9: rev 
2860eeb14a958a8861b9ad3d6bd685df48da8cd3)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java


 Remove the support of unmanaged connectors in HttpServer2
 -

 Key: HDFS-7555
 URL: https://issues.apache.org/jira/browse/HDFS-7555
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.7.0

 Attachments: HDFS-7555.001.patch, HDFS-7555.002.patch, 
 HDFS-7555.003.patch


 After HDFS-7279 there is no need to support unmanaged connectors in 
 HttpServer2. This jira proposes to remove the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7555) Remove the support of unmanaged connectors in HttpServer2

2014-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255834#comment-14255834
 ] 

Hudson commented on HDFS-7555:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2000 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2000/])
HDFS-7555. Remove the support of unmanaged connectors in HttpServer2. 
Contributed by Haohui Mai. (wheat9: rev 
2860eeb14a958a8861b9ad3d6bd685df48da8cd3)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Remove the support of unmanaged connectors in HttpServer2
 -

 Key: HDFS-7555
 URL: https://issues.apache.org/jira/browse/HDFS-7555
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.7.0

 Attachments: HDFS-7555.001.patch, HDFS-7555.002.patch, 
 HDFS-7555.003.patch


 After HDFS-7279 there is no need to support unmanaged connectors in 
 HttpServer2. This jira proposes to remove the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7557) Fix spacing for a few keys in DFSConfigKeys.java

2014-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255835#comment-14255835
 ] 

Hudson commented on HDFS-7557:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2000 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2000/])
HDFS-7557. Fix spacing for a few keys in DFSConfigKeys.java (Colin P. McCabe) 
(yliu: rev 7bc0a6d5c2358616da41971f6d8daab17a958f27)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Fix spacing for a few keys in DFSConfigKeys.java
 

 Key: HDFS-7557
 URL: https://issues.apache.org/jira/browse/HDFS-7557
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7557.001.patch


 A few configuration keys in DFSConfigKeys.java are spaced with 3 spaces 
 rather than 2; let's fix this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7563) NFS gateway parseStaticMap NumberFormatException

2014-12-22 Thread Hari Sekhon (JIRA)
Hari Sekhon created HDFS-7563:
-

 Summary: NFS gateway parseStaticMap NumberFormatException
 Key: HDFS-7563
 URL: https://issues.apache.org/jira/browse/HDFS-7563
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon


When using the new NFS UID mapping for the HDFS NFS gateway I've discovered 
that my Windows 7 workstation at this bank is passing UID number 4294xx but 
entering this in the /etc/nfs.map in order to remap that to a Hadoop UID 
prevents the NFS gateway from restarting with the error message:

{code}Exception in thread "main" java.lang.NumberFormatException: For input 
string: "4294xx"
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:495)
at java.lang.Integer.parseInt(Integer.java:527)
at 
org.apache.hadoop.security.ShellBasedIdMapping.parseStaticMap(ShellBasedIdMapping.java:318)
at 
org.apache.hadoop.security.ShellBasedIdMapping.updateMaps(ShellBasedIdMapping.java:229)
at 
org.apache.hadoop.security.ShellBasedIdMapping.init(ShellBasedIdMapping.java:91)
at 
org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.init(RpcProgramNfs3.java:176)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.init(Nfs3.java:45)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
{code}
The /etc/nfs.map file simply contains
{code}
uid 4294xx 1
{code}
It seems that the code at 
{code}hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java{code}
is expecting an integer at line 318 of the parseStaticMap method: {code}int 
remoteId = Integer.parseInt(lineMatcher.group(2));
int localId = Integer.parseInt(lineMatcher.group(3));{code}

This UID does seem very high to me, but it has worked successfully on a MapR-FS 
NFS share, which stores files created with that UID over NFS.

The UID / GID mappings for the HDFS NFS gateway will need to be switched to 
using Long to accommodate this; I've attached a patch for the parsing and the 
UID/GID HashMaps.
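A minimal sketch of that direction (variable names mirror the snippet above; 
this is not the attached patch): Long.parseLong accepts values above 
Integer.MAX_VALUE (2147483647), so a UID around 4.29 billion parses cleanly.
{code}
long remoteId = Long.parseLong(lineMatcher.group(2));
long localId = Long.parseLong(lineMatcher.group(3));
// ...and the UID/GID HashMaps would need Long keys/values to match.
{code}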

Regards,

Hari Sekhon
http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7563) NFS gateway parseStaticMap NumberFormatException

2014-12-22 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated HDFS-7563:
--
Attachment: UID_GID_Long_HashMaps.patch

Patch for Int => Long UID/GID mapping HashMaps

 NFS gateway parseStaticMap NumberFormatException
 

 Key: HDFS-7563
 URL: https://issues.apache.org/jira/browse/HDFS-7563
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
 Attachments: UID_GID_Long_HashMaps.patch


 When using the new NFS UID mapping for the HDFS NFS gateway I've discovered 
 that my Windows 7 workstation at this bank is passing UID number 4294xx 
 but entering this in the /etc/nfs.map in order to remap that to a Hadoop UID 
 prevents the NFS gateway from restarting with the error message:
 {code}Exception in thread "main" java.lang.NumberFormatException: For input 
 string: "4294xx"
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:495)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.parseStaticMap(ShellBasedIdMapping.java:318)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.updateMaps(ShellBasedIdMapping.java:229)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.init(ShellBasedIdMapping.java:91)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.init(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.init(Nfs3.java:45)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
 {code}
 The /etc/nfs.map file simply contains
 {code}
 uid 4294xx 1
 {code}
 It seems that the code at 
 {code}hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java{code}
 is expecting an integer at line 318 of the parseStaticMap method: {code}int 
 remoteId = Integer.parseInt(lineMatcher.group(2));
 int localId = Integer.parseInt(lineMatcher.group(3));{code}
 This UID does seem very high to me, but it has worked successfully on a 
 MapR-FS NFS share, which stores files created with that UID over NFS.
 The UID / GID mappings for the HDFS NFS gateway will need to be switched to 
 using Long to accommodate this; I've attached a patch for the parsing and the 
 UID/GID HashMaps.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7563) NFS gateway parseStaticMap NumberFormatException

2014-12-22 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated HDFS-7563:
--
Assignee: Aaron T. Myers
  Status: Patch Available  (was: Open)

Quick patch, not tested since I don't have the build infrastructure

 NFS gateway parseStaticMap NumberFormatException
 

 Key: HDFS-7563
 URL: https://issues.apache.org/jira/browse/HDFS-7563
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Aaron T. Myers
 Attachments: UID_GID_Long_HashMaps.patch


 When using the new NFS UID mapping for the HDFS NFS gateway I've discovered 
 that my Windows 7 workstation at this bank is passing UID number 4294xx 
 but entering this in the /etc/nfs.map in order to remap that to a Hadoop UID 
 prevents the NFS gateway from restarting with the error message:
 {code}Exception in thread "main" java.lang.NumberFormatException: For input 
 string: "4294xx"
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:495)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.parseStaticMap(ShellBasedIdMapping.java:318)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.updateMaps(ShellBasedIdMapping.java:229)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.init(ShellBasedIdMapping.java:91)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.init(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.init(Nfs3.java:45)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
 {code}
 The /etc/nfs.map file simply contains
 {code}
 uid 4294xx 1
 {code}
 It seems that the code at 
 {code}hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java{code}
 is expecting an integer at line 318 of the parseStaticMap method: {code}int 
 remoteId = Integer.parseInt(lineMatcher.group(2));
 int localId = Integer.parseInt(lineMatcher.group(3));{code}
 This UID does seem very high to me, but it has worked successfully on a 
 MapR-FS NFS share, which stores files created with that UID over NFS.
 The UID / GID mappings for the HDFS NFS gateway will need to be switched to 
 using Long to accommodate this; I've attached a patch for the parsing and the 
 UID/GID HashMaps.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7563) NFS gateway parseStaticMap NumberFormatException

2014-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255856#comment-14255856
 ] 

Hadoop QA commented on HDFS-7563:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12688658/UID_GID_Long_HashMaps.patch
  against trunk revision a696fbb.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9109//console

This message is automatically generated.

 NFS gateway parseStaticMap NumberFormatException
 

 Key: HDFS-7563
 URL: https://issues.apache.org/jira/browse/HDFS-7563
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Aaron T. Myers
 Attachments: UID_GID_Long_HashMaps.patch


 When using the new NFS UID mapping for the HDFS NFS gateway I've discovered 
 that my Windows 7 workstation at this bank is passing UID number 4294xx 
 but entering this in the /etc/nfs.map in order to remap that to a Hadoop UID 
 prevents the NFS gateway from restarting with the error message:
 {code}Exception in thread "main" java.lang.NumberFormatException: For input 
 string: "4294xx"
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:495)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.parseStaticMap(ShellBasedIdMapping.java:318)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.updateMaps(ShellBasedIdMapping.java:229)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.init(ShellBasedIdMapping.java:91)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.init(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.init(Nfs3.java:45)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
 {code}
 The /etc/nfs.map file simply contains
 {code}
 uid 4294xx 1
 {code}
 It seems that the code at 
 {code}hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java{code}
 is expecting an integer at line 318 of the parseStaticMap method: {code}int 
 remoteId = Integer.parseInt(lineMatcher.group(2));
 int localId = Integer.parseInt(lineMatcher.group(3));{code}
 This UID does seem very high to me, but it has worked successfully on a 
 MapR-FS NFS share, which stores files created with that UID over NFS.
 The UID / GID mappings for the HDFS NFS gateway will need to be switched to 
 using Long to accommodate this; I've attached a patch for the parsing and the 
 UID/GID HashMaps.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7564) NFS gateway dynamically reload UID/GID mapping file /etc/nfs.map

2014-12-22 Thread Hari Sekhon (JIRA)
Hari Sekhon created HDFS-7564:
-

 Summary: NFS gateway dynamically reload UID/GID mapping file 
/etc/nfs.map
 Key: HDFS-7564
 URL: https://issues.apache.org/jira/browse/HDFS-7564
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Priority: Minor


Add dynamic reload of the NFS gateway UID/GID mappings file /etc/nfs.map 
(default for static.id.mapping.file).

It seems that this file is currently only loaded upon restart of the NFS 
gateway, which would cause active clients to hang or fail.
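A minimal sketch of one way to do this (an assumption, not the gateway's 
actual code): re-read the file when its modification time changes, checked on 
each lookup or on a timer.
{code}
File staticMapFile = new File("/etc/nfs.map");
long lastLoaded = 0L;               // mtime of the last successful load

void reloadIfChanged() {
  long mtime = staticMapFile.lastModified();
  if (mtime > lastLoaded) {
    // re-parse the uid/gid entries here, then swap in the new maps
    lastLoaded = mtime;
  }
}
{code}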

Regards,

Hari Sekhon
http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7563) NFS gateway parseStaticMap NumberFormatException

2014-12-22 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated HDFS-7563:
--
Attachment: (was: UID_GID_Long_HashMaps.patch)

 NFS gateway parseStaticMap NumberFormatException
 

 Key: HDFS-7563
 URL: https://issues.apache.org/jira/browse/HDFS-7563
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Aaron T. Myers

 When using the new NFS UID mapping for the HDFS NFS gateway I've discovered 
 that my Windows 7 workstation at this bank is passing UID number 4294xx 
 but entering this in the /etc/nfs.map in order to remap that to a Hadoop UID 
 prevents the NFS gateway from restarting with the error message:
 {code}Exception in thread "main" java.lang.NumberFormatException: For input 
 string: "4294xx"
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:495)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.parseStaticMap(ShellBasedIdMapping.java:318)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.updateMaps(ShellBasedIdMapping.java:229)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.init(ShellBasedIdMapping.java:91)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.init(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.init(Nfs3.java:45)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
 {code}
 The /etc/nfs.map file simply contains
 {code}
 uid 4294xx 1
 {code}
 It seems that the code at 
 {code}hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java{code}
 is expecting an integer at line 318 of the parseStaticMap method: {code}int 
 remoteId = Integer.parseInt(lineMatcher.group(2));
 int localId = Integer.parseInt(lineMatcher.group(3));{code}
 This UID does seem very high to me, but it has worked successfully on a 
 MapR-FS NFS share, which stores files created with that UID over NFS.
 The UID / GID mappings for the HDFS NFS gateway will need to be switched to 
 using Long to accommodate this; I've attached a patch for the parsing and the 
 UID/GID HashMaps.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7563) NFS gateway parseStaticMap NumberFormatException

2014-12-22 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated HDFS-7563:
--
Attachment: UID_GID_Long_HashMaps.patch

 NFS gateway parseStaticMap NumberFormatException
 

 Key: HDFS-7563
 URL: https://issues.apache.org/jira/browse/HDFS-7563
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Aaron T. Myers
 Attachments: UID_GID_Long_HashMaps.patch


 When using the new NFS UID mapping for the HDFS NFS gateway I've discovered 
 that my Windows 7 workstation at this bank is passing UID number 4294xx 
 but entering this in the /etc/nfs.map in order to remap that to a Hadoop UID 
 prevents the NFS gateway from restarting with the error message:
 {code}Exception in thread "main" java.lang.NumberFormatException: For input 
 string: "4294xx"
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:495)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.parseStaticMap(ShellBasedIdMapping.java:318)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.updateMaps(ShellBasedIdMapping.java:229)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.init(ShellBasedIdMapping.java:91)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.init(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.init(Nfs3.java:45)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
 {code}
 The /etc/nfs.map file simply contains
 {code}
 uid 4294xx 1
 {code}
 It seems that the code at 
 {code}hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java{code}
 is expecting an integer at line 318 of the parseStaticMap method: {code}int 
 remoteId = Integer.parseInt(lineMatcher.group(2));
 int localId = Integer.parseInt(lineMatcher.group(3));{code}
 This UID does seem very high to me, but it has worked successfully on a 
 MapR-FS NFS share, which stores files created with that UID over NFS.
 The UID / GID mappings for the HDFS NFS gateway will need to be switched to 
 using Long to accommodate this; I've attached a patch for the parsing and the 
 UID/GID HashMaps.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7563) NFS gateway parseStaticMap NumberFormatException

2014-12-22 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated HDFS-7563:
--
Status: Patch Available  (was: Open)

 NFS gateway parseStaticMap NumberFormatException
 

 Key: HDFS-7563
 URL: https://issues.apache.org/jira/browse/HDFS-7563
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Aaron T. Myers
 Attachments: UID_GID_Long_HashMaps.patch


 When using the new NFS UID mapping for the HDFS NFS gateway I've discovered 
 that my Windows 7 workstation at this bank is passing UID number 4294xx 
 but entering this in the /etc/nfs.map in order to remap that to a Hadoop UID 
 prevents the NFS gateway from restarting with the error message:
 {code}Exception in thread "main" java.lang.NumberFormatException: For input 
 string: "4294xx"
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:495)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.parseStaticMap(ShellBasedIdMapping.java:318)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.updateMaps(ShellBasedIdMapping.java:229)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.init(ShellBasedIdMapping.java:91)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.init(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.init(Nfs3.java:45)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
 {code}
 The /etc/nfs.map file simply contains
 {code}
 uid 4294xx 1
 {code}
 It seems that the code at 
 {code}hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java{code}
 is expecting an integer at line 318 of the parseStaticMap method: {code}int 
 remoteId = Integer.parseInt(lineMatcher.group(2));
 int localId = Integer.parseInt(lineMatcher.group(3));{code}
 This UID does seem very high to me, but it has worked successfully on a 
 MapR-FS NFS share, which stores files created with that UID over NFS.
 The UID / GID mappings for the HDFS NFS gateway will need to be switched to 
 using Long to accommodate this; I've attached a patch for the parsing and the 
 UID/GID HashMaps.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7563) NFS gateway parseStaticMap NumberFormatException

2014-12-22 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated HDFS-7563:
--
Status: Open  (was: Patch Available)

 NFS gateway parseStaticMap NumberFormatException
 

 Key: HDFS-7563
 URL: https://issues.apache.org/jira/browse/HDFS-7563
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Aaron T. Myers
 Attachments: UID_GID_Long_HashMaps.patch


 When using the new NFS UID mapping for the HDFS NFS gateway I've discovered 
 that my Windows 7 workstation at this bank is passing UID number 4294xx 
 but entering this in the /etc/nfs.map in order to remap that to a Hadoop UID 
 prevents the NFS gateway from restarting with the error message:
 {code}Exception in thread "main" java.lang.NumberFormatException: For input 
 string: "4294xx"
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:495)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.parseStaticMap(ShellBasedIdMapping.java:318)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.updateMaps(ShellBasedIdMapping.java:229)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.init(ShellBasedIdMapping.java:91)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.init(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.init(Nfs3.java:45)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
 {code}
 The /etc/nfs.map file simply contains
 {code}
 uid 4294xx 1
 {code}
 It seems that the code at 
 {code}hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java{code}
 is expecting an integer at line 318 of the parseStaticMap method: {code}int 
 remoteId = Integer.parseInt(lineMatcher.group(2));
 int localId = Integer.parseInt(lineMatcher.group(3));{code}
 This UID does seem very high to me, but it has worked successfully on a 
 MapR-FS NFS share, which stores files created with that UID over NFS.
 The UID / GID mappings for the HDFS NFS gateway will need to be switched to 
 using Long to accommodate this; I've attached a patch for the parsing and the 
 UID/GID HashMaps.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7563) NFS gateway parseStaticMap NumberFormatException

2014-12-22 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated HDFS-7563:
--
Status: In Progress  (was: Patch Available)

 NFS gateway parseStaticMap NumberFormatException
 

 Key: HDFS-7563
 URL: https://issues.apache.org/jira/browse/HDFS-7563
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Aaron T. Myers
 Attachments: UID_GID_Long_HashMaps.patch


 When using the new NFS UID mapping for the HDFS NFS gateway, I've discovered 
 that my Windows 7 workstation at this bank is passing UID number 4294xx, 
 but entering this in /etc/nfs.map in order to remap it to a Hadoop UID 
 prevents the NFS gateway from restarting, failing with the error message:
 {code}Exception in thread "main" java.lang.NumberFormatException: For input 
 string: "4294xx"
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:495)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.parseStaticMap(ShellBasedIdMapping.java:318)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.updateMaps(ShellBasedIdMapping.java:229)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.<init>(ShellBasedIdMapping.java:91)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:45)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
 {code}
 The /etc/nfs.map file simply contains
 {code}
 uid 4294xx 1
 {code}
 It seems that the code at 
 {code}hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java{code}
 is expecting an integer at line 318 of the parseStaticMap method: {code}int 
 remoteId = Integer.parseInt(lineMatcher.group(2));
 int localId = Integer.parseInt(lineMatcher.group(3));{code}
 This UID does seem very high to me, but it has worked successfully on a 
 MapR-FS NFS share, which stores files created with that UID over NFS.
 The UID/GID mappings for the HDFS NFS gateway will need to be switched to 
 using Long to accommodate this; I've attached a patch for the parsing and 
 the UID/GID HashMaps.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7563) NFS gateway parseStaticMap NumberFormatException

2014-12-22 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated HDFS-7563:
--
Attachment: (was: UID_GID_Long_HashMaps.patch)

 NFS gateway parseStaticMap NumberFormatException
 

 Key: HDFS-7563
 URL: https://issues.apache.org/jira/browse/HDFS-7563
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Aaron T. Myers
 Attachments: UID_GID_Long_HashMaps.patch


 When using the new NFS UID mapping for the HDFS NFS gateway, I've discovered 
 that my Windows 7 workstation at this bank is passing UID number 4294xx, 
 but entering this in /etc/nfs.map in order to remap it to a Hadoop UID 
 prevents the NFS gateway from restarting, failing with the error message:
 {code}Exception in thread "main" java.lang.NumberFormatException: For input 
 string: "4294xx"
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:495)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.parseStaticMap(ShellBasedIdMapping.java:318)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.updateMaps(ShellBasedIdMapping.java:229)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.<init>(ShellBasedIdMapping.java:91)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:45)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
 {code}
 The /etc/nfs.map file simply contains
 {code}
 uid 4294xx 1
 {code}
 It seems that the code at 
 {code}hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java{code}
 is expecting an integer at line 318 of the parseStaticMap method: {code}int 
 remoteId = Integer.parseInt(lineMatcher.group(2));
 int localId = Integer.parseInt(lineMatcher.group(3));{code}
 This UID does seem very high to me, but it has worked successfully on a 
 MapR-FS NFS share, which stores files created with that UID over NFS.
 The UID/GID mappings for the HDFS NFS gateway will need to be switched to 
 using Long to accommodate this; I've attached a patch for the parsing and 
 the UID/GID HashMaps.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7563) NFS gateway parseStaticMap NumberFormatException

2014-12-22 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated HDFS-7563:
--
Attachment: UID_GID_Long_HashMaps.patch

 NFS gateway parseStaticMap NumberFormatException
 

 Key: HDFS-7563
 URL: https://issues.apache.org/jira/browse/HDFS-7563
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Aaron T. Myers
 Attachments: UID_GID_Long_HashMaps.patch


 When using the new NFS UID mapping for the HDFS NFS gateway, I've discovered 
 that my Windows 7 workstation at this bank is passing UID number 4294xx, 
 but entering this in /etc/nfs.map in order to remap it to a Hadoop UID 
 prevents the NFS gateway from restarting, failing with the error message:
 {code}Exception in thread "main" java.lang.NumberFormatException: For input 
 string: "4294xx"
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:495)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.parseStaticMap(ShellBasedIdMapping.java:318)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.updateMaps(ShellBasedIdMapping.java:229)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.<init>(ShellBasedIdMapping.java:91)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:45)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
 {code}
 The /etc/nfs.map file simply contains
 {code}
 uid 4294xx 1
 {code}
 It seems that the code at 
 {code}hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java{code}
 is expecting an integer at line 318 of the parseStaticMap method: {code}int 
 remoteId = Integer.parseInt(lineMatcher.group(2));
 int localId = Integer.parseInt(lineMatcher.group(3));{code}
 This UID does seem very high to me, but it has worked successfully on a 
 MapR-FS NFS share, which stores files created with that UID over NFS.
 The UID/GID mappings for the HDFS NFS gateway will need to be switched to 
 using Long to accommodate this; I've attached a patch for the parsing and 
 the UID/GID HashMaps.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7563) NFS gateway parseStaticMap NumberFormatException

2014-12-22 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated HDFS-7563:
--
Status: Patch Available  (was: In Progress)

 NFS gateway parseStaticMap NumberFormatException
 

 Key: HDFS-7563
 URL: https://issues.apache.org/jira/browse/HDFS-7563
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Hari Sekhon
 Attachments: UID_GID_Long_HashMaps.patch


 When using the new NFS UID mapping for the HDFS NFS gateway, I've discovered 
 that my Windows 7 workstation at this bank is passing UID number 4294xx, 
 but entering this in /etc/nfs.map in order to remap it to a Hadoop UID 
 prevents the NFS gateway from restarting, failing with the error message:
 {code}Exception in thread "main" java.lang.NumberFormatException: For input 
 string: "4294xx"
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:495)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.parseStaticMap(ShellBasedIdMapping.java:318)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.updateMaps(ShellBasedIdMapping.java:229)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.<init>(ShellBasedIdMapping.java:91)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:45)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
 {code}
 The /etc/nfs.map file simply contains
 {code}
 uid 4294xx 1
 {code}
 It seems that the code at 
 {code}hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java{code}
 is expecting an integer at line 318 of the parseStaticMap method: {code}int 
 remoteId = Integer.parseInt(lineMatcher.group(2));
 int localId = Integer.parseInt(lineMatcher.group(3));{code}
 This UID does seem very high to me, but it has worked successfully on a 
 MapR-FS NFS share, which stores files created with that UID over NFS.
 The UID/GID mappings for the HDFS NFS gateway will need to be switched to 
 using Long to accommodate this; I've attached a patch for the parsing and 
 the UID/GID HashMaps.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7563) NFS gateway parseStaticMap NumberFormatException

2014-12-22 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon reassigned HDFS-7563:
-

Assignee: Hari Sekhon  (was: Aaron T. Myers)

 NFS gateway parseStaticMap NumberFormatException
 

 Key: HDFS-7563
 URL: https://issues.apache.org/jira/browse/HDFS-7563
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Hari Sekhon
 Attachments: UID_GID_Long_HashMaps.patch


 When using the new NFS UID mapping for the HDFS NFS gateway, I've discovered 
 that my Windows 7 workstation at this bank is passing UID number 4294xx, 
 but entering this in /etc/nfs.map in order to remap it to a Hadoop UID 
 prevents the NFS gateway from restarting, failing with the error message:
 {code}Exception in thread "main" java.lang.NumberFormatException: For input 
 string: "4294xx"
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:495)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.parseStaticMap(ShellBasedIdMapping.java:318)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.updateMaps(ShellBasedIdMapping.java:229)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.<init>(ShellBasedIdMapping.java:91)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:45)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
 {code}
 The /etc/nfs.map file simply contains
 {code}
 uid 4294xx 1
 {code}
 It seems that the code at 
 {code}hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java{code}
 is expecting an integer at line 318 of the parseStaticMap method: {code}int 
 remoteId = Integer.parseInt(lineMatcher.group(2));
 int localId = Integer.parseInt(lineMatcher.group(3));{code}
 This UID does seem very high to me, but it has worked successfully on a 
 MapR-FS NFS share, which stores files created with that UID over NFS.
 The UID/GID mappings for the HDFS NFS gateway will need to be switched to 
 using Long to accommodate this; I've attached a patch for the parsing and 
 the UID/GID HashMaps.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7563) NFS gateway parseStaticMap NumberFormatException

2014-12-22 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated HDFS-7563:
--
Assignee: Aaron T. Myers  (was: Hari Sekhon)

 NFS gateway parseStaticMap NumberFormatException
 

 Key: HDFS-7563
 URL: https://issues.apache.org/jira/browse/HDFS-7563
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Aaron T. Myers
 Attachments: UID_GID_Long_HashMaps.patch


 When using the new NFS UID mapping for the HDFS NFS gateway, I've discovered 
 that my Windows 7 workstation at this bank is passing UID number 4294xx, 
 but entering this in /etc/nfs.map in order to remap it to a Hadoop UID 
 prevents the NFS gateway from restarting, failing with the error message:
 {code}Exception in thread "main" java.lang.NumberFormatException: For input 
 string: "4294xx"
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:495)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.parseStaticMap(ShellBasedIdMapping.java:318)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.updateMaps(ShellBasedIdMapping.java:229)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.<init>(ShellBasedIdMapping.java:91)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:45)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
 {code}
 The /etc/nfs.map file simply contains
 {code}
 uid 4294xx 1
 {code}
 It seems that the code at 
 {code}hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java{code}
 is expecting an integer at line 318 of the parseStaticMap method: {code}int 
 remoteId = Integer.parseInt(lineMatcher.group(2));
 int localId = Integer.parseInt(lineMatcher.group(3));{code}
 This UID does seem very high to me, but it has worked successfully on a 
 MapR-FS NFS share, which stores files created with that UID over NFS.
 The UID/GID mappings for the HDFS NFS gateway will need to be switched to 
 using Long to accommodate this; I've attached a patch for the parsing and 
 the UID/GID HashMaps.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7565) NFS gateway UID overflow

2014-12-22 Thread Hari Sekhon (JIRA)
Hari Sekhon created HDFS-7565:
-

 Summary: NFS gateway UID overflow
 Key: HDFS-7565
 URL: https://issues.apache.org/jira/browse/HDFS-7565
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon


It appears that my Windows 7 workstation is passing a UID in the 4 billion 
range, and parts of the code are using Java ints, so it looks like the UID is 
overflowing and coming out as -2:
{code}security.ShellBasedIdMapping (ShellBasedIdMapping.java:getUserName(358)) 
- Can't find user name for uid -2. Use default user name nobody{code}
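As a quick illustration of the suspected overflow (a sketch, not gateway code): a 
value just below 2^32, narrowed to a Java int, wraps to -2, matching the log 
message above.
{code}
public class UidOverflowSketch {
  public static void main(String[] args) {
    long unsignedUid = 4294967294L;       // 2^32 - 2, a common NFS "nobody" id
    int asInt = (int) unsignedUid;        // narrowing keeps the low 32 bits
    System.out.println(asInt);            // prints -2
    long recovered = asInt & 0xFFFFFFFFL; // or Integer.toUnsignedLong(asInt)
    System.out.println(recovered);        // prints 4294967294
  }
}
{code}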



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7565) NFS gateway UID overflow

2014-12-22 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated HDFS-7565:
--
Description: 
It appears that my Windows 7 workstation is passing a UID of around 4 billion to 
the NFS gateway, and the getUserName() method is being passed -2, so it looks 
like the UID is stored as an int and is overflowing:
{code}security.ShellBasedIdMapping (ShellBasedIdMapping.java:getUserName(358)) 
- Can't find user name for uid -2. Use default user name nobody{code}

  was:
It appears that my Windows 7 workstation is passing a UID in the 4 billion 
range, and parts of the code are using Java ints, so it looks like the UID is 
overflowing and coming out as -2:
{code}security.ShellBasedIdMapping (ShellBasedIdMapping.java:getUserName(358)) 
- Can't find user name for uid -2. Use default user name nobody{code}


 NFS gateway UID overflow
 

 Key: HDFS-7565
 URL: https://issues.apache.org/jira/browse/HDFS-7565
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon

 It appears that my Windows 7 workstation is passing a UID of around 4 billion to 
 the NFS gateway, and the getUserName() method is being passed -2, so it 
 looks like the UID is stored as an int and is overflowing:
 {code}security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:getUserName(358)) - Can't find user name for uid 
 -2. Use default user name nobody{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7564) NFS gateway dynamically reload UID/GID mapping file /etc/nfs.map

2014-12-22 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang reassigned HDFS-7564:
---

Assignee: Yongjun Zhang

 NFS gateway dynamically reload UID/GID mapping file /etc/nfs.map
 

 Key: HDFS-7564
 URL: https://issues.apache.org/jira/browse/HDFS-7564
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Yongjun Zhang
Priority: Minor

 Add dynamic reload of the NFS gateway UID/GID mappings file /etc/nfs.map 
 (default for static.id.mapping.file).
 It seems that this is currently only loaded upon restart of the NFS gateway, 
 which would cause active clients to hang or fail.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7564) NFS gateway dynamically reload UID/GID mapping file /etc/nfs.map

2014-12-22 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255901#comment-14255901
 ] 

Yongjun Zhang commented on HDFS-7564:
-

Hi Hari,

Thanks for reporting the issue. Would you please provide some more details 
about how the hang/fail relates to the loading of the map file? What's the 
symptom? Is there any specific entry in the map file that's causing trouble? 
And could you share some logs that would indicate it, if there are any? Thanks.



 NFS gateway dynamically reload UID/GID mapping file /etc/nfs.map
 

 Key: HDFS-7564
 URL: https://issues.apache.org/jira/browse/HDFS-7564
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Yongjun Zhang
Priority: Minor

 Add dynamic reload of the NFS gateway UID/GID mappings file /etc/nfs.map 
 (default for static.id.mapping.file).
 It seems that this is currently only loaded upon restart of the NFS gateway, 
 which would cause active clients to hang or fail.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7563) NFS gateway parseStaticMap NumberFormatException

2014-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255907#comment-14255907
 ] 

Hadoop QA commented on HDFS-7563:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12688665/UID_GID_Long_HashMaps.patch
  against trunk revision a696fbb.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9111//console

This message is automatically generated.

 NFS gateway parseStaticMap NumberFormatException
 

 Key: HDFS-7563
 URL: https://issues.apache.org/jira/browse/HDFS-7563
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Aaron T. Myers
 Attachments: UID_GID_Long_HashMaps.patch


 When using the new NFS UID mapping for the HDFS NFS gateway, I've discovered 
 that my Windows 7 workstation at this bank is passing UID number 4294xx, 
 but entering this in /etc/nfs.map in order to remap it to a Hadoop UID 
 prevents the NFS gateway from restarting, failing with the error message:
 {code}Exception in thread "main" java.lang.NumberFormatException: For input 
 string: "4294xx"
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:495)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.parseStaticMap(ShellBasedIdMapping.java:318)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.updateMaps(ShellBasedIdMapping.java:229)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.<init>(ShellBasedIdMapping.java:91)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:45)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
 {code}
 The /etc/nfs.map file simply contains
 {code}
 uid 4294xx 1
 {code}
 It seems that the code at 
 {code}hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java{code}
 is expecting an integer at line 318 of the parseStaticMap method: {code}int 
 remoteId = Integer.parseInt(lineMatcher.group(2));
 int localId = Integer.parseInt(lineMatcher.group(3));{code}
 This UID does seem very high to me, but it has worked successfully on a 
 MapR-FS NFS share, which stores files created with that UID over NFS.
 The UID/GID mappings for the HDFS NFS gateway will need to be switched to 
 using Long to accommodate this; I've attached a patch for the parsing and 
 the UID/GID HashMaps.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7564) NFS gateway dynamically reload UID/GID mapping file /etc/nfs.map

2014-12-22 Thread Hari Sekhon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255911#comment-14255911
 ] 

Hari Sekhon commented on HDFS-7564:
---

It's not causing any hang... what I'm saying is that currently the UID/GID 
mapping file seems to only be loaded at NFS gateway startup... whereas it would 
be good to poll the file and re-read + re-process it on changes, say once 
every minute, or on a SIGHUP or something.

At the moment, having to restart the NFS gateway to pick up changes would result 
in the NFS mount points on clients hanging.
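
A minimal sketch of the kind of poll described above (hypothetical class and 
method names, not the attached patch): watch the file's modification time and 
re-parse it when it changes.
{code}
import java.io.File;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class StaticMapReloaderSketch {
  private final File mapFile = new File("/etc/nfs.map");
  private volatile long lastModified;

  public void start() {
    ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();
    // Check once a minute, as suggested above.
    poller.scheduleWithFixedDelay(() -> {
      long mtime = mapFile.lastModified();
      if (mtime != lastModified) {
        lastModified = mtime;
        reload(); // re-read and re-process the UID/GID mappings
      }
    }, 1, 1, TimeUnit.MINUTES);
  }

  private void reload() {
    // Placeholder: parse the file into fresh maps and swap them in
    // atomically so in-flight NFS requests never see a half-built map.
  }
}
{code}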

 NFS gateway dynamically reload UID/GID mapping file /etc/nfs.map
 

 Key: HDFS-7564
 URL: https://issues.apache.org/jira/browse/HDFS-7564
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Yongjun Zhang
Priority: Minor

 Add dynamic reload of the NFS gateway UID/GID mappings file /etc/nfs.map 
 (default for static.id.mapping.file).
 It seems that this is currently only loaded upon restart of the NFS gateway, 
 which would cause active clients to hang or fail.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7564) NFS gateway dynamically reload UID/GID mapping file /etc/nfs.map

2014-12-22 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated HDFS-7564:
--
Description: 
Add dynamic reload of the NFS gateway UID/GID mappings file /etc/nfs.map 
(default for static.id.mapping.file).

It seems that the mappings file is currently only read upon restart of the NFS 
gateway, which would cause any active clients' NFS mount points to hang or fail.

Regards,

Hari Sekhon
http://www.linkedin.com/in/harisekhon

  was:
Add dynamic reload of the NFS gateway UID/GID mappings file /etc/nfs.map 
(default for static.id.mapping.file).

It seems that this is currently only loaded upon restart of the NFS gateway, 
which would cause active clients to hang or fail.

Regards,

Hari Sekhon
http://www.linkedin.com/in/harisekhon


 NFS gateway dynamically reload UID/GID mapping file /etc/nfs.map
 

 Key: HDFS-7564
 URL: https://issues.apache.org/jira/browse/HDFS-7564
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Yongjun Zhang
Priority: Minor

 Add dynamic reload of the NFS gateway UID/GID mappings file /etc/nfs.map 
 (default for static.id.mapping.file).
 It seems that the mappings file is currently only read upon restart of the 
 NFS gateway, which would cause any active clients' NFS mount points to hang or 
 fail.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7564) NFS gateway dynamically reload UID/GID mapping file /etc/nfs.map

2014-12-22 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255929#comment-14255929
 ] 

Yongjun Zhang commented on HDFS-7564:
-

Thanks for the clarification, Hari. 


 NFS gateway dynamically reload UID/GID mapping file /etc/nfs.map
 

 Key: HDFS-7564
 URL: https://issues.apache.org/jira/browse/HDFS-7564
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Yongjun Zhang
Priority: Minor

 Add dynamic reload of the NFS gateway UID/GID mappings file /etc/nfs.map 
 (default for static.id.mapping.file).
 It seems that the mappings file is currently only read upon restart of the 
 NFS gateway, which would cause any active clients' NFS mount points to hang or 
 fail.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7563) NFS gateway parseStaticMap NumberFormatException

2014-12-22 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255971#comment-14255971
 ] 

Yongjun Zhang commented on HDFS-7563:
-

Hi Hari,

I think HDFS-6361 fixed the issue. Would you please verify whether you have 
the HDFS-6361 fix? Thanks.


 NFS gateway parseStaticMap NumberFormatException
 

 Key: HDFS-7563
 URL: https://issues.apache.org/jira/browse/HDFS-7563
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: Aaron T. Myers
 Attachments: UID_GID_Long_HashMaps.patch


 When using the new NFS UID mapping for the HDFS NFS gateway, I've discovered 
 that my Windows 7 workstation at this bank is passing UID number 4294xx, 
 but entering this in /etc/nfs.map in order to remap it to a Hadoop UID 
 prevents the NFS gateway from restarting, failing with the error message:
 {code}Exception in thread "main" java.lang.NumberFormatException: For input 
 string: "4294xx"
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:495)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.parseStaticMap(ShellBasedIdMapping.java:318)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.updateMaps(ShellBasedIdMapping.java:229)
 at 
 org.apache.hadoop.security.ShellBasedIdMapping.<init>(ShellBasedIdMapping.java:91)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:45)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
 {code}
 The /etc/nfs.map file simply contains
 {code}
 uid 4294xx 1
 {code}
 It seems that the code at 
 {code}hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java{code}
 is expecting an integer at line 318 of the parseStaticMap method: {code}int 
 remoteId = Integer.parseInt(lineMatcher.group(2));
 int localId = Integer.parseInt(lineMatcher.group(3));{code}
 This UID does seem very high to me, but it has worked successfully on a 
 MapR-FS NFS share, which stores files created with that UID over NFS.
 The UID/GID mappings for the HDFS NFS gateway will need to be switched to 
 using Long to accommodate this; I've attached a patch for the parsing and 
 the UID/GID HashMaps.
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7484) Simplify the workflow of calculating permission in mkdirs()

2014-12-22 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7484:

Attachment: HDFS-7484.008.patch

 Simplify the workflow of calculating permission in mkdirs()
 ---

 Key: HDFS-7484
 URL: https://issues.apache.org/jira/browse/HDFS-7484
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Jing Zhao
 Attachments: HDFS-7484.000.patch, HDFS-7484.001.patch, 
 HDFS-7484.002.patch, HDFS-7484.003.patch, HDFS-7484.004.patch, 
 HDFS-7484.005.patch, HDFS-7484.006.patch, HDFS-7484.007.patch, 
 HDFS-7484.008.patch


 {{FSDirMkdirsOp#mkdirsRecursively()}} currently calculates the permissions 
 based on whether {{inheritPermission}} is true. This jira proposes to 
 simplify the workflow and make it explicit for the caller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7561) TestFetchImage should write fetched-image-dir under target.

2014-12-22 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256056#comment-14256056
 ] 

Konstantin Shvachko commented on HDFS-7561:
---

This does fix the problem when you run the test from the command line. But if I run 
it in the debugger, it still creates fetched-image-dir under hadoop-hdfs.

 TestFetchImage should write fetched-image-dir under target.
 ---

 Key: HDFS-7561
 URL: https://issues.apache.org/jira/browse/HDFS-7561
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
 Attachments: HDFS-7561-001.txt


 {{TestFetchImage}} creates the directory {{fetched-image-dir}} under hadoop-hdfs, 
 which is then never cleaned up. The problem is that it uses the build.test.dir 
 property, which seems to be invalid. It should probably use 
 {{MiniDFSCluster.getBaseDirectory()}}.
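 A sketch of the suggested direction (illustrative only; 
 {{MiniDFSCluster.getBaseDirectory()}} honors the test.build.data property, 
 which the Maven build points under target/):
 {code}
 import java.io.File;
 import org.apache.hadoop.hdfs.MiniDFSCluster;

 class FetchedImageDirSketch {
   static File fetchedImageDir() {
     // Created under the test base directory (target/ in Maven runs),
     // so "mvn clean" removes it.
     File dir = new File(MiniDFSCluster.getBaseDirectory(), "fetched-image-dir");
     dir.mkdirs();
     return dir;
   }
 }
 {code}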



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7484) Simplify the workflow of calculating permission in mkdirs()

2014-12-22 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256057#comment-14256057
 ] 

Jing Zhao commented on HDFS-7484:
-

The patch is now mainly focused on making {{FSDirectory#addINode}} take only 
existing INodes as its parameter (instead of an INodesInPath instance 
containing null elements). I will change the JIRA description and title 
accordingly.

 Simplify the workflow of calculating permission in mkdirs()
 ---

 Key: HDFS-7484
 URL: https://issues.apache.org/jira/browse/HDFS-7484
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Jing Zhao
 Attachments: HDFS-7484.000.patch, HDFS-7484.001.patch, 
 HDFS-7484.002.patch, HDFS-7484.003.patch, HDFS-7484.004.patch, 
 HDFS-7484.005.patch, HDFS-7484.006.patch, HDFS-7484.007.patch, 
 HDFS-7484.008.patch


 {{FSDirMkdirsOp#mkdirsRecursively()}} currently calculates the permissions 
 based on whether {{inheritPermission}} is true. This jira proposes to 
 simplify the workflow and make it explicit for the caller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7484) Make FSDirectory#addINode take existing INodes as its parameter

2014-12-22 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7484:

Summary: Make FSDirectory#addINode take existing INodes as its parameter  
(was: Simplify the workflow of calculating permission in mkdirs())

 Make FSDirectory#addINode take existing INodes as its parameter
 ---

 Key: HDFS-7484
 URL: https://issues.apache.org/jira/browse/HDFS-7484
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Jing Zhao
 Attachments: HDFS-7484.000.patch, HDFS-7484.001.patch, 
 HDFS-7484.002.patch, HDFS-7484.003.patch, HDFS-7484.004.patch, 
 HDFS-7484.005.patch, HDFS-7484.006.patch, HDFS-7484.007.patch, 
 HDFS-7484.008.patch


 {{FSDirMkdirsOp#mkdirsRecursively()}} currently calculates the permissions 
 based on whether {{inheritPermission}} is true. This jira proposes to 
 simplify the workflow and make it explicit for the caller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7484) Make FSDirectory#addINode take existing INodes as its parameter

2014-12-22 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7484:

Description: Currently {{FSDirectory#addLastINode}} takes an INodesInPath 
instance, which uses null elements to indicate INodes to create, as its 
parameter. This makes the logic of {{mkdir}} complicated. It would be 
better to let {{addLastINode}}'s INodesInPath parameter contain only existing 
INodes.  (was: {{FSDirMkdirsOp#mkdirsRecursively()}} currently calculates the 
permissions based on whether {{inheritPermission}} is true. This jira proposes 
to simplify the workflow and make it explicit for the caller.)

 Make FSDirectory#addINode take existing INodes as its parameter
 ---

 Key: HDFS-7484
 URL: https://issues.apache.org/jira/browse/HDFS-7484
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Jing Zhao
 Attachments: HDFS-7484.000.patch, HDFS-7484.001.patch, 
 HDFS-7484.002.patch, HDFS-7484.003.patch, HDFS-7484.004.patch, 
 HDFS-7484.005.patch, HDFS-7484.006.patch, HDFS-7484.007.patch, 
 HDFS-7484.008.patch


 Currently {{FSDirectory#addLastINode}} takes an INodesInPath instance, which 
 uses null elements to indicate INodes to create, as its parameter. This 
 makes the logic of {{mkdir}} complicated. It would be better to let 
 {{addLastINode}}'s INodesInPath parameter contain only existing INodes.
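 A toy illustration of the two conventions being contrasted (plain Java, not 
 HDFS code): with the old convention a resolved path carries null slots for 
 components still to be created, while with the proposed one the caller passes 
 only INodes that exist.
 {code}
 import java.util.Arrays;
 import java.util.List;

 class INodesInPathConventionSketch {
   public static void main(String[] args) {
     // Old convention: resolving "/a/b/c" where only "/a" exists yields
     // [a, null, null]; callees must probe for nulls to see what to create.
     List<String> oldStyle = Arrays.asList("a", null, null);
     System.out.println(oldStyle.contains(null)); // true: null checks everywhere

     // Proposed convention: pass only the existing ancestors, plus the
     // component to add, explicitly -- no null elements to reason about.
     addLast(Arrays.asList("a"), "b");
   }

   static void addLast(List<String> existing, String newComponent) {
     System.out.println("append " + newComponent + " under " + existing);
   }
 }
 {code}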



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7539) Namenode can't leave safemode because of Datanodes' IPC socket timeout

2014-12-22 Thread hoelog (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256233#comment-14256233
 ] 

hoelog commented on HDFS-7539:
--

The NN's JVM memory parameter is -Xmx39G, but that seems not to be enough to load 
the fsimage without GC pauses. The DNs continue to reconnect, but they also time 
out, because the default timeout is 1 minute. These events repeat for a while, 
and then the NN doesn't accept any connection attempts at all.

 Namenode can't leave safemode because of Datanodes' IPC socket timeout
 --

 Key: HDFS-7539
 URL: https://issues.apache.org/jira/browse/HDFS-7539
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode
Affects Versions: 2.5.1
 Environment: 1 master, 1 secondary and 128 slaves, each node has x24 
 cores, 48GB memory. fsimage is 4GB.
Reporter: hoelog

 While the namenode is starting, datanodes seem to be waiting for the namenode's 
 response through IPC to register block pools.
 Here is the DN's log:
 {code} 
 2014-12-16 20:28:09,064 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Acknowledging ACTIVE Namenode Block pool 
 BP-877672386-10.114.130.143-1412666752827 (Datanode Uuid 
 2117395f-e034-4b4a-adec-8a28464f4796) service to NN.x.com/10.x.x143:9000 
 {code}
 But the namenode is too busy to respond, and the datanodes hit a socket timeout - 
 the default is 1 minute.
 {code}
 2014-12-16 20:29:09,857 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 IOException in offerService
 java.net.SocketTimeoutException: Call From DN1.x.com/10.x.x.84 to 
 NN.x.com:9000 failed on socket timeout exception: 
 java.net.SocketTimeoutException: 60000 millis timeout while waiting for 
 channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
 local=/10.x.x.84:57924 remote=NN.x.com/10.x.x.143:9000]; For more details 
 see:  http://wiki.apache.org/hadoop/SocketTimeout 
 {code}
 The same events repeat, and eventually the NN drops most connection attempts 
 from the DNs, so the NN can't leave safemode.
 DN's log -
 {code}
 2014-12-16 20:32:25,895 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 IOException in offerService
 java.io.IOException: Failed on local exception: java.io.IOException: Connection 
 reset by peer
 {code}
 There are no troubles in the network, configuration, or servers. I think the NN 
 is too busy to respond to the DNs within a minute. 
 I configured ipc.ping.interval to 15 mins in core-site.xml, and that 
 was helpful for my cluster. 
 {code}
 <property>
   <name>ipc.ping.interval</name>
   <value>900000</value>
 </property>
 {code}
 In my cluster, the namenode took 1 min ~ 5 mins to respond to the DNs' requests.
 It would be helpful if there were a more elegant solution.
 {code}
 2014-12-16 23:28:16,598 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Acknowledging ACTIVE Namenode Block pool 
 BP-877672386-10.x.x.143-1412666752827 (Datanode Uuid 
 c4f7beea-b8e9-404f-bc81-6e87e37263d2) service to NN/10.x.x.143:9000
 2014-12-16 23:31:32,026 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Sent 1 blockreports 2090961 blocks total. Took 1690 msec to generate and 
 193738 msecs for RPC and NN processing.  Got back commands 
 org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@20e68e11
 2014-12-16 23:31:32,026 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Got finalize command for block pool BP-877672386-10.x.x.143-1412666752827
 2014-12-16 23:31:32,032 INFO org.apache.hadoop.util.GSet: Computing capacity 
 for map BlockMap
 2014-12-16 23:31:32,032 INFO org.apache.hadoop.util.GSet: VM type   = 
 64-bit
 2014-12-16 23:31:32,044 INFO org.apache.hadoop.util.GSet: 0.5% max memory 3.6 
 GB = 18.2 MB
 2014-12-16 23:31:32,045 INFO org.apache.hadoop.util.GSet: capacity  = 
 2^21 = 2097152 entries
 2014-12-16 23:31:32,046 INFO 
 org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block 
 Verification Scanner initialized with interval 504 hours for block pool 
 BP-877672386-10.114.130.143-1412666752827
 {code}
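 For reference, the same workaround expressed programmatically (a sketch; 
 ipc.client.ping is assumed to be at its default of true, in which case the 
 IPC client pings every ipc.ping.interval ms instead of failing the call):
 {code}
 import org.apache.hadoop.conf.Configuration;

 class IpcPingIntervalSketch {
   static Configuration withLongPingInterval() {
     Configuration conf = new Configuration();
     // Keep slow NN calls alive with periodic pings rather than hitting
     // a SocketTimeoutException after the default 60000 ms.
     conf.setBoolean("ipc.client.ping", true);
     conf.setLong("ipc.ping.interval", 15 * 60 * 1000); // 15 minutes
     return conf;
   }
 }
 {code}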



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7560) ACLs removed by removeDefaultAcl() will be back after NameNode restart/failover

2014-12-22 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-7560:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

+1 for the patch.  I committed it to trunk and branch-2.  Nice find, Vinay.  
Thank you!

 ACLs removed by removeDefaultAcl() will be back after NameNode 
 restart/failover
 ---

 Key: HDFS-7560
 URL: https://issues.apache.org/jira/browse/HDFS-7560
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.1
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7560-001.patch, HDFS-7560-002.patch


 Default ACLs removed using {{removeDefaultAcl()}} will come back after 
 Namenode restart/switch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7539) Namenode can't leave safemode because of Datanodes' IPC socket timeout

2014-12-22 Thread hoelog (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256249#comment-14256249
 ] 

hoelog commented on HDFS-7539:
--

Yep, DNs continue to reconnect, but eventually the NN doesn't accept any 
connections. I waited for many hours, but the NN couldn't leave safemode even 
once. After setting ipc.ping.interval, only 15 minutes were needed to restart HDFS.

 Namenode can't leave safemode because of Datanodes' IPC socket timeout
 --

 Key: HDFS-7539
 URL: https://issues.apache.org/jira/browse/HDFS-7539
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode
Affects Versions: 2.5.1
 Environment: 1 master, 1 secondary and 128 slaves, each node has x24 
 cores, 48GB memory. fsimage is 4GB.
Reporter: hoelog

 While the namenode is starting, datanodes seem to be waiting for the namenode's 
 response through IPC to register block pools.
 Here is the DN's log:
 {code} 
 2014-12-16 20:28:09,064 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Acknowledging ACTIVE Namenode Block pool 
 BP-877672386-10.114.130.143-1412666752827 (Datanode Uuid 
 2117395f-e034-4b4a-adec-8a28464f4796) service to NN.x.com/10.x.x143:9000 
 {code}
 But the namenode is too busy to respond, and the datanodes hit a socket timeout - 
 the default is 1 minute.
 {code}
 2014-12-16 20:29:09,857 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 IOException in offerService
 java.net.SocketTimeoutException: Call From DN1.x.com/10.x.x.84 to 
 NN.x.com:9000 failed on socket timeout exception: 
 java.net.SocketTimeoutException: 60000 millis timeout while waiting for 
 channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
 local=/10.x.x.84:57924 remote=NN.x.com/10.x.x.143:9000]; For more details 
 see:  http://wiki.apache.org/hadoop/SocketTimeout 
 {code}
 The same events repeat, and eventually the NN drops most connection attempts 
 from the DNs, so the NN can't leave safemode.
 DN's log -
 {code}
 2014-12-16 20:32:25,895 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 IOException in offerService
 java.io.IOException: Failed on local exception: java.io.IOException: Connection 
 reset by peer
 {code}
 There are no troubles in the network, configuration, or servers. I think the NN 
 is too busy to respond to the DNs within a minute. 
 I configured ipc.ping.interval to 15 mins in core-site.xml, and that 
 was helpful for my cluster. 
 {code}
 <property>
   <name>ipc.ping.interval</name>
   <value>900000</value>
 </property>
 {code}
 In my cluster, the namenode took 1 min ~ 5 mins to respond to the DNs' requests.
 It would be helpful if there were a more elegant solution.
 {code}
 2014-12-16 23:28:16,598 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Acknowledging ACTIVE Namenode Block pool 
 BP-877672386-10.x.x.143-1412666752827 (Datanode Uuid 
 c4f7beea-b8e9-404f-bc81-6e87e37263d2) service to NN/10.x.x.143:9000
 2014-12-16 23:31:32,026 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Sent 1 blockreports 2090961 blocks total. Took 1690 msec to generate and 
 193738 msecs for RPC and NN processing.  Got back commands 
 org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@20e68e11
 2014-12-16 23:31:32,026 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Got finalize command for block pool BP-877672386-10.x.x.143-1412666752827
 2014-12-16 23:31:32,032 INFO org.apache.hadoop.util.GSet: Computing capacity 
 for map BlockMap
 2014-12-16 23:31:32,032 INFO org.apache.hadoop.util.GSet: VM type   = 
 64-bit
 2014-12-16 23:31:32,044 INFO org.apache.hadoop.util.GSet: 0.5% max memory 3.6 
 GB = 18.2 MB
 2014-12-16 23:31:32,045 INFO org.apache.hadoop.util.GSet: capacity  = 
 2^21 = 2097152 entries
 2014-12-16 23:31:32,046 INFO 
 org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block 
 Verification Scanner initialized with interval 504 hours for block pool 
 BP-877672386-10.114.130.143-1412666752827
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7560) ACLs removed by removeDefaultAcl() will be back after NameNode restart/failover

2014-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256252#comment-14256252
 ] 

Hudson commented on HDFS-7560:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #6775 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6775/])
HDFS-7560. ACLs removed by removeDefaultAcl() will be back after NameNode 
restart/failover. Contributed by Vinayakumar B. (cnauroth: rev 
2cf90a2c338497a466bbad9e83966033bf14bfb7)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 ACLs removed by removeDefaultAcl() will be back after NameNode 
 restart/failover
 ---

 Key: HDFS-7560
 URL: https://issues.apache.org/jira/browse/HDFS-7560
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.1
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7560-001.patch, HDFS-7560-002.patch


 Default ACLs removed using {{removeDefaultAcl()}} will come back after 
 Namenode restart/switch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7565) NFS gateway UID overflow

2014-12-22 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256259#comment-14256259
 ] 

Yongjun Zhang commented on HDFS-7565:
-

Hi Hari, I think this is the same as HDFS-6361 (and HDFS-7563). I suggest 
getting the HDFS-6361 fix and trying it. Thanks.



 NFS gateway UID overflow
 

 Key: HDFS-7565
 URL: https://issues.apache.org/jira/browse/HDFS-7565
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon

 It appears that my Windows 7 workstation is passing a UID of around 4 billion to 
 the NFS gateway, and the getUserName() method is being passed -2, so it 
 looks like the UID is stored as an int and is overflowing:
 {code}security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:getUserName(358)) - Can't find user name for uid 
 -2. Use default user name nobody{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7484) Make FSDirectory#addINode take existing INodes as its parameter

2014-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256275#comment-14256275
 ] 

Hadoop QA commented on HDFS-7484:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12688703/HDFS-7484.008.patch
  against trunk revision a696fbb.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 5 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.ha.TestZKFailoverControllerStress
  org.apache.hadoop.hdfs.TestDFSMkdirs
  org.apache.hadoop.hdfs.server.balancer.TestBalancer
  org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9113//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9113//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9113//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9113//console

This message is automatically generated.

 Make FSDirectory#addINode take existing INodes as its parameter
 ---

 Key: HDFS-7484
 URL: https://issues.apache.org/jira/browse/HDFS-7484
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Jing Zhao
 Attachments: HDFS-7484.000.patch, HDFS-7484.001.patch, 
 HDFS-7484.002.patch, HDFS-7484.003.patch, HDFS-7484.004.patch, 
 HDFS-7484.005.patch, HDFS-7484.006.patch, HDFS-7484.007.patch, 
 HDFS-7484.008.patch


 Currently {{FSDirectory#addLastINode}} takes an INodesInPath instance, which 
 uses null elements to indicate INodes to create, as its parameter. This 
 makes the logic of {{mkdir}} complicated. It would be better to let 
 {{addLastINode}}'s INodesInPath parameter contain only existing INodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7484) Make FSDirectory#addINode take existing INodes as its parameter

2014-12-22 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7484:

Attachment: HDFS-7484.009.patch

 Make FSDirectory#addINode take existing INodes as its parameter
 ---

 Key: HDFS-7484
 URL: https://issues.apache.org/jira/browse/HDFS-7484
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Jing Zhao
 Attachments: HDFS-7484.000.patch, HDFS-7484.001.patch, 
 HDFS-7484.002.patch, HDFS-7484.003.patch, HDFS-7484.004.patch, 
 HDFS-7484.005.patch, HDFS-7484.006.patch, HDFS-7484.007.patch, 
 HDFS-7484.008.patch, HDFS-7484.009.patch


 Currently {{FSDirectory#addLastINode}} takes an INodesInPath instance, which 
 uses null elements to indicate INodes to create, as its parameter. This 
 makes the logic of {{mkdir}} complicated. It would be better to let 
 {{addLastINode}}'s INodesInPath parameter contain only existing INodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7056) Snapshot support for truncate

2014-12-22 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256350#comment-14256350
 ] 

Konstantin Boudnik commented on HDFS-7056:
--

bq.  I would have if there were three or more values or a potential to have 
more. But it's exactly two. You are right that we cannot change recoverLease() now. 
So it is better to have the same pattern in truncate() to avoid even more 
confusion about why this is done differently in the two cases.

I tend to agree with [~shv] here: suddenly introducing a new contract style 
will be more confusing. Besides, enforcing an enum for just two possible 
return values sounds excessive and unnecessary. It would be totally acceptable if 
the method could return, say, seven different states.

bq. This actually raised a question for me how it will work with rolling 
upgrades. Thinking about it.
Shall we address the rolling upgrade issue in a separate ticket? It seems that 
dragging this out much longer will have a significant impact on patch 
maintenance: we already see multiple iterations of the same patch just because 
of minor changes in trunk.

 Snapshot support for truncate
 -

 Key: HDFS-7056
 URL: https://issues.apache.org/jira/browse/HDFS-7056
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Plamen Jeliazkov
 Attachments: HDFS-3107-HDFS-7056-combined-13.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-7056-13.patch, HDFS-7056.patch, 
 HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, 
 HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, 
 HDFSSnapshotWithTruncateDesign.docx


 The implementation of truncate in HDFS-3107 does not allow truncating files that 
 are in a snapshot. It is desirable to be able to truncate a file and still keep 
 its old state in the snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7456) De-duplicate AclFeature instances with same AclEntries to reduce memory footprint of NameNode

2014-12-22 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-7456:

Target Version/s: 2.7.0
Hadoop Flags: Reviewed

In the v008 patch, I spotted one minor issue.  There is a test assertion with a 
message about doubling the count, but really it's tripling the count:

{code}
assertEquals("ReferenceCount After Restart should be doubled",
    before * 3, aclFeature.getRefCount());
{code}

I'm +1 for the patch after correcting that and getting one more Jenkins run.  
Vinay, please feel free to commit after that.

I also did some additional manual testing on this.  I set a default ACL on a 
directory.  Then, I created 100 sub-directories and 100 files in that 
directory.  As expected, after a full GC, the heap histogram showed 201 
instances of {{AclFeature}}.  I then applied the patch, restarted the NameNode, 
triggered a full GC, and checked the heap histogram again.  This time, I saw 
only 3 instances of {{AclFeature}}: 1 for the first directory, 1 shared across 
all child directories, and 1 shared across all child files.
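
Conceptually the de-duplication behaves like a reference-counted interning 
table; a generic sketch of that idea (hypothetical names, not the code in the 
patch):

{code}
import java.util.HashMap;
import java.util.Map;

class RefCountedInternTableSketch<T> {
  private static final class Entry<V> {
    final V value;
    int refCount;
    Entry(V v) { value = v; }
  }

  private final Map<T, Entry<T>> table = new HashMap<>();

  // Return the canonical instance for 'value', creating it on first use;
  // all inodes with equal ACL entry lists end up sharing one object.
  synchronized T acquire(T value) {
    Entry<T> e = table.computeIfAbsent(value, Entry::new);
    e.refCount++;
    return e.value;
  }

  // Drop one reference; the canonical instance is forgotten when unused.
  synchronized void release(T value) {
    Entry<T> e = table.get(value);
    if (e != null && --e.refCount == 0) {
      table.remove(value);
    }
  }
}
{code}

With such a table, the heap histogram above (3 shared instances for 201 inodes) 
is exactly the behavior one would expect.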

Nice work, Vinay!  Thank you.

 De-duplicate AclFeature instances with same AclEntries to reduce memory 
 footprint of NameNode
 -

 Key: HDFS-7456
 URL: https://issues.apache.org/jira/browse/HDFS-7456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-7456-001.patch, HDFS-7456-002.patch, 
 HDFS-7456-003.patch, HDFS-7456-004.patch, HDFS-7456-005.patch, 
 HDFS-7456-006.patch, HDFS-7456-007.patch, HDFS-7456-008.patch


 As discussed  in HDFS-7454 
 [here|https://issues.apache.org/jira/browse/HDFS-7454?focusedCommentId=14229454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14229454],
  de-duplication of {{AclFeature}} helps in reducing the memory footprint of 
 the namenode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7056) Snapshot support for truncate

2014-12-22 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256374#comment-14256374
 ] 

Jing Zhao commented on HDFS-7056:
-

Some further comments:
# In {{FileDiffList#combineAndCollectSnapshotBlocks}}, why do we pass {{null}} 
to {{collectBlocksAndClear}}? In the case that the file has been deleted and we 
are deleting the last snapshot, we need to delete the whole INode and thus should 
pass a non-null INode list to {{collectBlocksAndClear}}. We should also add unit 
tests for this if we can confirm it is an issue.
{code}
+BlockInfo[] removedBlocks = removed.getBlocks();
+if(removedBlocks == null) {
+  FileWithSnapshotFeature sf = file.getFileWithSnapshotFeature();
+  assert sf != null : "FileWithSnapshotFeature is null";
+  if(sf.isCurrentFileDeleted())
+sf.collectBlocksAndClear(file, collectedBlocks, null);
+  return;
+}
{code}
# The semantics of {{findEarlierSnapshotBlocks}} are not easy to follow. It 
looks like when {{snapshotId}} is {{Snapshot.CURRENT_STATE_ID}} the function 
uses exclusive semantics, while for other values it uses inclusive semantics. 
We need to make this consistent. It can also be optimized 
similarly to {{findLaterSnapshotBlocks}}.
# Minor: in the current patch {{findLaterSnapshotBlocks}} is always coupled 
with an extra null check, and if it returns null {{getBlocks}} is called. This 
extra check/call can be folded into {{findLaterSnapshotBlocks}} itself (see the 
sketch after this list).
{code}
+snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
+return (snapshotBlocks == null) ? getBlocks() : snapshotBlocks;
{code}
# bq. Also bumped up the NameNode layout version.
Could you explain in general why we need to bump up the NN layout version here? 
The protobuf-based fsimage should be able to handle its own compatibility. One 
benefit of bumping the layout version is that it prevents rolling downgrade, so 
that an fsimage produced by snapshot+truncate cannot be directly loaded by 
an old-version jar.
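
For item 3, a sketch of the folded-in fallback (method placement and the 
internal lookup are approximations, not the actual patch code):
{code}
// Let the helper own the null fallback so every caller loses the extra check.
public BlockInfo[] findLaterSnapshotBlocks(int snapshotId) {
  BlockInfo[] blocks = searchDiffsLaterThan(snapshotId); // hypothetical lookup
  return (blocks == null) ? getBlocks() : blocks;        // fallback built in
}
{code}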

 Snapshot support for truncate
 -

 Key: HDFS-7056
 URL: https://issues.apache.org/jira/browse/HDFS-7056
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Plamen Jeliazkov
 Attachments: HDFS-3107-HDFS-7056-combined-13.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-7056-13.patch, HDFS-7056.patch, 
 HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, 
 HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, 
 HDFSSnapshotWithTruncateDesign.docx


 Implementation of truncate in HDFS-3107 does not allow truncating files that 
 are in a snapshot. It is desirable to be able to truncate a file and still keep 
 its old state in the snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3107) HDFS truncate

2014-12-22 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-3107:
--
Attachment: HDFS-3107-14.patch

Fixed findbugs warning.
Updated to current trunk.

 HDFS truncate
 -

 Key: HDFS-3107
 URL: https://issues.apache.org/jira/browse/HDFS-3107
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Lei Chang
Assignee: Plamen Jeliazkov
 Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, HDFS_truncate.pdf, 
 HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, 
 HDFS_truncate_semantics_Mar21.pdf, editsStored, editsStored.xml

   Original Estimate: 1,344h
  Remaining Estimate: 1,344h

 Systems with transaction support often need to undo changes made to the 
 underlying storage when a transaction is aborted. Currently HDFS does not 
 support truncate (a standard Posix operation) which is a reverse operation of 
 append, which makes upper layer applications use ugly workarounds (such as 
 keeping track of the discarded byte range per file in a separate metadata 
 store, and periodically running a vacuum process to rewrite compacted files) 
 to overcome this limitation of HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7561) TestFetchImage should write fetched-image-dir under target.

2014-12-22 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256381#comment-14256381
 ] 

Akira AJISAKA commented on HDFS-7561:
-

Hi [~shv], how about the below change?
{code}
- System.getProperty("build.test.dir"), "fetched-image-dir");
+ System.getProperty("test.build.dir"), "target/fetched-image-dir");
{code}
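
Restoring the context around that diff, the field would presumably end up 
looking like this (a sketch; the field name and surrounding test code are 
paraphrased, not quoted from the patch):
{code}
private static final File FETCHED_IMAGE_DIR = new File(
    System.getProperty("test.build.dir"), "target/fetched-image-dir");
{code}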

 TestFetchImage should write fetched-image-dir under target.
 ---

 Key: HDFS-7561
 URL: https://issues.apache.org/jira/browse/HDFS-7561
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
 Attachments: HDFS-7561-001.txt


 {{TestFetchImage}} creates directory {{fetched-image-dir}} under hadoop-hdfs, 
 which is then never cleaned up. The problem is that it uses build.test.dir 
 property, which seems to be invalid. Probably should use 
 {{MiniDFSCluster.getBaseDirectory()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7521) Refactor DN state management

2014-12-22 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256395#comment-14256395
 ] 

Zhe Zhang commented on HDFS-7521:
-

Thanks for checking, [~mingma]. In this case the current state machine design 
looks good to me. I look forward to seeing this work move forward!

 Refactor DN state management
 

 Key: HDFS-7521
 URL: https://issues.apache.org/jira/browse/HDFS-7521
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
 Attachments: DNStateMachines.png, HDFS-7521.patch


 There are two aspects with respect to DN state management in the NN.
 * State machine management within the active NN
 The NN maintains state for each data node regarding whether it is running or 
 being decommissioned, but the state machine isn't well defined. We have dealt 
 with some corner-case bugs in this area. It would be useful to refactor 
 the code around a clear state machine definition that defines events, available 
 states, and actions for state transitions. That has these benefits.
 ** It makes it easy to define correctness of DN state management. Currently some 
 of the state transitions aren't defined in the code. For example, if admins 
 remove a node from the include host file while the node is being decommissioned, 
 it will be transitioned to DEAD and DECOMM_IN_PROGRESS. That might not be the 
 intention. With a state machine definition, we can identify this case.
 ** It makes it easy to add new DN states later. For example, people have 
 discussed a new “maintenance” state for DNs to support the scenario where admins 
 need to take a machine/rack down for 30 minutes for repair.
 We can refactor the DN state handling around a clear state machine definition 
 based on YARN's state machine components (a rough sketch follows below).
 * State machine consistency between active and standby NN
 Another dimension of state machine management is consistency across NN pairs. 
 We have dealt with bugs due to differing live-node sets between the active and 
 standby NN. The current design is to have each NN manage its own state based on 
 the events it receives. For example, DNs send heartbeats to both NNs, and 
 admins issue decommission commands to both NNs. An alternative design 
 approach could be to have ZK manage the state.
 Thoughts?
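
To illustrate the first point, an explicit (state, event) transition table 
might look roughly like this (all state, event, and class names here are 
invented for illustration; YARN's state machine components would be the actual 
building block):
{code}
// Rough sketch: undefined (state, event) pairs fail loudly instead of
// silently producing combinations like DEAD + DECOMM_IN_PROGRESS.
enum DnState { LIVE, DECOMM_IN_PROGRESS, DECOMMISSIONED, DEAD }
enum DnEvent { DECOMM_REQUESTED, DECOMM_COMPLETED, HEARTBEAT_EXPIRED }

class DnStateMachine {
  private DnState state = DnState.LIVE;

  synchronized DnState handle(DnEvent event) {
    switch (state) {
      case LIVE:
        if (event == DnEvent.DECOMM_REQUESTED) return state = DnState.DECOMM_IN_PROGRESS;
        if (event == DnEvent.HEARTBEAT_EXPIRED) return state = DnState.DEAD;
        break;
      case DECOMM_IN_PROGRESS:
        if (event == DnEvent.DECOMM_COMPLETED) return state = DnState.DECOMMISSIONED;
        if (event == DnEvent.HEARTBEAT_EXPIRED) return state = DnState.DEAD;
        break;
      default:
        break;
    }
    throw new IllegalStateException(event + " is not valid in state " + state);
  }
}
{code}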



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7556) HardLink.java should use the jdk7 createLink method

2014-12-22 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HDFS-7556:
---

Assignee: Akira AJISAKA

 HardLink.java should use the jdk7 createLink method
 ---

 Key: HDFS-7556
 URL: https://issues.apache.org/jira/browse/HDFS-7556
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA

 Now that we are using jdk7, HardLink.java should use the jdk7 createLink 
 method rather than our shell commands or JNI methods.
 Note that we cannot remove all of the JNI / shell commands unless we remove 
 the code that checks the link count, something that jdk7 doesn't 
 provide (at least, I don't think it does).
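
For reference, a minimal sketch of the jdk7-based approach (the wrapper method 
mirrors HardLink's style but is an assumption, not the attached patch):
{code}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class HardLinkSketch {
  // One jdk7 API call replaces the per-OS shell commands and JNI paths.
  public static void createHardLink(File file, File linkName) throws IOException {
    Files.createLink(linkName.toPath(), file.toPath());
  }
}
{code}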



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7561) TestFetchImage should write fetched-image-dir under target.

2014-12-22 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256472#comment-14256472
 ] 

Konstantin Shvachko commented on HDFS-7561:
---

Sounds right.

 TestFetchImage should write fetched-image-dir under target.
 ---

 Key: HDFS-7561
 URL: https://issues.apache.org/jira/browse/HDFS-7561
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
 Attachments: HDFS-7561-001.txt


 {{TestFetchImage}} creates directory {{fetched-image-dir}} under hadoop-hdfs, 
 which is then never cleaned up. The problem is that it uses build.test.dir 
 property, which seems to be invalid. Probably should use 
 {{MiniDFSCluster.getBaseDirectory()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7561) TestFetchImage should write fetched-image-dir under target.

2014-12-22 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256474#comment-14256474
 ] 

Akira AJISAKA commented on HDFS-7561:
-

Thanks. Hi [~xieliang007], would you update the patch to include the above fix?
I'm +1 if that is addressed.

 TestFetchImage should write fetched-image-dir under target.
 ---

 Key: HDFS-7561
 URL: https://issues.apache.org/jira/browse/HDFS-7561
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
 Attachments: HDFS-7561-001.txt


 {{TestFetchImage}} creates directory {{fetched-image-dir}} under hadoop-hdfs, 
 which is then never cleaned up. The problem is that it uses build.test.dir 
 property, which seems to be invalid. Probably should use 
 {{MiniDFSCluster.getBaseDirectory()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7556) HardLink.java should use the jdk7 createLink method

2014-12-22 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-7556:

Attachment: HDFS-7556-001.patch

Attaching a patch.

 HardLink.java should use the jdk7 createLink method
 ---

 Key: HDFS-7556
 URL: https://issues.apache.org/jira/browse/HDFS-7556
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
 Attachments: HDFS-7556-001.patch


 Now that we are using jdk7, HardLink.java should use the jdk7 createLink 
 method rather than our shell commands or JNI methods.
 Note that we cannot remove all of the JNI / shell commands unless we remove 
 the code that checks the link count, something that jdk7 doesn't 
 provide (at least, I don't think it does).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7556) HardLink.java should use the jdk7 createLink method

2014-12-22 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-7556:

Status: Patch Available  (was: Open)

 HardLink.java should use the jdk7 createLink method
 ---

 Key: HDFS-7556
 URL: https://issues.apache.org/jira/browse/HDFS-7556
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
 Attachments: HDFS-7556-001.patch


 Now that we are using jdk7, HardLink.java should use the jdk7 createLink 
 method rather than our shell commands or JNI methods.
 Note that we cannot remove all of the JNI / shell commands unless we remove 
 the code that checks the link count, something that jdk7 doesn't 
 provide (at least, I don't think it does).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7484) Make FSDirectory#addINode take existing INodes as its parameter

2014-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256528#comment-14256528
 ] 

Hadoop QA commented on HDFS-7484:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12688738/HDFS-7484.009.patch
  against trunk revision 2cf90a2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9114//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9114//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9114//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9114//console

This message is automatically generated.

 Make FSDirectory#addINode take existing INodes as its parameter
 ---

 Key: HDFS-7484
 URL: https://issues.apache.org/jira/browse/HDFS-7484
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Jing Zhao
 Attachments: HDFS-7484.000.patch, HDFS-7484.001.patch, 
 HDFS-7484.002.patch, HDFS-7484.003.patch, HDFS-7484.004.patch, 
 HDFS-7484.005.patch, HDFS-7484.006.patch, HDFS-7484.007.patch, 
 HDFS-7484.008.patch, HDFS-7484.009.patch


 Currently {{FSDirectory#addLastINode}} takes as its parameter an INodesInPath 
 instance, which uses null elements to indicate INodes to be created. This 
 makes the logic of {{mkdir}} complicated. It would be better to let 
 {{addLastINode}}'s INodesInPath parameter contain only existing INodes.
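
Loosely, the change in calling convention can be pictured as follows (purely 
illustrative pseudo-code, not the patch's actual signatures):
{code}
// Before: the resolved path carries null slots for the components to create.
//   Creating /a/b/c/d when only /a/b exists:
//   INodesInPath iip = [a, b, null, null];    // callee must locate the nulls
//   addLastINode(iip, newINode, ...);
// After: the parameter holds only existing INodes, and the callee appends:
//   INodesInPath existing = [a, b];
//   INodesInPath result = addLastINode(existing, newINode, ...);
{code}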



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7456) De-duplicate AclFeature instances with same AclEntries do reduce memory footprint of NameNode

2014-12-22 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-7456:

Attachment: HDFS-7456-009.patch

Thanks [~cnauroth] for the additional manual verification. Here is the updated 
patch.
After the Jenkins run I will commit this.

 De-duplicate AclFeature instances with same AclEntries do reduce memory 
 footprint of NameNode
 -

 Key: HDFS-7456
 URL: https://issues.apache.org/jira/browse/HDFS-7456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-7456-001.patch, HDFS-7456-002.patch, 
 HDFS-7456-003.patch, HDFS-7456-004.patch, HDFS-7456-005.patch, 
 HDFS-7456-006.patch, HDFS-7456-007.patch, HDFS-7456-008.patch, 
 HDFS-7456-009.patch


 As discussed  in HDFS-7454 
 [here|https://issues.apache.org/jira/browse/HDFS-7454?focusedCommentId=14229454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14229454],
  de-duplication of {{AclFeature}} helps in reducing the memory footprint of 
 the namenode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7560) ACLs removed by removeDefaultAcl() will be back after NameNode restart/failover

2014-12-22 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256564#comment-14256564
 ] 

Vinayakumar B commented on HDFS-7560:
-

Thanks a lot [~cnauroth] for the review and commit.

 ACLs removed by removeDefaultAcl() will be back after NameNode 
 restart/failover
 ---

 Key: HDFS-7560
 URL: https://issues.apache.org/jira/browse/HDFS-7560
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.1
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7560-001.patch, HDFS-7560-002.patch


 Default ACLs removed using {{removeDefaultAcl()}} will come back after 
 Namenode restart/switch.
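
For context, a paraphrased repro sketch using the public FileSystem ACL API 
(not the actual regression test from the patch):
{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;

class ReproSketch {
  static void repro(FileSystem fs, Path path, List<AclEntry> aclSpecWithDefaults)
      throws IOException {
    fs.setAcl(path, aclSpecWithDefaults);  // aclSpec includes default: entries
    fs.removeDefaultAcl(path);             // defaults removed from the live NN
    // ... restart the NameNode, or fail over to the standby ...
    fs.getAclStatus(path);                 // before the fix, defaults reappear
  }
}
{code}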



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7556) HardLink.java should use the jdk7 createLink method

2014-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256568#comment-14256568
 ] 

Hadoop QA commented on HDFS-7556:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12688775/HDFS-7556-001.patch
  against trunk revision fdf042d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9115//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9115//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9115//console

This message is automatically generated.

 HardLink.java should use the jdk7 createLink method
 ---

 Key: HDFS-7556
 URL: https://issues.apache.org/jira/browse/HDFS-7556
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
 Attachments: HDFS-7556-001.patch


 Now that we are using jdk7, HardLink.java should use the jdk7 createLink 
 method rather than our shell commands or JNI methods.
 Note that we cannot remove all of the JNI / shell commands unless we remove 
 the code that checks the link count, something that jdk7 doesn't 
 provide (at least, I don't think it does).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7566) Remove obsolete entries from hdfs-default.xml

2014-12-22 Thread Ray Chiang (JIRA)
Ray Chiang created HDFS-7566:


 Summary: Remove obsolete entries from hdfs-default.xml
 Key: HDFS-7566
 URL: https://issues.apache.org/jira/browse/HDFS-7566
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Ray Chiang
Assignee: Ray Chiang


So far, I've found these five properties which may be obsolete in 
hdfs-default.xml:

- dfs.https.enable
- dfs.namenode.edits.journal-plugin.qjournal
- dfs.namenode.logging.level
- dfs.ha.namenodes.EXAMPLENAMESERVICE
  + Should this be kept in the .xml file?
- dfs.support.append
  + Removed with HDFS-6246

I'd like to get feedback about the state of any of the above properties.

This is the HDFS equivalent of MAPREDUCE-6057 and YARN-2460.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7456) De-duplicate AclFeature instances with same AclEntries do reduce memory footprint of NameNode

2014-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256676#comment-14256676
 ] 

Hadoop QA commented on HDFS-7456:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12688786/HDFS-7456-009.patch
  against trunk revision fdf042d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestSymlinkHdfsFileContext

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9116//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9116//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9116//console

This message is automatically generated.

 De-duplicate AclFeature instances with same AclEntries do reduce memory 
 footprint of NameNode
 -

 Key: HDFS-7456
 URL: https://issues.apache.org/jira/browse/HDFS-7456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-7456-001.patch, HDFS-7456-002.patch, 
 HDFS-7456-003.patch, HDFS-7456-004.patch, HDFS-7456-005.patch, 
 HDFS-7456-006.patch, HDFS-7456-007.patch, HDFS-7456-008.patch, 
 HDFS-7456-009.patch


 As discussed  in HDFS-7454 
 [here|https://issues.apache.org/jira/browse/HDFS-7454?focusedCommentId=14229454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14229454],
  de-duplication of {{AclFeature}} helps in reducing the memory footprint of 
 the namenode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7456) De-duplicate AclFeature instances with same AclEntries do reduce memory footprint of NameNode

2014-12-22 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256678#comment-14256678
 ] 

Vinayakumar B commented on HDFS-7456:
-

The test failures and findbugs warnings are not related to the patch.
I will commit the patch soon.

 De-duplicate AclFeature instances with same AclEntries do reduce memory 
 footprint of NameNode
 -

 Key: HDFS-7456
 URL: https://issues.apache.org/jira/browse/HDFS-7456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-7456-001.patch, HDFS-7456-002.patch, 
 HDFS-7456-003.patch, HDFS-7456-004.patch, HDFS-7456-005.patch, 
 HDFS-7456-006.patch, HDFS-7456-007.patch, HDFS-7456-008.patch, 
 HDFS-7456-009.patch


 As discussed  in HDFS-7454 
 [here|https://issues.apache.org/jira/browse/HDFS-7454?focusedCommentId=14229454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14229454],
  de-duplication of {{AclFeature}} helps in reducing the memory footprint of 
 the namenode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7484) Make FSDirectory#addINode take existing INodes as its parameter

2014-12-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256685#comment-14256685
 ] 

Haohui Mai commented on HDFS-7484:
--

Thanks for working on this. The patch simplifies the code of {{mkdirs()}} and 
{{rename()}}, making them much easier to follow. +1.

 Make FSDirectory#addINode take existing INodes as its parameter
 ---

 Key: HDFS-7484
 URL: https://issues.apache.org/jira/browse/HDFS-7484
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Jing Zhao
 Attachments: HDFS-7484.000.patch, HDFS-7484.001.patch, 
 HDFS-7484.002.patch, HDFS-7484.003.patch, HDFS-7484.004.patch, 
 HDFS-7484.005.patch, HDFS-7484.006.patch, HDFS-7484.007.patch, 
 HDFS-7484.008.patch, HDFS-7484.009.patch


 Currently {{FSDirectory#addLastINode}} takes as its parameter an INodesInPath 
 instance, which uses null elements to indicate INodes to be created. This 
 makes the logic of {{mkdir}} complicated. It would be better to let 
 {{addLastINode}}'s INodesInPath parameter contain only existing INodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7456) De-duplicate AclFeature instances with same AclEntries do reduce memory footprint of NameNode

2014-12-22 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256686#comment-14256686
 ] 

Vinayakumar B commented on HDFS-7456:
-

Thanks [~cnauroth] for all the reviews and wonderful suggestions.
Committed v009 patch to trunk and branch-2.

 De-duplicate AclFeature instances with same AclEntries do reduce memory 
 footprint of NameNode
 -

 Key: HDFS-7456
 URL: https://issues.apache.org/jira/browse/HDFS-7456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-7456-001.patch, HDFS-7456-002.patch, 
 HDFS-7456-003.patch, HDFS-7456-004.patch, HDFS-7456-005.patch, 
 HDFS-7456-006.patch, HDFS-7456-007.patch, HDFS-7456-008.patch, 
 HDFS-7456-009.patch


 As discussed  in HDFS-7454 
 [here|https://issues.apache.org/jira/browse/HDFS-7454?focusedCommentId=14229454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14229454],
  de-duplication of {{AclFeature}} helps in reducing the memory footprint of 
 the namenode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7456) De-duplicate AclFeature instances with same AclEntries do reduce memory footprint of NameNode

2014-12-22 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-7456:

   Resolution: Fixed
Fix Version/s: 2.7.0
   Status: Resolved  (was: Patch Available)

 De-duplicate AclFeature instances with same AclEntries do reduce memory 
 footprint of NameNode
 -

 Key: HDFS-7456
 URL: https://issues.apache.org/jira/browse/HDFS-7456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Fix For: 2.7.0

 Attachments: HDFS-7456-001.patch, HDFS-7456-002.patch, 
 HDFS-7456-003.patch, HDFS-7456-004.patch, HDFS-7456-005.patch, 
 HDFS-7456-006.patch, HDFS-7456-007.patch, HDFS-7456-008.patch, 
 HDFS-7456-009.patch


 As discussed  in HDFS-7454 
 [here|https://issues.apache.org/jira/browse/HDFS-7454?focusedCommentId=14229454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14229454],
  de-duplication of {{AclFeature}} helps in reducing the memory footprint of 
 the namenode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7456) De-duplicate AclFeature instances with same AclEntries do reduce memory footprint of NameNode

2014-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256691#comment-14256691
 ] 

Hudson commented on HDFS-7456:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6778/])
HDFS-7456. De-duplicate AclFeature instances with same AclEntries do reduce 
memory footprint of NameNode (Contributed by Vinayakumar B) (vinayakumarb: rev 
50ae1a6664a92619aa683d2a864d0da9fb4af026)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeWithAdditionalFields.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestAclWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeAcl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFSAcl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/ReferenceCountMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileContextAcl.java


 De-duplicate AclFeature instances with same AclEntries do reduce memory 
 footprint of NameNode
 -

 Key: HDFS-7456
 URL: https://issues.apache.org/jira/browse/HDFS-7456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Fix For: 2.7.0

 Attachments: HDFS-7456-001.patch, HDFS-7456-002.patch, 
 HDFS-7456-003.patch, HDFS-7456-004.patch, HDFS-7456-005.patch, 
 HDFS-7456-006.patch, HDFS-7456-007.patch, HDFS-7456-008.patch, 
 HDFS-7456-009.patch


 As discussed  in HDFS-7454 
 [here|https://issues.apache.org/jira/browse/HDFS-7454?focusedCommentId=14229454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14229454],
  de-duplication of {{AclFeature}} helps in reducing the memory footprint of 
 the namenode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7484) Make FSDirectory#addINode take existing INodes as its parameter

2014-12-22 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7484:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Haohui for the review. I've committed this to trunk and branch-2.

 Make FSDirectory#addINode take existing INodes as its parameter
 ---

 Key: HDFS-7484
 URL: https://issues.apache.org/jira/browse/HDFS-7484
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Jing Zhao
 Fix For: 2.7.0

 Attachments: HDFS-7484.000.patch, HDFS-7484.001.patch, 
 HDFS-7484.002.patch, HDFS-7484.003.patch, HDFS-7484.004.patch, 
 HDFS-7484.005.patch, HDFS-7484.006.patch, HDFS-7484.007.patch, 
 HDFS-7484.008.patch, HDFS-7484.009.patch


 Currently {{FSDirectory#addLastINode}} takes as its parameter an INodesInPath 
 instance, which uses null elements to indicate INodes to be created. This 
 makes the logic of {{mkdir}} complicated. It would be better to let 
 {{addLastINode}}'s INodesInPath parameter contain only existing INodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7484) Make FSDirectory#addINode take existing INodes as its parameter

2014-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256695#comment-14256695
 ] 

Hudson commented on HDFS-7484:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6779 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6779/])
HDFS-7484. Make FSDirectory#addINode take existing INodes as its parameter. 
Contributed by Jing Zhao. (jing9: rev 5caebbae8c2fc9ba2e32384657aee21641a1a6d0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/SymlinkBaseTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSymlinkOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


 Make FSDirectory#addINode take existing INodes as its parameter
 ---

 Key: HDFS-7484
 URL: https://issues.apache.org/jira/browse/HDFS-7484
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Jing Zhao
 Fix For: 2.7.0

 Attachments: HDFS-7484.000.patch, HDFS-7484.001.patch, 
 HDFS-7484.002.patch, HDFS-7484.003.patch, HDFS-7484.004.patch, 
 HDFS-7484.005.patch, HDFS-7484.006.patch, HDFS-7484.007.patch, 
 HDFS-7484.008.patch, HDFS-7484.009.patch


 Currently {{FSDirectory#addLastINode}} takes as its parameter an INodesInPath 
 instance, which uses null elements to indicate INodes to be created. This 
 makes the logic of {{mkdir}} complicated. It would be better to let 
 {{addLastINode}}'s INodesInPath parameter contain only existing INodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7556) HardLink.java should use the jdk7 createLink method

2014-12-22 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256704#comment-14256704
 ] 

Akira AJISAKA commented on HDFS-7556:
-

These warnings are unrelated to the patch.

 HardLink.java should use the jdk7 createLink method
 ---

 Key: HDFS-7556
 URL: https://issues.apache.org/jira/browse/HDFS-7556
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
 Attachments: HDFS-7556-001.patch


 Now that we are using jdk7, HardLink.java should use the jdk7 createLink 
 method rather than our shell commands or JNI methods.
 Note that we cannot remove all of the JNI / shell commands unless we remove 
 the code that checks the link count, something that jdk7 doesn't 
 provide (at least, I don't think it does).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7561) TestFetchImage should write fetched-image-dir under target.

2014-12-22 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HDFS-7561:

Attachment: HDFS-7561-002.txt

 TestFetchImage should write fetched-image-dir under target.
 ---

 Key: HDFS-7561
 URL: https://issues.apache.org/jira/browse/HDFS-7561
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
 Attachments: HDFS-7561-001.txt, HDFS-7561-002.txt


 {{TestFetchImage}} creates directory {{fetched-image-dir}} under hadoop-hdfs, 
 which is then never cleaned up. The problem is that it uses build.test.dir 
 property, which seems to be invalid. Probably should use 
 {{MiniDFSCluster.getBaseDirectory()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7561) TestFetchImage should write fetched-image-dir under target.

2014-12-22 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie reassigned HDFS-7561:
---

Assignee: Liang Xie

 TestFetchImage should write fetched-image-dir under target.
 ---

 Key: HDFS-7561
 URL: https://issues.apache.org/jira/browse/HDFS-7561
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
Assignee: Liang Xie
 Attachments: HDFS-7561-001.txt, HDFS-7561-002.txt


 {{TestFetchImage}} creates directory {{fetched-image-dir}} under hadoop-hdfs, 
 which is then never cleaned up. The problem is that it uses build.test.dir 
 property, which seems to be invalid. Probably should use 
 {{MiniDFSCluster.getBaseDirectory()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)