[jira] [Commented] (HADOOP-10972) Native Libraries Guide contains mis-spelt build line

2014-08-18 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100311#comment-14100311
 ] 

Akira AJISAKA commented on HADOOP-10972:


Thanks [~PKRoma] for the report and the patch.

 Native Libraries Guide contains mis-spelt build line
 

 Key: HADOOP-10972
 URL: https://issues.apache.org/jira/browse/HADOOP-10972
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Peter Klavins
  Labels: documentation, newbie
 Fix For: 3.0.0

 Attachments: HADOOP-10972.patch


 The Native Libraries Guide mis-spells the define 'skipTests' with a lowercase 
 't' in the build line. The correct build line is:
 {code:none}
 $ mvn package -Pdist,native -DskipTests -Dtar
 {code}
 Patch is to trunk, but is also valid for released versions 2.2.0, 2.3.0, 
 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10972) Native Libraries Guide contains mis-spelt build line

2014-08-18 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10972:
---

Fix Version/s: (was: 3.0.0)
 Target Version/s: 3.0.0, 2.6.0  (was: 3.0.0)
Affects Version/s: 2.3.0
   Status: Patch Available  (was: Open)

 Native Libraries Guide contains mis-spelt build line
 

 Key: HADOOP-10972
 URL: https://issues.apache.org/jira/browse/HADOOP-10972
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.3.0, 3.0.0
Reporter: Peter Klavins
  Labels: documentation, newbie
 Attachments: HADOOP-10972.patch


 The Native Libraries Guide mis-spells the define 'skipTests' with a lowercase 
 't' in the build line. The correct build line is:
 {code:none}
 $ mvn package -Pdist,native -DskipTests -Dtar
 {code}
 Patch is to trunk, but is also valid for released versions 2.2.0, 2.3.0, 
 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10972) Native Libraries Guide contains mis-spelt build line

2014-08-18 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100312#comment-14100312
 ] 

Akira AJISAKA commented on HADOOP-10972:


Looks good to me, +1 (non-binding) pending Jenkins.

 Native Libraries Guide contains mis-spelt build line
 

 Key: HADOOP-10972
 URL: https://issues.apache.org/jira/browse/HADOOP-10972
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.3.0
Reporter: Peter Klavins
  Labels: documentation, newbie
 Attachments: HADOOP-10972.patch


 The Native Libraries Guide mis-spells the define 'skipTests' with a lowercase 
 't' in the build line. The correct build line is:
 {code:none}
 $ mvn package -Pdist,native -DskipTests -Dtar
 {code}
 Patch is to trunk, but is also valid for released versions 2.2.0, 2.3.0, 
 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9913) Document time unit to RpcMetrics.java

2014-08-18 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9913:
--

Attachment: HADOOP-9913.3.patch

Thanks [~aw] and [~ozawa] for the comments. Rebased the patch.

 Document time unit to RpcMetrics.java
 -

 Key: HADOOP-9913
 URL: https://issues.apache.org/jira/browse/HADOOP-9913
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 3.0.0, 2.1.0-beta
 Environment: trunk
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9913.2.patch, HADOOP-9913.3.patch, 
 HADOOP-9913.patch


 In o.a.h.ipc.metrics.RpcMetrics.java, metrics are declared as follows:
 {code}
@Metric("Queue time") MutableRate rpcQueueTime;
@Metric("Processsing time") MutableRate rpcProcessingTime;
 {code}
 Since some users may be confused about which unit (sec or msec) is used, the 
 units should be documented.
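 A minimal, hypothetical sketch of how the unit could be stated in the annotation 
 text (assuming the rates above are recorded in milliseconds; this is not the 
 attached patch):
 {code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Illustrative only: put the time unit into the @Metric description text.
class RpcMetricsUnitSketch {
  @Metric("Queue time in milliseconds") MutableRate rpcQueueTime;
  @Metric("Processing time in milliseconds") MutableRate rpcProcessingTime;
}
 {code}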



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9913) Document time unit to RpcMetrics.java

2014-08-18 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9913:
--

Target Version/s: 3.0.0, 2.6.0  (was: 3.0.0)
  Status: Patch Available  (was: Open)

 Document time unit to RpcMetrics.java
 -

 Key: HADOOP-9913
 URL: https://issues.apache.org/jira/browse/HADOOP-9913
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 2.1.0-beta, 3.0.0
 Environment: trunk
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9913.2.patch, HADOOP-9913.3.patch, 
 HADOOP-9913.patch


 In o.a.h.ipc.metrics.RpcMetrics.java, metrics are declared as follows:
 {code}
@Metric("Queue time") MutableRate rpcQueueTime;
@Metric("Processsing time") MutableRate rpcProcessingTime;
 {code}
 Since some users may be confused about which unit (sec or msec) is used, the 
 units should be documented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10948) SwiftNativeFileSystem's directory is incompatible with Swift and Horizon

2014-08-18 Thread Kazuki OIKAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kazuki OIKAWA updated HADOOP-10948:
---

Fix Version/s: 3.0.0
Affects Version/s: (was: 2.4.1)
   Status: Patch Available  (was: Open)

 SwiftNativeFileSystem's directory is incompatible with Swift and Horizon
 

 Key: HADOOP-10948
 URL: https://issues.apache.org/jira/browse/HADOOP-10948
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Affects Versions: 3.0.0
Reporter: Kazuki OIKAWA
 Fix For: 3.0.0

 Attachments: HADOOP-10948.patch


 SwiftNativeFileSystem's directory representation is a zero-byte file,
 but in Swift / Horizon the directory representation is a trailing slash.
 This incompatibility causes the following issues (see the sketch below):
 * SwiftNativeFileSystem can't see a pseudo-directory made by OpenStack Horizon
 * Swift/Horizon can't see a pseudo-directory made by SwiftNativeFileSystem; 
 instead, Swift/Horizon sees a zero-byte file in place of that pseudo-directory.
 * SwiftNativeFileSystem can't see a file if there is no intermediate 
 pseudo-directory object.
 * SwiftNativeFileSystem makes two objects when making a single directory
 (e.g. hadoop fs -mkdir swift://test.test/dir/ => dir and dir/ created)
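 A hedged illustration of the two directory-marker conventions described above; 
 the object names are examples only, not taken from the attached patch.
 {code}
// Illustrative sketch, not SwiftNativeFileSystem code.
class SwiftDirectoryMarkers {
  // What SwiftNativeFileSystem writes today: a zero-byte object with no trailing slash.
  static final String HADOOP_STYLE_MARKER = "dir";
  // What Swift / Horizon expect: a pseudo-directory object named with a trailing slash.
  static final String HORIZON_STYLE_MARKER = "dir/";
}
 {code}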



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10972) Native Libraries Guide contains mis-spelt build line

2014-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100407#comment-14100407
 ] 

Hadoop QA commented on HADOOP-10972:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12662373/HADOOP-10972.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4495//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4495//console

This message is automatically generated.

 Native Libraries Guide contains mis-spelt build line
 

 Key: HADOOP-10972
 URL: https://issues.apache.org/jira/browse/HADOOP-10972
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.3.0
Reporter: Peter Klavins
  Labels: documentation, newbie
 Attachments: HADOOP-10972.patch


 The Native Libraries Guide mis-spells the define 'skipTests' with a lowercase 
 't' in the build line. The correct build line is:
 {code:none}
 $ mvn package -Pdist,native -DskipTests -Dtar
 {code}
 Patch is to trunk, but is also valid for released versions 2.2.0, 2.3.0, 
 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9913) Document time unit to RpcMetrics.java

2014-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100412#comment-14100412
 ] 

Hadoop QA commented on HADOOP-9913:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12662433/HADOOP-9913.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ipc.TestDecayRpcScheduler

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4496//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4496//console

This message is automatically generated.

 Document time unit to RpcMetrics.java
 -

 Key: HADOOP-9913
 URL: https://issues.apache.org/jira/browse/HADOOP-9913
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 3.0.0, 2.1.0-beta
 Environment: trunk
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9913.2.patch, HADOOP-9913.3.patch, 
 HADOOP-9913.patch


 In o.a.h.ipc.metrics.RpcMetrics.java, metrics are declared as follows:
 {code}
@Metric("Queue time") MutableRate rpcQueueTime;
@Metric("Processsing time") MutableRate rpcProcessingTime;
 {code}
 Since some users may be confused about which unit (sec or msec) is used, the 
 units should be documented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10948) SwiftNativeFileSystem's directory is incompatible with Swift and Horizon

2014-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100429#comment-14100429
 ] 

Hadoop QA commented on HADOOP-10948:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12660621/HADOOP-10948.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1262 javac 
compiler warnings (more than the trunk's current 1260 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-openstack.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4497//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4497//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-openstack.html
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4497//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4497//console

This message is automatically generated.

 SwiftNativeFileSystem's directory is incompatible with Swift and Horizon
 

 Key: HADOOP-10948
 URL: https://issues.apache.org/jira/browse/HADOOP-10948
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Affects Versions: 3.0.0
Reporter: Kazuki OIKAWA
 Fix For: 3.0.0

 Attachments: HADOOP-10948.patch


 SwiftNativeFileSystem's directory representation is a zero-byte file,
 but in Swift / Horizon the directory representation is a trailing slash.
 This incompatibility causes the following issues:
 * SwiftNativeFileSystem can't see a pseudo-directory made by OpenStack Horizon
 * Swift/Horizon can't see a pseudo-directory made by SwiftNativeFileSystem; 
 instead, Swift/Horizon sees a zero-byte file in place of that pseudo-directory.
 * SwiftNativeFileSystem can't see a file if there is no intermediate 
 pseudo-directory object.
 * SwiftNativeFileSystem makes two objects when making a single directory
 (e.g. hadoop fs -mkdir swift://test.test/dir/ => dir and dir/ created)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10948) SwiftNativeFileSystem's directory is incompatible with Swift and Horizon

2014-08-18 Thread Kazuki OIKAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kazuki OIKAWA updated HADOOP-10948:
---

Attachment: HADOOP-10948-2.patch

fixed javac warning

 SwiftNativeFileSystem's directory is incompatible with Swift and Horizon
 

 Key: HADOOP-10948
 URL: https://issues.apache.org/jira/browse/HADOOP-10948
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Affects Versions: 3.0.0
Reporter: Kazuki OIKAWA
 Fix For: 3.0.0

 Attachments: HADOOP-10948-2.patch, HADOOP-10948.patch


 SwiftNativeFileSystem's directory representation is a zero-byte file,
 but in Swift / Horizon the directory representation is a trailing slash.
 This incompatibility causes the following issues:
 * SwiftNativeFileSystem can't see a pseudo-directory made by OpenStack Horizon
 * Swift/Horizon can't see a pseudo-directory made by SwiftNativeFileSystem; 
 instead, Swift/Horizon sees a zero-byte file in place of that pseudo-directory.
 * SwiftNativeFileSystem can't see a file if there is no intermediate 
 pseudo-directory object.
 * SwiftNativeFileSystem makes two objects when making a single directory
 (e.g. hadoop fs -mkdir swift://test.test/dir/ => dir and dir/ created)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10948) SwiftNativeFileSystem's directory is incompatible with Swift and Horizon

2014-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100476#comment-14100476
 ] 

Hadoop QA commented on HADOOP-10948:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12662455/HADOOP-10948-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-openstack.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4498//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4498//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-openstack.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4498//console

This message is automatically generated.

 SwiftNativeFileSystem's directory is incompatible with Swift and Horizon
 

 Key: HADOOP-10948
 URL: https://issues.apache.org/jira/browse/HADOOP-10948
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Affects Versions: 3.0.0
Reporter: Kazuki OIKAWA
 Fix For: 3.0.0

 Attachments: HADOOP-10948-2.patch, HADOOP-10948.patch


 SwiftNativeFileSystem's directory representation is a zero-byte file,
 but in Swift / Horizon the directory representation is a trailing slash.
 This incompatibility causes the following issues:
 * SwiftNativeFileSystem can't see a pseudo-directory made by OpenStack Horizon
 * Swift/Horizon can't see a pseudo-directory made by SwiftNativeFileSystem; 
 instead, Swift/Horizon sees a zero-byte file in place of that pseudo-directory.
 * SwiftNativeFileSystem can't see a file if there is no intermediate 
 pseudo-directory object.
 * SwiftNativeFileSystem makes two objects when making a single directory
 (e.g. hadoop fs -mkdir swift://test.test/dir/ => dir and dir/ created)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10650) Add ability to specify a reverse ACL (black list) of users and groups

2014-08-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100527#comment-14100527
 ] 

Hudson commented on HADOOP-10650:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #650 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/650/])
HADOOP-10650. Add ability to specify a reverse ACL (black list) of users and 
groups. (Contributed by Benoy Antony) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618482)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ServiceLevelAuth.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestServiceAuthorization.java


 Add ability to specify a reverse ACL (black list) of users and groups
 -

 Key: HADOOP-10650
 URL: https://issues.apache.org/jira/browse/HADOOP-10650
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: Benoy Antony
Assignee: Benoy Antony
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10650.patch, HADOOP-10650.patch, 
 HADOOP-10650.patch


 Currently, it is possible to define an ACL (users and groups) for a service. 
 To temporarily remove authorization for a set of users, the administrator needs 
 to remove the users from the specific group, and this may be a lengthy process 
 (update LDAP groups, flush caches on machines).
  If there is a facility to define a reverse ACL for services, then the 
 administrator can disable users by specifying them in the reverse ACL. In 
 other words, one can specify a whitelist of users and groups as well as a 
 blacklist of users and groups. 
 One can also specify a default blacklist to disable users from accessing 
 any service.
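 A rough sketch of the whitelist-plus-blacklist check described above, using 
 hadoop-common's AccessControlList; the class and method names are illustrative, 
 not the actual ServiceAuthorizationManager change:
 {code}
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.AccessControlList;

// Illustrative only: a user is authorized when the whitelist ACL allows them
// and the blacklist (reverse) ACL does not.
class ReverseAclSketch {
  static boolean isAuthorized(UserGroupInformation user,
                              AccessControlList whitelist,
                              AccessControlList blacklist) {
    return whitelist.isUserAllowed(user) && !blacklist.isUserAllowed(user);
  }
}
 {code}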



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10335) An ip whilelist based implementation to resolve Sasl properties per connection

2014-08-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100528#comment-14100528
 ] 

Hudson commented on HADOOP-10335:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #650 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/650/])
HADOOP-10335. An ip whilelist based implementation to resolve Sasl properties 
per connection. (Contributed by Benoy Antony) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618503)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CombinedIPWhiteList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/FileBasedIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/IPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestWhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestCacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestFileBasedIPList.java
HADOOP-10335. Undo checkin to resolve test build issue. (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618487)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CombinedIPWhiteList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/FileBasedIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/IPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestWhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestCacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestFileBasedIPList.java
HADOOP-10335. An ip whilelist based implementation to resolve Sasl properties 
per connection. (Contributed by Benoy Antony) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618484)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CombinedIPWhiteList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/FileBasedIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/IPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestWhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestCacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestFileBasedIPList.java


 An ip whilelist based implementation to resolve Sasl properties per connection
 --

 Key: HADOOP-10335
 URL: https://issues.apache.org/jira/browse/HADOOP-10335
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Benoy Antony
Assignee: Benoy Antony
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10335.patch, HADOOP-10335.patch, 
 HADOOP-10335.patch, HADOOP-10335.patch, HADOOP-10335.patch, HADOOP-10335.pdf


 As noted in HADOOP-10221, it is 

[jira] [Commented] (HADOOP-10893) isolated classloader on the client side

2014-08-18 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100729#comment-14100729
 ] 

Kihwal Lee commented on HADOOP-10893:
-

The latest patch looks good. One thing I am not sure about is leaving the conf 
in mapred-default.xml.  Since the default system classes are in the code, the 
entry in mapred-site.xml kind of serves as documentation, which needs to be 
kept in sync with the code. From a pure functionality point of view, we can 
simply remove them, but then those who want to modify the list would need to 
look at the code?  Maybe we can have the MR logs show the list of system classes 
when the app class loader is activated. Then it might be easier for users to 
figure out the default/current list and modify it. Any other thoughts?

 isolated classloader on the client side
 ---

 Key: HADOOP-10893
 URL: https://issues.apache.org/jira/browse/HADOOP-10893
 Project: Hadoop Common
  Issue Type: New Feature
  Components: util
Affects Versions: 2.4.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
 Attachments: HADOOP-10893.patch, HADOOP-10893.patch, 
 HADOOP-10893.patch, HADOOP-10893.patch, classloader-test.tar.gz


 We have the job classloader on the mapreduce tasks that run on the cluster. 
 It has a benefit of being able to isolate class space for user code and avoid 
 version clashes.
 Although it occurs less often, version clashes do occur on the client JVM. It 
 would be good to introduce an isolated classloader on the client side as well 
 to address this. A natural point to introduce this may be through RunJar, as 
 that's how most of hadoop jobs are run.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10307) Support multiple Authentication mechanisms for HTTP

2014-08-18 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100737#comment-14100737
 ] 

Alejandro Abdelnur commented on HADOOP-10307:
-

Looking at the patch, it duplicates part of the logic done in HADOOP-9054 
(using the user-agent); if we do that, then we should build on HADOOP-9054 
rather than duplicate the code. Also, personally, I don't like the idea of a 
query string param to indicate the auth scheme to use.

The other day, talking with [~daryn], he suggested that to support multiple 
authentication schemes we could use multiple WWW-Authenticate headers. I was not 
aware this was possible; I've checked around and it is. You can either have 
multiple schemes in the same WWW-Authenticate header (parsing that is a bit 
tricky, and HttpClient does not support it yet, see HTTPCLIENT-1489), or you can 
have multiple WWW-Authenticate headers.

IMO, we should do as [~daryn] suggested offline and add support for multiple 
authentication schemes. The AuthenticationFilter would have to support a list 
of AuthenticationHandlers; when a request comes in and is not authenticated 
(because there is no valid cookie and because no handler found authentication 
info in the request), the response should include the challenges of all 
AuthenticationHandlers. The client should then choose the strongest one it 
supports.
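A minimal sketch of the multi-challenge response described above (illustrative 
only, not the AuthenticationFilter API): each configured scheme contributes its 
own WWW-Authenticate header on a 401, and the client picks the strongest scheme 
it supports.
{code}
import java.util.List;
import javax.servlet.http.HttpServletResponse;

class MultiSchemeChallengeSketch {
  // 'challenges' holds example values such as "Negotiate" or "Basic realm=\"hadoop\"".
  static void sendChallenges(HttpServletResponse response, List<String> challenges) {
    for (String challenge : challenges) {
      // One WWW-Authenticate header per scheme is valid HTTP.
      response.addHeader("WWW-Authenticate", challenge);
    }
    response.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
  }
}
{code}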


 Support multiple Authentication mechanisms for HTTP
 ---

 Key: HADOOP-10307
 URL: https://issues.apache.org/jira/browse/HADOOP-10307
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10307.patch, HADOOP-10307.patch, 
 HADOOP-10307.patch


 Currently it is possible to specify a custom Authentication Handler  for HTTP 
 authentication.  
 We have a requirement to support multiple mechanisms  to authenticate HTTP 
 access.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10335) An ip whilelist based implementation to resolve Sasl properties per connection

2014-08-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100765#comment-14100765
 ] 

Hudson commented on HADOOP-10335:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1841 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1841/])
HADOOP-10335. An ip whilelist based implementation to resolve Sasl properties 
per connection. (Contributed by Benoy Antony) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618503)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CombinedIPWhiteList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/FileBasedIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/IPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestWhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestCacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestFileBasedIPList.java
HADOOP-10335. Undo checkin to resolve test build issue. (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618487)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CombinedIPWhiteList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/FileBasedIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/IPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestWhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestCacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestFileBasedIPList.java
HADOOP-10335. An ip whilelist based implementation to resolve Sasl properties 
per connection. (Contributed by Benoy Antony) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618484)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CombinedIPWhiteList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/FileBasedIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/IPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestWhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestCacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestFileBasedIPList.java


 An ip whilelist based implementation to resolve Sasl properties per connection
 --

 Key: HADOOP-10335
 URL: https://issues.apache.org/jira/browse/HADOOP-10335
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Benoy Antony
Assignee: Benoy Antony
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10335.patch, HADOOP-10335.patch, 
 HADOOP-10335.patch, HADOOP-10335.patch, HADOOP-10335.patch, HADOOP-10335.pdf


 As noted in HADOOP-10221, it is 

[jira] [Commented] (HADOOP-10650) Add ability to specify a reverse ACL (black list) of users and groups

2014-08-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100764#comment-14100764
 ] 

Hudson commented on HADOOP-10650:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1841 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1841/])
HADOOP-10650. Add ability to specify a reverse ACL (black list) of users and 
groups. (Contributed by Benoy Antony) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618482)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ServiceLevelAuth.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestServiceAuthorization.java


 Add ability to specify a reverse ACL (black list) of users and groups
 -

 Key: HADOOP-10650
 URL: https://issues.apache.org/jira/browse/HADOOP-10650
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: Benoy Antony
Assignee: Benoy Antony
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10650.patch, HADOOP-10650.patch, 
 HADOOP-10650.patch


 Currently, it is possible to define an ACL (users and groups) for a service. 
 To temporarily remove authorization for a set of users, the administrator needs 
 to remove the users from the specific group, and this may be a lengthy process 
 (update LDAP groups, flush caches on machines).
  If there is a facility to define a reverse ACL for services, then the 
 administrator can disable users by specifying them in the reverse ACL. In 
 other words, one can specify a whitelist of users and groups as well as a 
 blacklist of users and groups. 
 One can also specify a default blacklist to disable users from accessing 
 any service.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10059) RPC authentication and authorization metrics overflow to negative values on busy clusters

2014-08-18 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100810#comment-14100810
 ] 

Jason Lowe commented on HADOOP-10059:
-

+1 lgtm.  Committing this.


 RPC authentication and authorization metrics overflow to negative values on 
 busy clusters
 -

 Key: HADOOP-10059
 URL: https://issues.apache.org/jira/browse/HADOOP-10059
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.23.9, 2.2.0
Reporter: Jason Lowe
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HADOOP-10059.1.patch, HADOOP-10059.2.patch


 The RPC metrics for authorization and authentication successes can easily 
 overflow to negative values on a busy cluster that has been up for a long 
 time.  We should consider providing 64-bit values for these counters.
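 A hedged sketch of the 64-bit counter idea; field names and description strings 
 are illustrative, not necessarily the committed patch:
 {code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Illustrative only: MutableCounterLong is a 64-bit counter, so it will not
// wrap to negative values the way a 32-bit counter does on a busy cluster.
class RpcAuthMetricsSketch {
  @Metric("RPC authentication successes") MutableCounterLong rpcAuthenticationSuccesses;
  @Metric("RPC authorization successes") MutableCounterLong rpcAuthorizationSuccesses;
}
 {code}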



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10059) RPC authentication and authorization metrics overflow to negative values on busy clusters

2014-08-18 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-10059:


   Resolution: Fixed
Fix Version/s: 2.6.0
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks to Tsuyoshi and Akira for the contribution and to Luke for additional 
review!  I committed this to trunk and branch-2.

 RPC authentication and authorization metrics overflow to negative values on 
 busy clusters
 -

 Key: HADOOP-10059
 URL: https://issues.apache.org/jira/browse/HADOOP-10059
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.23.9, 2.2.0
Reporter: Jason Lowe
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10059.1.patch, HADOOP-10059.2.patch


 The RPC metrics for authorization and authentication successes can easily 
 overflow to negative values on a busy cluster that has been up for a long 
 time.  We should consider providing 64-bit values for these counters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10335) An ip whilelist based implementation to resolve Sasl properties per connection

2014-08-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100824#comment-14100824
 ] 

Hudson commented on HADOOP-10335:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1867 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1867/])
HADOOP-10335. An ip whilelist based implementation to resolve Sasl properties 
per connection. (Contributed by Benoy Antony) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618503)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CombinedIPWhiteList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/FileBasedIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/IPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestWhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestCacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestFileBasedIPList.java
HADOOP-10335. Undo checkin to resolve test build issue. (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618487)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CombinedIPWhiteList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/FileBasedIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/IPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestWhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestCacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestFileBasedIPList.java
HADOOP-10335. An ip whilelist based implementation to resolve Sasl properties 
per connection. (Contributed by Benoy Antony) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618484)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CombinedIPWhiteList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/FileBasedIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/IPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestWhitelistBasedResolver.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestCacheableIPList.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestFileBasedIPList.java


 An ip whilelist based implementation to resolve Sasl properties per connection
 --

 Key: HADOOP-10335
 URL: https://issues.apache.org/jira/browse/HADOOP-10335
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Benoy Antony
Assignee: Benoy Antony
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10335.patch, HADOOP-10335.patch, 
 HADOOP-10335.patch, HADOOP-10335.patch, HADOOP-10335.patch, HADOOP-10335.pdf


 As noted in HADOOP-10221, it 

[jira] [Commented] (HADOOP-10650) Add ability to specify a reverse ACL (black list) of users and groups

2014-08-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100823#comment-14100823
 ] 

Hudson commented on HADOOP-10650:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1867 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1867/])
HADOOP-10650. Add ability to specify a reverse ACL (black list) of users and 
groups. (Contributed by Benoy Antony) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618482)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ServiceLevelAuth.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestServiceAuthorization.java


 Add ability to specify a reverse ACL (black list) of users and groups
 -

 Key: HADOOP-10650
 URL: https://issues.apache.org/jira/browse/HADOOP-10650
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: Benoy Antony
Assignee: Benoy Antony
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10650.patch, HADOOP-10650.patch, 
 HADOOP-10650.patch


 Currently, it is possible to define an ACL (users and groups) for a service. 
 To temporarily remove authorization for a set of users, the administrator needs 
 to remove the users from the specific group, and this may be a lengthy process 
 (update LDAP groups, flush caches on machines).
  If there is a facility to define a reverse ACL for services, then the 
 administrator can disable users by specifying them in the reverse ACL. In 
 other words, one can specify a whitelist of users and groups as well as a 
 blacklist of users and groups. 
 One can also specify a default blacklist to disable users from accessing 
 any service.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9913) Document time unit to RpcMetrics.java

2014-08-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100859#comment-14100859
 ] 

Allen Wittenauer commented on HADOOP-9913:
--

With a bit more sleep, I have a few thoughts on this that it would be good for 
someone to validate:

a) Wouldn't this be better as documentation?

b) Doesn't changing the metric like this actually 'change the metric' that is 
collected as well?  I.e., is this an incompatible change?

 Document time unit to RpcMetrics.java
 -

 Key: HADOOP-9913
 URL: https://issues.apache.org/jira/browse/HADOOP-9913
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 3.0.0, 2.1.0-beta
 Environment: trunk
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9913.2.patch, HADOOP-9913.3.patch, 
 HADOOP-9913.patch


 In o.a.h.ipc.metrics.RpcMetrics.java, metrics are declared as follows:
 {code}
@Metric("Queue time") MutableRate rpcQueueTime;
@Metric("Processsing time") MutableRate rpcProcessingTime;
 {code}
 Since some users may be confused about which unit (sec or msec) is used, the 
 units should be documented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10059) RPC authentication and authorization metrics overflow to negative values on busy clusters

2014-08-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100870#comment-14100870
 ] 

Hudson commented on HADOOP-10059:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6085 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6085/])
HADOOP-10059. RPC authentication and authorization metrics overflow to negative 
values on busy clusters. Contributed by Tsuyoshi OZAWA and Akira AJISAKA 
(jlowe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618659)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


 RPC authentication and authorization metrics overflow to negative values on 
 busy clusters
 -

 Key: HADOOP-10059
 URL: https://issues.apache.org/jira/browse/HADOOP-10059
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.23.9, 2.2.0
Reporter: Jason Lowe
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10059.1.patch, HADOOP-10059.2.patch


 The RPC metrics for authorization and authentication successes can easily 
 overflow to negative values on a busy cluster that has been up for a long 
 time.  We should consider providing 64-bit values for these counters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10307) Support multiple Authentication mechanisms for HTTP

2014-08-18 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100917#comment-14100917
 ] 

Benoy Antony commented on HADOOP-10307:
---

Thanks for the comments, [~tucu00].  I like the idea of using headers. I'll 
explore that and update the patch. 

 Support multiple Authentication mechanisms for HTTP
 ---

 Key: HADOOP-10307
 URL: https://issues.apache.org/jira/browse/HADOOP-10307
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10307.patch, HADOOP-10307.patch, 
 HADOOP-10307.patch


 Currently it is possible to specify a custom Authentication Handler  for HTTP 
 authentication.  
 We have a requirement to support multiple mechanisms  to authenticate HTTP 
 access.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10893) isolated classloader on the client side

2014-08-18 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100931#comment-14100931
 ] 

Sangjin Lee commented on HADOOP-10893:
--

Thanks for the comment, Kihwal.

I agree that, with the default now in the source directly, re-defining the same 
default in mapred-default.xml is less than optimal. I like the idea of printing 
out the system classes when the application classloader is instantiated.

Having said that, how about the definition of the environment variable on the 
client classloader usage side (which was added in the latest patch)? To be 
symmetric, I think it should be removed again as well.

Thoughts?

 isolated classloader on the client side
 ---

 Key: HADOOP-10893
 URL: https://issues.apache.org/jira/browse/HADOOP-10893
 Project: Hadoop Common
  Issue Type: New Feature
  Components: util
Affects Versions: 2.4.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
 Attachments: HADOOP-10893.patch, HADOOP-10893.patch, 
 HADOOP-10893.patch, HADOOP-10893.patch, classloader-test.tar.gz


 We have the job classloader on the mapreduce tasks that run on the cluster. 
 It has a benefit of being able to isolate class space for user code and avoid 
 version clashes.
 Although it occurs less often, version clashes do occur on the client JVM. It 
 would be good to introduce an isolated classloader on the client side as well 
 to address this. A natural point to introduce this may be through RunJar, as 
 that's how most of hadoop jobs are run.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10973) Native Libraries Guide contains format error at Usage point 4.

2014-08-18 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100946#comment-14100946
 ] 

Arpit Agarwal commented on HADOOP-10973:


+1 I'll commit this soon.

 Native Libraries Guide contains format error at Usage point 4.
 --

 Key: HADOOP-10973
 URL: https://issues.apache.org/jira/browse/HADOOP-10973
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Peter Klavins
Priority: Minor
  Labels: apt, documentation, xdocs
 Fix For: 3.0.0

 Attachments: HADOOP-10973.patch


 The move from xdocs to APT introduced a formatting bug so that the sub-list 
 under Usage point 4 was merged into the text itself and no longer appeared as 
 a sub-list. Compare xdocs version 
 http://hadoop.apache.org/docs/r1.2.1/native_libraries.html#Native+Hadoop+Library
  to APT version 
 http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/NativeLibraries.html#Native_Hadoop_Library.
 The patch is to trunk, but is also valid for released versions 0.23.11, 
 2.2.0, 2.3.0, 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if 
 deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10975) org.apache.hadoop.util.DataChecksum should support native checksumming

2014-08-18 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100960#comment-14100960
 ] 

Todd Lipcon commented on HADOOP-10975:
--

(Moved to the HADOOP project since this covers only the Hadoop-side changes.)

 org.apache.hadoop.util.DataChecksum should support native checksumming
 --

 Key: HADOOP-10975
 URL: https://issues.apache.org/jira/browse/HADOOP-10975
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance
Reporter: James Thomas
Assignee: James Thomas
 Attachments: HDFS-6561.2.patch, HDFS-6561.3.patch, HDFS-6561.4.patch, 
 HDFS-6561.5.patch, HDFS-6561.patch, hdfs-6561-just-hadoop-changes.txt






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Moved] (HADOOP-10975) org.apache.hadoop.util.DataChecksum should support native checksumming

2014-08-18 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon moved HDFS-6561 to HADOOP-10975:


Component/s: (was: performance)
 (was: hdfs-client)
 (was: datanode)
 performance
Key: HADOOP-10975  (was: HDFS-6561)
Project: Hadoop Common  (was: Hadoop HDFS)

 org.apache.hadoop.util.DataChecksum should support native checksumming
 --

 Key: HADOOP-10975
 URL: https://issues.apache.org/jira/browse/HADOOP-10975
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance
Reporter: James Thomas
Assignee: James Thomas
 Attachments: HDFS-6561.2.patch, HDFS-6561.3.patch, HDFS-6561.4.patch, 
 HDFS-6561.5.patch, HDFS-6561.patch, hdfs-6561-just-hadoop-changes.txt






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10893) isolated classloader on the client side

2014-08-18 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100957#comment-14100957
 ] 

Sangjin Lee commented on HADOOP-10893:
--

On the other hand, mapred-default.xml had a good description of the format of 
the system classes value:

{panel}
A comma-separated list of classes that should be loaded from the system 
classpath, not the user-supplied JARs, when mapreduce.job.classloader is 
enabled. Names ending in '.' (period) are treated as package names, and names 
starting with a '-' are treated as negative matches.
{panel}

We could move that to the javadoc of ApplicationClassLoader, but that's a 
little less than satisfying, as users (not developers) are the ones who need to 
override this value.
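
As a concrete illustration of that format, here is a hypothetical override from a 
job driver. It assumes the mapreduce.job.classloader.system.classes key that 
mapred-default.xml documents and, per the description above, that an earlier 
negative entry wins over a later broader prefix; the value is made up and far 
shorter than the real default:

{code}
// Hypothetical override (org.apache.hadoop.conf.Configuration); the value
// below is illustrative only, not the shipped default list.
Configuration conf = new Configuration();
conf.setBoolean("mapreduce.job.classloader", true);
// Entries ending in '.' are package prefixes; the leading '-' makes the
// (made-up) org.apache.hadoop.examples. prefix load from the user JARs even
// though the broader org.apache.hadoop. prefix is listed as a system class.
conf.set("mapreduce.job.classloader.system.classes",
    "java.,javax.,-org.apache.hadoop.examples.,org.apache.hadoop.");
{code}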

 isolated classloader on the client side
 ---

 Key: HADOOP-10893
 URL: https://issues.apache.org/jira/browse/HADOOP-10893
 Project: Hadoop Common
  Issue Type: New Feature
  Components: util
Affects Versions: 2.4.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
 Attachments: HADOOP-10893.patch, HADOOP-10893.patch, 
 HADOOP-10893.patch, HADOOP-10893.patch, classloader-test.tar.gz


 We have the job classloader on the mapreduce tasks that run on the cluster. 
 It has a benefit of being able to isolate class space for user code and avoid 
 version clashes.
 Although it occurs less often, version clashes do occur on the client JVM. It 
 would be good to introduce an isolated classloader on the client side as well 
 to address this. A natural point to introduce this may be through RunJar, as 
 that's how most of hadoop jobs are run.
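
 As a rough sketch of the idea, RunJar could build such a loader along the lines 
 below. This assumes a constructor shaped like ApplicationClassLoader(URL[], 
 ClassLoader, List<String>) and an already unpacked job jar directory; the class 
 name and wiring here are illustrative, not the actual patch:
 {code}
 import java.io.File;
 import java.net.URL;
 import java.util.ArrayList;
 import java.util.List;
 import org.apache.hadoop.util.ApplicationClassLoader;
 import org.apache.hadoop.util.RunJar;

 public class IsolatedRunJarSketch {
   // Build an isolating classloader over the unpacked job jar directory.
   public static ClassLoader makeIsolatedLoader(File unpackedJarDir,
       List<String> systemClasses) throws Exception {
     List<URL> cp = new ArrayList<URL>();
     cp.add(unpackedJarDir.toURI().toURL());        // classes from the job jar
     File libDir = new File(unpackedJarDir, "lib");
     File[] libs = libDir.isDirectory() ? libDir.listFiles() : new File[0];
     for (File lib : libs) {
       cp.add(lib.toURI().toURL());                 // bundled dependencies
     }
     // Only the configured system prefixes delegate to RunJar's own loader;
     // everything else resolves from the job jar first.
     return new ApplicationClassLoader(cp.toArray(new URL[0]),
         RunJar.class.getClassLoader(), systemClasses);
   }
 }
 {code}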



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10975) org.apache.hadoop.util.DataChecksum should support native checksumming

2014-08-18 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14100966#comment-14100966
 ] 

Colin Patrick McCabe commented on HADOOP-10975:
---

Hmm.  I just committed this under the old HDFS name.  I also tried to change it 
to HADOOP, but found it couldn't be done without making it no longer a subtask.

 org.apache.hadoop.util.DataChecksum should support native checksumming
 --

 Key: HADOOP-10975
 URL: https://issues.apache.org/jira/browse/HADOOP-10975
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance
Reporter: James Thomas
Assignee: James Thomas
 Fix For: 2.6.0

 Attachments: HDFS-6561.2.patch, HDFS-6561.3.patch, HDFS-6561.4.patch, 
 HDFS-6561.5.patch, HDFS-6561.patch, hdfs-6561-just-hadoop-changes.txt






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10975) org.apache.hadoop.util.DataChecksum should support native checksumming

2014-08-18 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-10975:
--

   Resolution: Fixed
Fix Version/s: 2.6.0
   Status: Resolved  (was: Patch Available)

 org.apache.hadoop.util.DataChecksum should support native checksumming
 --

 Key: HADOOP-10975
 URL: https://issues.apache.org/jira/browse/HADOOP-10975
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance
Reporter: James Thomas
Assignee: James Thomas
 Fix For: 2.6.0

 Attachments: HDFS-6561.2.patch, HDFS-6561.3.patch, HDFS-6561.4.patch, 
 HDFS-6561.5.patch, HDFS-6561.patch, hdfs-6561-just-hadoop-changes.txt






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10973) Native Libraries Guide contains format error

2014-08-18 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10973:
---

Assignee: Peter Klavins

 Native Libraries Guide contains format error
 

 Key: HADOOP-10973
 URL: https://issues.apache.org/jira/browse/HADOOP-10973
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Peter Klavins
Assignee: Peter Klavins
Priority: Minor
  Labels: apt, documentation, xdocs
 Fix For: 3.0.0

 Attachments: HADOOP-10973.patch


 The move from xdocs to APT introduced a formatting bug so that the sub-list 
 under Usage point 4 was merged into the text itself and no longer appeared as 
 a sub-list. Compare xdocs version 
 http://hadoop.apache.org/docs/r1.2.1/native_libraries.html#Native+Hadoop+Library
  to APT version 
 http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/NativeLibraries.html#Native_Hadoop_Library.
 The patch is to trunk, but is also valid for released versions 0.23.11, 
 2.2.0, 2.3.0, 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if 
 deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10973) Native Libraries Guide contains format error

2014-08-18 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10973:
---

Summary: Native Libraries Guide contains format error  (was: Native 
Libraries Guide contains format error at Usage point 4.)

 Native Libraries Guide contains format error
 

 Key: HADOOP-10973
 URL: https://issues.apache.org/jira/browse/HADOOP-10973
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Peter Klavins
Priority: Minor
  Labels: apt, documentation, xdocs
 Fix For: 3.0.0

 Attachments: HADOOP-10973.patch


 The move from xdocs to APT introduced a formatting bug so that the sub-list 
 under Usage point 4 was merged into the text itself and no longer appeared as 
 a sub-list. Compare xdocs version 
 http://hadoop.apache.org/docs/r1.2.1/native_libraries.html#Native+Hadoop+Library
  to APT version 
 http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/NativeLibraries.html#Native_Hadoop_Library.
 The patch is to trunk, but is also valid for released versions 0.23.11, 
 2.2.0, 2.3.0, 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if 
 deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10973) Native Libraries Guide contains format error

2014-08-18 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10973:
---

  Resolution: Fixed
   Fix Version/s: 2.6.0
Target Version/s: 2.6.0  (was: 3.0.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2. Thanks for the contribution [~PKRoma]. I added 
you as a contributor and assigned the issue to you.

 Native Libraries Guide contains format error
 

 Key: HADOOP-10973
 URL: https://issues.apache.org/jira/browse/HADOOP-10973
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Peter Klavins
Assignee: Peter Klavins
Priority: Minor
  Labels: apt, documentation, xdocs
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10973.patch


 The move from xdocs to APT introduced a formatting bug so that the sub-list 
 under Usage point 4 was merged into the text itself and no longer appeared as 
 a sub-list. Compare xdocs version 
 http://hadoop.apache.org/docs/r1.2.1/native_libraries.html#Native+Hadoop+Library
  to APT version 
 http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/NativeLibraries.html#Native_Hadoop_Library.
 The patch is to trunk, but is also valid for released versions 0.23.11, 
 2.2.0, 2.3.0, 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if 
 deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10975) org.apache.hadoop.util.DataChecksum should support native checksumming

2014-08-18 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14100975#comment-14100975
 ] 

Todd Lipcon commented on HADOOP-10975:
--

yea, Colin and I had a race condition. I was renaming the JIRA while he was 
committing it under the old name. I updated CHANGES.txt to refer to the new 
name.

Anyway, thanks for the contribution, James

 org.apache.hadoop.util.DataChecksum should support native checksumming
 --

 Key: HADOOP-10975
 URL: https://issues.apache.org/jira/browse/HADOOP-10975
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance
Reporter: James Thomas
Assignee: James Thomas
 Fix For: 2.6.0

 Attachments: HDFS-6561.2.patch, HDFS-6561.3.patch, HDFS-6561.4.patch, 
 HDFS-6561.5.patch, HDFS-6561.patch, hdfs-6561-just-hadoop-changes.txt






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10975) org.apache.hadoop.util.DataChecksum should support native checksum calculation

2014-08-18 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-10975:
-

Summary: org.apache.hadoop.util.DataChecksum should support native checksum 
calculation  (was: org.apache.hadoop.util.DataChecksum should support native 
checksumming)

 org.apache.hadoop.util.DataChecksum should support native checksum calculation
 --

 Key: HADOOP-10975
 URL: https://issues.apache.org/jira/browse/HADOOP-10975
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance
Reporter: James Thomas
Assignee: James Thomas
 Fix For: 2.6.0

 Attachments: HDFS-6561.2.patch, HDFS-6561.3.patch, HDFS-6561.4.patch, 
 HDFS-6561.5.patch, HDFS-6561.patch, hdfs-6561-just-hadoop-changes.txt






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10893) isolated classloader on the client side

2014-08-18 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14100979#comment-14100979
 ] 

Sangjin Lee commented on HADOOP-10893:
--

I suppose we can remove the (redundant) value but keep the description. I'll 
post an updated patch shortly.

 isolated classloader on the client side
 ---

 Key: HADOOP-10893
 URL: https://issues.apache.org/jira/browse/HADOOP-10893
 Project: Hadoop Common
  Issue Type: New Feature
  Components: util
Affects Versions: 2.4.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
 Attachments: HADOOP-10893.patch, HADOOP-10893.patch, 
 HADOOP-10893.patch, HADOOP-10893.patch, classloader-test.tar.gz


 We have the job classloader on the mapreduce tasks that run on the cluster. 
 It has a benefit of being able to isolate class space for user code and avoid 
 version clashes.
 Although it occurs less often, version clashes do occur on the client JVM. It 
 would be good to introduce an isolated classloader on the client side as well 
 to address this. A natural point to introduce this may be through RunJar, as 
 that's how most of hadoop jobs are run.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-4921) libhdfs.html needs to be updated to point to the new location

2014-08-18 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-4921:
-

Target Version/s: 3.0.0, 2.6.0  (was: 2.6.0)
  Status: Open  (was: Patch Available)

 libhdfs.html needs to be updated to point to the new location 
 --

 Key: HADOOP-4921
 URL: https://issues.apache.org/jira/browse/HADOOP-4921
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.5.0
Reporter: Giridharan Kesavan
  Labels: newbie
 Attachments: HADOOP4921-01.patch


 libhdfs.so has now moved to a different location, but libhdfs.html is 
 still pointing to the old location, hence the document has to be updated to 
 point to the new location 'c++/os_osarch_jvmdatamodel/lib'
 see bug for details
 https://issues.apache.org/jira/browse/HADOOP-3344
 Thanks,
 -Giri



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10975) org.apache.hadoop.util.DataChecksum should support native checksum calculation

2014-08-18 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101000#comment-14101000
 ] 

Colin Patrick McCabe commented on HADOOP-10975:
---

thanks, Todd and James.

 org.apache.hadoop.util.DataChecksum should support native checksum calculation
 --

 Key: HADOOP-10975
 URL: https://issues.apache.org/jira/browse/HADOOP-10975
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance
Reporter: James Thomas
Assignee: James Thomas
 Fix For: 2.6.0

 Attachments: HDFS-6561.2.patch, HDFS-6561.3.patch, HDFS-6561.4.patch, 
 HDFS-6561.5.patch, HDFS-6561.patch, hdfs-6561-just-hadoop-changes.txt






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-7713) dfs -count -q should label output column

2014-08-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101008#comment-14101008
 ] 

Allen Wittenauer commented on HADOOP-7713:
--

The problem with creating a flag to show something that will be the default is 
that a year from now we'll have wasted a perfectly good flag.

 dfs -count -q should label output column
 

 Key: HADOOP-7713
 URL: https://issues.apache.org/jira/browse/HADOOP-7713
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Nigel Daley
Assignee: Jonathan Allen
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-7713.patch, HADOOP-7713.patch, HADOOP-7713.patch, 
 HADOOP-7713.patch, HADOOP-7713.patch, HADOOP-7713.patch, HADOOP-7713.patch, 
 HADOOP-7713.patch, HADOOP-7713.patch


 These commands should label the output columns:
 {code}
 hadoop dfs -count dir...dir
 hadoop dfs -count -q dir...dir
 {code}
 Current output of the 2nd command above:
 {code}
 % hadoop dfs -count -q /user/foo /tmp
 none inf 9569 9493 6372553322 
 hdfs://nn1.bar.com/user/foo
 none inf  101 2689   209349812906 
 hdfs://nn1.bar.com/tmp
 {code}
 It is not obvious what these columns mean.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10968) hadoop common fails to detect java_libarch on ppc64le

2014-08-18 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101131#comment-14101131
 ] 

Colin Patrick McCabe commented on HADOOP-10968:
---

Thanks for this, Dinar.

{code}
+ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64le")
+    IF(EXISTS "${_JAVA_HOME}/jre/lib/ppc64le")
+        SET(_java_libarch "ppc64le")
+    ELSE()
+        SET(_java_libarch "ppc64")
+    ENDIF()
{code}

I don't understand how this works with big-endian powerpcs.  I don't think this 
regex will match "powerpc" or "ppc", but only "powerpc64le" and "ppc64le", 
right?  Do big-endian powerpcs show up as "ppc64le"?  That seems odd.

I also don't understand why we're checking for the existence of a directory to 
decide on the architecture... is this really the only option?

Of course, I might be completely off base here, since I don't have access to 
any ppc systems.

 hadoop common fails to detect java_libarch on ppc64le
 -

 Key: HADOOP-10968
 URL: https://issues.apache.org/jira/browse/HADOOP-10968
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Dinar Valeev
 Fix For: 0.23.2

 Attachments: 0001-Set-java_libarch-for-ppc64le-2.patch


 [INFO] 
 [INFO] --- maven-antrun-plugin:1.7:run (make) @ hadoop-common ---
 [INFO] Executing tasks
 main:
  [exec] -- The C compiler identification is GNU 4.8.3
  [exec] -- The CXX compiler identification is GNU 4.8.3
  [exec] -- Check for working C compiler: /usr/bin/cc
  [exec] -- Check for working C compiler: /usr/bin/cc -- works
  [exec] -- Detecting C compiler ABI info
  [exec] -- Detecting C compiler ABI info - done
  [exec] -- Check for working CXX compiler: /usr/bin/c++
  [exec] -- Check for working CXX compiler: /usr/bin/c++ -- works
  [exec] JAVA_HOME=, JAVA_JVM_LIBRARY=JAVA_JVM_LIBRARY-NOTFOUND
  [exec] 
 JAVA_INCLUDE_PATH=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include, 
 JAVA_INCLUDE_PATH2=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include/linux
  [exec] CMake Error at JNIFlags.cmake:114 (MESSAGE):
  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
  [exec] Call Stack (most recent call first):
  [exec]   CMakeLists.txt:24 (include)
  [exec] 
  [exec] 
  [exec] -- Detecting CXX compiler ABI info
  [exec] -- Detecting CXX compiler ABI info - done
  [exec] -- Configuring incomplete, errors occurred!
  [exec] See also 
 /root/bigtop/build/hadoop/rpm/BUILD/hadoop-2.3.0-src/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeOutput.log.
 [INFO] 
 
 [INFO] Reactor Summary:
 [INFO] 
 [INFO] Apache Hadoop Main . SUCCESS [ 10.680 
 s]
 [INFO] Apache Hadoop Project POM .. SUCCESS [  0.716 
 s]
 [INFO] Apache Hadoop Annotations .. SUCCESS [  3.270 
 s]
 [INFO] Apache Hadoop Assemblies ... SUCCESS [  0.274 
 s]
 [INFO] Apache Hadoop Project Dist POM . SUCCESS [  1.819 
 s]
 [INFO] Apache Hadoop Maven Plugins  SUCCESS [  3.284 
 s]
 [INFO] Apache Hadoop MiniKDC .. SUCCESS [  2.863 
 s]
 [INFO] Apache Hadoop Auth . SUCCESS [  4.032 
 s]
 [INFO] Apache Hadoop Auth Examples  SUCCESS [  2.475 
 s]
 [INFO] Apache Hadoop Common ... FAILURE [ 10.458 
 s]
 [INFO] Apache Hadoop NFS .. SKIPPED
 [INFO] Apache Hadoop Common Project ... SKIPPED
 [INFO] Apache Hadoop HDFS . SKIPPED
 [INFO] Apache Hadoop HttpFS ... SKIPPED
 [INFO] Apache Hadoop HDFS BookKeeper Journal .. SKIPPED
 [INFO] Apache Hadoop HDFS-NFS . SKIPPED
 [INFO] Apache Hadoop HDFS Project . SKIPPED
 [INFO] hadoop-yarn  SKIPPED
 [INFO] hadoop-yarn-api  SKIPPED
 [INFO] hadoop-yarn-common . SKIPPED
 [INFO] hadoop-yarn-server . SKIPPED
 [INFO] hadoop-yarn-server-common .. SKIPPED
 [INFO] hadoop-yarn-server-nodemanager . SKIPPED
 [INFO] hadoop-yarn-server-web-proxy ... SKIPPED
 [INFO] hadoop-yarn-server-resourcemanager . SKIPPED
 [INFO] hadoop-yarn-server-tests ... SKIPPED
 [INFO] hadoop-yarn-client . SKIPPED
 [INFO] hadoop-yarn-applications 

[jira] [Commented] (HADOOP-10956) Fix create-release script to include docs in the binary

2014-08-18 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101148#comment-14101148
 ] 

Karthik Kambatla commented on HADOOP-10956:
---

The script should also copy LICENSE, NOTICE and README txt files to the 
top-level directory. 

 Fix create-release script to include docs in the binary
 ---

 Key: HADOOP-10956
 URL: https://issues.apache.org/jira/browse/HADOOP-10956
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.5.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker

 The create-release script doesn't include docs in the binary tarball. We 
 should fix that. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10972) Native Libraries Guide contains mis-spelt build line

2014-08-18 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10972:
--

Assignee: Peter Klavins

 Native Libraries Guide contains mis-spelt build line
 

 Key: HADOOP-10972
 URL: https://issues.apache.org/jira/browse/HADOOP-10972
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.3.0
Reporter: Peter Klavins
Assignee: Peter Klavins
  Labels: documentation, newbie
 Attachments: HADOOP-10972.patch


 The Native Libraries Guide mis-spells the define 'skipTests' with a lowercase 
 't' in the build line. The correct build line is:
 {code:none}
 $ mvn package -Pdist,native -DskipTests -Dtar
 {code}
 Patch is to trunk, but is also valid for released versions 2.2.0, 2.3.0, 
 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10972) Native Libraries Guide contains mis-spelt build line

2014-08-18 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10972:
--

   Resolution: Fixed
Fix Version/s: 2.6.0
   3.0.0
   Status: Resolved  (was: Patch Available)

+1. Committing to trunk and branch-2.

Thanks!

 Native Libraries Guide contains mis-spelt build line
 

 Key: HADOOP-10972
 URL: https://issues.apache.org/jira/browse/HADOOP-10972
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.3.0
Reporter: Peter Klavins
Assignee: Peter Klavins
  Labels: documentation, newbie
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10972.patch


 The Native Libraries Guide mis-spells the define 'skipTests' with a lowercase 
 't' in the build line. The correct build line is:
 {code:none}
 $ mvn package -Pdist,native -DskipTests -Dtar
 {code}
 Patch is to trunk, but is also valid for released versions 2.2.0, 2.3.0, 
 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10884) Fix dead link in Configuration javadoc

2014-08-18 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10884:
--

   Resolution: Fixed
Fix Version/s: 2.6.0
   3.0.0
   Status: Resolved  (was: Patch Available)

+1. Committing to trunk and branch-2.  

Thanks!

 Fix dead link in Configuration javadoc
 --

 Key: HADOOP-10884
 URL: https://issues.apache.org/jira/browse/HADOOP-10884
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 2.0.2-alpha
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10884.patch


 In [Configuration 
 javadoc|http://hadoop.apache.org/docs/r2.4.1/api/org/apache/hadoop/conf/Configuration.html],
  the link to core-default.xml is dead. We should fix it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10946) Fix a bunch of typos in log messages

2014-08-18 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-10946:


Status: Open  (was: Patch Available)

Will submit renamed patch for proper testing.

 Fix a bunch of typos in log messages
 

 Key: HADOOP-10946
 URL: https://issues.apache.org/jira/browse/HADOOP-10946
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Ray Chiang
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-10946-04.patch, HADOOP10946-01.patch, 
 HADOOP10946-02.patch, HADOOP10946-03.patch


 There are a bunch of typos in various log messages.  These need cleaning up.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10946) Fix a bunch of typos in log messages

2014-08-18 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-10946:


Status: Patch Available  (was: Open)

Submit for testing.

 Fix a bunch of typos in log messages
 

 Key: HADOOP-10946
 URL: https://issues.apache.org/jira/browse/HADOOP-10946
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Ray Chiang
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-10946-04.patch, HADOOP10946-01.patch, 
 HADOOP10946-02.patch, HADOOP10946-03.patch


 There are a bunch of typos in various log messages.  These need cleaning up.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10946) Fix a bunch of typos in log messages

2014-08-18 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-10946:


Attachment: HADOOP-10946-04.patch

Renamed patch for gathering proper test results.

 Fix a bunch of typos in log messages
 

 Key: HADOOP-10946
 URL: https://issues.apache.org/jira/browse/HADOOP-10946
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Ray Chiang
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-10946-04.patch, HADOOP10946-01.patch, 
 HADOOP10946-02.patch, HADOOP10946-03.patch


 There are a bunch of typos in various log messages.  These need cleaning up.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10321) TestCompositeService should cover all enumerations of adding a service to a parent service

2014-08-18 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-10321:


Attachment: HADOOP-10321-02.patch

Renamed file for getting test results.

 TestCompositeService should cover all enumerations of adding a service to a 
 parent service
 --

 Key: HADOOP-10321
 URL: https://issues.apache.org/jira/browse/HADOOP-10321
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Karthik Kambatla
Assignee: Naren Koneru
  Labels: supportability, test
 Attachments: HADOOP-10321-02.patch, HADOOP10321-01.patch


 HADOOP-10085 fixes some synchronization issues in 
 CompositeService#addService(). The tests should cover all cases. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10971) Flag to make `hadoop fs -ls` print filenames only

2014-08-18 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101200#comment-14101200
 ] 

Colin Patrick McCabe commented on HADOOP-10971:
---

Adding {{\-C}} sounds like a good idea.

 Flag to make `hadoop fs -ls` print filenames only
 -

 Key: HADOOP-10971
 URL: https://issues.apache.org/jira/browse/HADOOP-10971
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.3.0
Reporter: Ryan Williams

 It would be useful to have a flag that made {{hadoop fs -ls}} only print 
 filenames, instead of full {{stat}} info. The {{-C}} flag from GNU {{ls}} 
 is the closest analog to this behavior that I've found, so I propose that as 
 the flag.
 Per [this stackoverflow answer|http://stackoverflow.com/a/21574829], I've 
 reluctantly added a {{hadoop-ls-C}} wrapper that expands to {{hadoop fs -ls 
 "$@" | sed 1d | perl -wlne'print +(split " ",$_,8)\[7\]'}} to a few projects 
 I've worked on, and it would obviously be nice to have hadoop save me (and 
 others) from such hackery.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10615) FileInputStream in JenkinsHash#main() is never closed

2014-08-18 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-10615:
-

Attachment: HADOOP-10615-2.patch

update patch based on latest Hadoop

 FileInputStream in JenkinsHash#main() is never closed
 -

 Key: HADOOP-10615
 URL: https://issues.apache.org/jira/browse/HADOOP-10615
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
 Attachments: HADOOP-10615-2.patch, HADOOP-10615.patch


 {code}
 FileInputStream in = new FileInputStream(args[0]);
 {code}
 The above FileInputStream is not closed upon exit of main.
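
 A minimal sketch of the kind of fix intended here, assuming Java 7 
 try-with-resources is acceptable (otherwise an equivalent finally block would 
 do); the actual hashing loop of JenkinsHash#main is elided:
 {code}
 import java.io.FileInputStream;
 import java.io.IOException;

 public class JenkinsHashSketch {
   public static void main(String[] args) throws IOException {
     // try-with-resources closes the stream even if reading or hashing throws
     try (FileInputStream in = new FileInputStream(args[0])) {
       byte[] buf = new byte[512];
       int read;
       while ((read = in.read(buf)) >= 0) {
         // ... feed buf[0..read) to the hash, as the real main() does ...
       }
     }
   }
 }
 {code}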



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101212#comment-14101212
 ] 

Suresh Srinivas commented on HADOOP-9902:
-

I reviewed the code as best I could. I only reviewed the core hadoop and hdfs 
changes. It is really hard given that code formatting changes are mixed with 
real improvements, etc. This is a change that could have been done in a feature 
branch. [~aw], reviews certainly could have been made easier that way. That 
said, thank you for cleaning up the scripts. It looks much better now!

Comments:
# bin/hadoop no longer checks for the hdfs commands portmap and nfs3. Is this 
intentional?
# hadoop-daemon.sh no longer prints the optional --hosts parameter in its usage; 
this is intentional, right? Also, do all daemons now support the status option 
along with start and stop?
# locating HADOOP_PREFIX is repeated in bin/hadoop and hadoop-daemon.sh (this 
can be optimized in a future patch)
# start-all.sh and stop-all.sh exit with a warning. Why retain the code after 
that? Do you expect users to delete the exit at the beginning?
# hadoop_error is not used in some cases and echo is still used. 
# hadoop-env.sh - we should document the GC configuration for max, min, and 
young generation starting and max sizes. Also, I think the secondary namenode 
should just be set to the primary namenode settings. This can be done in another 
jira. BTW nice job explicitly specifying the overridable functions in 
hadoop-env.sh!
# cowsay is cute. But can get annoying :). Hopefully hadoop_usage is in every 
script (I checked, it is).


 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
 HADOOP-9902-4.patch, HADOOP-9902-5.patch, HADOOP-9902-6.patch, 
 HADOOP-9902-7.patch, HADOOP-9902-8.patch, HADOOP-9902-9.patch, 
 HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101217#comment-14101217
 ] 

Suresh Srinivas commented on HADOOP-9902:
-

BTW I forgot to include the main part of my comment. +1 for the patch with the 
comments addressed (and the comments which explicitly state that things can be 
done in another jira can be done separately).

Thanks [~aw] for the rewrite!

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
 HADOOP-9902-4.patch, HADOOP-9902-5.patch, HADOOP-9902-6.patch, 
 HADOOP-9902-7.patch, HADOOP-9902-8.patch, HADOOP-9902-9.patch, 
 HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10968) hadoop common fails to detect java_libarch on ppc64le

2014-08-18 Thread Dinar Valeev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101221#comment-14101221
 ] 

Dinar Valeev commented on HADOOP-10968:
---

On ppc64 (big-endian) the Java arch matches ppc64, while on ppc64le it can be 
ppc64 (OpenJDK) or ppc64le (IBM Java).
OpenJDK since 2.5 sets ppc64 as the Java arch; OpenJDK lower than 2.5 has ppc64le. 
IBM Java still sets ppc64le, so we need that check for backward compatibility.
It is not possible to run LE code on ppc64, so there is no chance of seeing a 
ppc64le libarch on big-endian. 

 hadoop common fails to detect java_libarch on ppc64le
 -

 Key: HADOOP-10968
 URL: https://issues.apache.org/jira/browse/HADOOP-10968
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Dinar Valeev
 Fix For: 0.23.2

 Attachments: 0001-Set-java_libarch-for-ppc64le-2.patch


 [INFO] 
 [INFO] --- maven-antrun-plugin:1.7:run (make) @ hadoop-common ---
 [INFO] Executing tasks
 main:
  [exec] -- The C compiler identification is GNU 4.8.3
  [exec] -- The CXX compiler identification is GNU 4.8.3
  [exec] -- Check for working C compiler: /usr/bin/cc
  [exec] -- Check for working C compiler: /usr/bin/cc -- works
  [exec] -- Detecting C compiler ABI info
  [exec] -- Detecting C compiler ABI info - done
  [exec] -- Check for working CXX compiler: /usr/bin/c++
  [exec] -- Check for working CXX compiler: /usr/bin/c++ -- works
  [exec] JAVA_HOME=, JAVA_JVM_LIBRARY=JAVA_JVM_LIBRARY-NOTFOUND
  [exec] 
 JAVA_INCLUDE_PATH=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include, 
 JAVA_INCLUDE_PATH2=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include/linux
  [exec] CMake Error at JNIFlags.cmake:114 (MESSAGE):
  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
  [exec] Call Stack (most recent call first):
  [exec]   CMakeLists.txt:24 (include)
  [exec] 
  [exec] 
  [exec] -- Detecting CXX compiler ABI info
  [exec] -- Detecting CXX compiler ABI info - done
  [exec] -- Configuring incomplete, errors occurred!
  [exec] See also 
 /root/bigtop/build/hadoop/rpm/BUILD/hadoop-2.3.0-src/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeOutput.log.
 [INFO] 
 
 [INFO] Reactor Summary:
 [INFO] 
 [INFO] Apache Hadoop Main . SUCCESS [ 10.680 
 s]
 [INFO] Apache Hadoop Project POM .. SUCCESS [  0.716 
 s]
 [INFO] Apache Hadoop Annotations .. SUCCESS [  3.270 
 s]
 [INFO] Apache Hadoop Assemblies ... SUCCESS [  0.274 
 s]
 [INFO] Apache Hadoop Project Dist POM . SUCCESS [  1.819 
 s]
 [INFO] Apache Hadoop Maven Plugins  SUCCESS [  3.284 
 s]
 [INFO] Apache Hadoop MiniKDC .. SUCCESS [  2.863 
 s]
 [INFO] Apache Hadoop Auth . SUCCESS [  4.032 
 s]
 [INFO] Apache Hadoop Auth Examples  SUCCESS [  2.475 
 s]
 [INFO] Apache Hadoop Common ... FAILURE [ 10.458 
 s]
 [INFO] Apache Hadoop NFS .. SKIPPED
 [INFO] Apache Hadoop Common Project ... SKIPPED
 [INFO] Apache Hadoop HDFS . SKIPPED
 [INFO] Apache Hadoop HttpFS ... SKIPPED
 [INFO] Apache Hadoop HDFS BookKeeper Journal .. SKIPPED
 [INFO] Apache Hadoop HDFS-NFS . SKIPPED
 [INFO] Apache Hadoop HDFS Project . SKIPPED
 [INFO] hadoop-yarn  SKIPPED
 [INFO] hadoop-yarn-api  SKIPPED
 [INFO] hadoop-yarn-common . SKIPPED
 [INFO] hadoop-yarn-server . SKIPPED
 [INFO] hadoop-yarn-server-common .. SKIPPED
 [INFO] hadoop-yarn-server-nodemanager . SKIPPED
 [INFO] hadoop-yarn-server-web-proxy ... SKIPPED
 [INFO] hadoop-yarn-server-resourcemanager . SKIPPED
 [INFO] hadoop-yarn-server-tests ... SKIPPED
 [INFO] hadoop-yarn-client . SKIPPED
 [INFO] hadoop-yarn-applications ... SKIPPED
 [INFO] hadoop-yarn-applications-distributedshell .. SKIPPED
 [INFO] hadoop-yarn-applications-unmanaged-am-launcher . SKIPPED
 [INFO] hadoop-yarn-site ... SKIPPED
 [INFO] hadoop-yarn-project  SKIPPED
 [INFO] hadoop-mapreduce-client 

[jira] [Updated] (HADOOP-10321) TestCompositeService should cover all enumerations of adding a service to a parent service

2014-08-18 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-10321:
---

Status: Open  (was: Patch Available)

 TestCompositeService should cover all enumerations of adding a service to a 
 parent service
 --

 Key: HADOOP-10321
 URL: https://issues.apache.org/jira/browse/HADOOP-10321
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Karthik Kambatla
Assignee: Naren Koneru
  Labels: supportability, test
 Attachments: HADOOP-10321-02.patch, HADOOP10321-01.patch


 HADOOP-10085 fixes some synchronization issues in 
 CompositeService#addService(). The tests should cover all cases. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10321) TestCompositeService should cover all enumerations of adding a service to a parent service

2014-08-18 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-10321:


Target Version/s:   (was: 2.4.0)
  Status: Patch Available  (was: Open)

Re-submit for testing.

 TestCompositeService should cover all enumerations of adding a service to a 
 parent service
 --

 Key: HADOOP-10321
 URL: https://issues.apache.org/jira/browse/HADOOP-10321
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Karthik Kambatla
Assignee: Naren Koneru
  Labels: supportability, test
 Attachments: HADOOP-10321-02.patch, HADOOP10321-01.patch


 HADOOP-10085 fixes some synchronization issues in 
 CompositeService#addService(). The tests should cover all cases. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10321) TestCompositeService should cover all enumerations of adding a service to a parent service

2014-08-18 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101237#comment-14101237
 ] 

Robert Kanter commented on HADOOP-10321:


LGTM

 TestCompositeService should cover all enumerations of adding a service to a 
 parent service
 --

 Key: HADOOP-10321
 URL: https://issues.apache.org/jira/browse/HADOOP-10321
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Karthik Kambatla
Assignee: Naren Koneru
  Labels: supportability, test
 Attachments: HADOOP-10321-02.patch, HADOOP10321-01.patch


 HADOOP-10085 fixes some synchronization issues in 
 CompositeService#addService(). The tests should cover all cases. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10334) make user home directory customizable

2014-08-18 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101246#comment-14101246
 ] 

Colin Patrick McCabe commented on HADOOP-10334:
---

We should also do this for FileContext (see HADOOP-10944.)

 make user home directory customizable
 -

 Key: HADOOP-10334
 URL: https://issues.apache.org/jira/browse/HADOOP-10334
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.2.0
Reporter: Kevin Odell
Assignee: Yongjun Zhang
Priority: Minor
 Attachments: HADOOP-10334.001.patch


 The path is currently hardcoded:
 public Path getHomeDirectory() {
   return makeQualified(new Path("/user/" + dfs.ugi.getShortUserName()));
 }
 It would be nice to have that as a customizable value.  
 Thank you
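
 For illustration, a configurable prefix inside DistributedFileSystem could look 
 roughly like the sketch below; the key name and default are hypothetical, and 
 the actual patch may structure this differently:
 {code}
 // Hypothetical key and default, for illustration only.
 public static final String HOME_DIR_PREFIX_KEY = "dfs.user.home.dir.prefix";
 public static final String HOME_DIR_PREFIX_DEFAULT = "/user";

 public Path getHomeDirectory() {
   // Falls back to today's hard-coded behaviour when the key is unset.
   String prefix = getConf().get(HOME_DIR_PREFIX_KEY, HOME_DIR_PREFIX_DEFAULT);
   return makeQualified(new Path(prefix + "/" + dfs.ugi.getShortUserName()));
 }
 {code}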



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10970) Cleanup KMS configuration keys

2014-08-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-10970:
-

Attachment: hadoop-10970.002.patch

Here's a patch that just touches KMS stuff.

 Cleanup KMS configuration keys
 --

 Key: HADOOP-10970
 URL: https://issues.apache.org/jira/browse/HADOOP-10970
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-10970.001.patch, hadoop-10970.002.patch


 It'd be nice to add descriptions to the config keys in kms-site.xml.
 Also, it'd be good to rename key.provider.path to key.provider.uri for 
 clarity, or just drop .path.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10893) isolated classloader on the client side

2014-08-18 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-10893:
-

Attachment: HADOOP-10893.patch

Updated the patch:
- removed the redundant default definitions from mapred-default.xml and 
hadoop-config.sh/cmd
- clarified the descriptions so that the respective configuration is to 
override the default, not to define it

 isolated classloader on the client side
 ---

 Key: HADOOP-10893
 URL: https://issues.apache.org/jira/browse/HADOOP-10893
 Project: Hadoop Common
  Issue Type: New Feature
  Components: util
Affects Versions: 2.4.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
 Attachments: HADOOP-10893.patch, HADOOP-10893.patch, 
 HADOOP-10893.patch, HADOOP-10893.patch, HADOOP-10893.patch, 
 classloader-test.tar.gz


 We have the job classloader on the mapreduce tasks that run on the cluster. 
 It has a benefit of being able to isolate class space for user code and avoid 
 version clashes.
 Although it occurs less often, version clashes do occur on the client JVM. It 
 would be good to introduce an isolated classloader on the client side as well 
 to address this. A natural point to introduce this may be through RunJar, as 
 that's how most of hadoop jobs are run.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10968) hadoop common fails to detect java_libarch on ppc64le

2014-08-18 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101303#comment-14101303
 ] 

Colin Patrick McCabe commented on HADOOP-10968:
---

OK, so this patch only helps little-endian PPC machines.  That's fine.  Someone 
else can contribute a patch for the big-endian ones later if they want.

I am +1 on this patch.  Will wait a day or so to commit in case anyone else 
wants to comment.

 hadoop common fails to detect java_libarch on ppc64le
 -

 Key: HADOOP-10968
 URL: https://issues.apache.org/jira/browse/HADOOP-10968
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Dinar Valeev
 Fix For: 0.23.2

 Attachments: 0001-Set-java_libarch-for-ppc64le-2.patch


 [INFO] 
 [INFO] --- maven-antrun-plugin:1.7:run (make) @ hadoop-common ---
 [INFO] Executing tasks
 main:
  [exec] -- The C compiler identification is GNU 4.8.3
  [exec] -- The CXX compiler identification is GNU 4.8.3
  [exec] -- Check for working C compiler: /usr/bin/cc
  [exec] -- Check for working C compiler: /usr/bin/cc -- works
  [exec] -- Detecting C compiler ABI info
  [exec] -- Detecting C compiler ABI info - done
  [exec] -- Check for working CXX compiler: /usr/bin/c++
  [exec] -- Check for working CXX compiler: /usr/bin/c++ -- works
  [exec] JAVA_HOME=, JAVA_JVM_LIBRARY=JAVA_JVM_LIBRARY-NOTFOUND
  [exec] 
 JAVA_INCLUDE_PATH=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include, 
 JAVA_INCLUDE_PATH2=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include/linux
  [exec] CMake Error at JNIFlags.cmake:114 (MESSAGE):
  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
  [exec] Call Stack (most recent call first):
  [exec]   CMakeLists.txt:24 (include)
  [exec] 
  [exec] 
  [exec] -- Detecting CXX compiler ABI info
  [exec] -- Detecting CXX compiler ABI info - done
  [exec] -- Configuring incomplete, errors occurred!
  [exec] See also 
 /root/bigtop/build/hadoop/rpm/BUILD/hadoop-2.3.0-src/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeOutput.log.
 [INFO] 
 
 [INFO] Reactor Summary:
 [INFO] 
 [INFO] Apache Hadoop Main . SUCCESS [ 10.680 
 s]
 [INFO] Apache Hadoop Project POM .. SUCCESS [  0.716 
 s]
 [INFO] Apache Hadoop Annotations .. SUCCESS [  3.270 
 s]
 [INFO] Apache Hadoop Assemblies ... SUCCESS [  0.274 
 s]
 [INFO] Apache Hadoop Project Dist POM . SUCCESS [  1.819 
 s]
 [INFO] Apache Hadoop Maven Plugins  SUCCESS [  3.284 
 s]
 [INFO] Apache Hadoop MiniKDC .. SUCCESS [  2.863 
 s]
 [INFO] Apache Hadoop Auth . SUCCESS [  4.032 
 s]
 [INFO] Apache Hadoop Auth Examples  SUCCESS [  2.475 
 s]
 [INFO] Apache Hadoop Common ... FAILURE [ 10.458 
 s]
 [INFO] Apache Hadoop NFS .. SKIPPED
 [INFO] Apache Hadoop Common Project ... SKIPPED
 [INFO] Apache Hadoop HDFS . SKIPPED
 [INFO] Apache Hadoop HttpFS ... SKIPPED
 [INFO] Apache Hadoop HDFS BookKeeper Journal .. SKIPPED
 [INFO] Apache Hadoop HDFS-NFS . SKIPPED
 [INFO] Apache Hadoop HDFS Project . SKIPPED
 [INFO] hadoop-yarn  SKIPPED
 [INFO] hadoop-yarn-api  SKIPPED
 [INFO] hadoop-yarn-common . SKIPPED
 [INFO] hadoop-yarn-server . SKIPPED
 [INFO] hadoop-yarn-server-common .. SKIPPED
 [INFO] hadoop-yarn-server-nodemanager . SKIPPED
 [INFO] hadoop-yarn-server-web-proxy ... SKIPPED
 [INFO] hadoop-yarn-server-resourcemanager . SKIPPED
 [INFO] hadoop-yarn-server-tests ... SKIPPED
 [INFO] hadoop-yarn-client . SKIPPED
 [INFO] hadoop-yarn-applications ... SKIPPED
 [INFO] hadoop-yarn-applications-distributedshell .. SKIPPED
 [INFO] hadoop-yarn-applications-unmanaged-am-launcher . SKIPPED
 [INFO] hadoop-yarn-site ... SKIPPED
 [INFO] hadoop-yarn-project  SKIPPED
 [INFO] hadoop-mapreduce-client  SKIPPED
 [INFO] hadoop-mapreduce-client-core ... SKIPPED
 [INFO] 

[jira] [Updated] (HADOOP-9601) Support native CRC on byte arrays

2014-08-18 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-9601:


Resolution: Incomplete
Status: Resolved  (was: Patch Available)

Closing this issue.

btw, I found a bad interaction between GC and getArrayCritical when the 
memory is fragmented.

This is faster until it gets slow all of a sudden.

Please pass in the isCopy and run with G1GC to make sure it is doing zero-copy 
ops for getArrayRegion.

 Support native CRC on byte arrays
 -

 Key: HADOOP-9601
 URL: https://issues.apache.org/jira/browse/HADOOP-9601
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance, util
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Gopal V
  Labels: perfomance
 Attachments: HADOOP-9601-WIP-01.patch, HADOOP-9601-WIP-02.patch, 
 HADOOP-9601-bench.patch, HADOOP-9601-rebase+benchmark.patch, 
 HADOOP-9601-trunk-rebase-2.patch, HADOOP-9601-trunk-rebase.patch


 When we first implemented the Native CRC code, we only did so for direct byte 
 buffers, because these correspond directly to native heap memory and thus 
 make it easy to access via JNI. We'd generally assumed that accessing byte[] 
 arrays from JNI was not efficient enough, but now that I know more about JNI 
 I don't think that's true -- we just need to make sure that the critical 
 sections where we lock the buffers are short.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10973) Native Libraries Guide contains format error

2014-08-18 Thread Peter Klavins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Klavins updated HADOOP-10973:
---

Attachment: image004.png
image003.png
image002.png
image001.png

Hi Arpit,

 

Thanks for adding me as a contributor, I appreciate it. Is there any further 
involvement required from me as the reporter in the issue management process? I 
have confirmed that the source code has been updated in trunk and branch-2, I 
just can’t see any documentation advising me as to whether there is any further 
step I should take as the reporter, e.g., close the issue.

 

Regards,

 

Peter



 Peter Klavins

  mailto:klav...@netspace.net.au klav...@netspace.net.au

 Mobile +353 87 693 9879

 

From: Arpit Agarwal (JIRA) [mailto:j...@apache.org] 
Sent: Monday, 18 August 2014 7:10 PM
To: klav...@netspace.net.au
Subject: [jira] [Updated] (HADOOP-10973) Native Libraries Guide contains format 
error

 








 Native Libraries Guide contains format error
 

 Key: HADOOP-10973
 URL: https://issues.apache.org/jira/browse/HADOOP-10973
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Peter Klavins
Assignee: Peter Klavins
Priority: Minor
  Labels: apt, documentation, xdocs
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10973.patch, image001.png, image002.png, 
 image003.png, image004.png


 The move from xdocs to APT introduced a formatting bug so that the sub-list 
 under Usage point 4 was merged into the text itself and no longer appeared as 
 a sub-list. Compare xdocs version 
 http://hadoop.apache.org/docs/r1.2.1/native_libraries.html#Native+Hadoop+Library
  to APT version 
 http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/NativeLibraries.html#Native_Hadoop_Library.
 The patch is to trunk, but is also valid for released versions 0.23.11, 
 2.2.0, 2.3.0, 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if 
 deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10282) Create a FairCallQueue: a multi-level call queue which schedules incoming calls and multiplexes outgoing calls

2014-08-18 Thread Chris Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101320#comment-14101320
 ] 

Chris Li commented on HADOOP-10282:
---

I think you're right, that doc was from an older version without that quirk. 
Attaching patch with revised doc shortly.

 Create a FairCallQueue: a multi-level call queue which schedules incoming 
 calls and multiplexes outgoing calls
 --

 Key: HADOOP-10282
 URL: https://issues.apache.org/jira/browse/HADOOP-10282
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Chris Li
Assignee: Chris Li
 Attachments: HADOOP-10282.patch, HADOOP-10282.patch


 The FairCallQueue ensures quality of service by altering the order of RPC 
 calls internally. 
 It consists of three parts:
 1. a Scheduler (`HistoryRpcScheduler` is provided) which provides a priority 
 number from 0 to N (0 being highest priority)
 2. a Multi-level queue (residing in `FairCallQueue`) which provides a way to 
 keep calls in priority order internally
 3. a Multiplexer (`WeightedRoundRobinMultiplexer` is provided) which provides 
 logic to control which queue to take from
 Currently the Mux and Scheduler are not pluggable, but they probably should 
 be (up for discussion).
 This is how it is used:
 // Production
 1. Call is created and given to the CallQueueManager
 2. CallQueueManager requests a `put(T call)` into the `FairCallQueue` which 
 implements `BlockingQueue`
 3. `FairCallQueue` asks its scheduler for a scheduling decision, which is an 
 integer e.g. 12
 4. `FairCallQueue` inserts Call into the 12th queue: 
 `queues.get(12).put(call)`
 // Consumption
 1. CallQueueManager requests `take()` or `poll()` on FairCallQueue
 2. `FairCallQueue` asks its multiplexer for which queue to draw from, which 
 will also be an integer e.g. 2
 3. `FairCallQueue` draws from this queue if it has an available call (or 
 tries other queues if it is empty)
 Additional information is available in the linked JIRAs regarding the 
 Scheduler and Multiplexer's roles.
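
 To make the production/consumption flow above concrete, here is a heavily 
 simplified, hypothetical sketch of a multi-level queue wired to a pluggable 
 scheduler and multiplexer; it ignores locking details, capacity, blocking 
 take(), and the real class hierarchy:
 {code}
 import java.util.ArrayList;
 import java.util.List;
 import java.util.concurrent.LinkedBlockingQueue;

 class FairCallQueueSketch<T> {
   interface Scheduler<T> { int priorityOf(T call); }   // scheduling decision
   interface Multiplexer { int nextQueueIndex(); }      // which queue to draw from

   private final List<LinkedBlockingQueue<T>> queues =
       new ArrayList<LinkedBlockingQueue<T>>();
   private final Scheduler<T> scheduler;
   private final Multiplexer mux;

   FairCallQueueSketch(int levels, Scheduler<T> scheduler, Multiplexer mux) {
     for (int i = 0; i < levels; i++) {
       queues.add(new LinkedBlockingQueue<T>());
     }
     this.scheduler = scheduler;
     this.mux = mux;
   }

   // Production: the scheduler picks a priority, the call goes into that queue.
   void put(T call) throws InterruptedException {
     queues.get(scheduler.priorityOf(call)).put(call);
   }

   // Consumption: the multiplexer picks a starting queue; empty queues are
   // skipped by scanning the remaining levels in order.
   T poll() {
     int start = mux.nextQueueIndex();
     for (int i = 0; i < queues.size(); i++) {
       T call = queues.get((start + i) % queues.size()).poll();
       if (call != null) {
         return call;
       }
     }
     return null;
   }
 }
 {code}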



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HADOOP-10973) Native Libraries Guide contains format error

2014-08-18 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101316#comment-14101316
 ] 

Arpit Agarwal edited comment on HADOOP-10973 at 8/18/14 9:19 PM:
-

Hi Arpit,

 

Thanks for adding me as a contributor, I appreciate it. Is there any further 
involvement required from me as the reporter in the issue management process? I 
have confirmed that the source code has been updated in trunk and branch-2, I 
just can’t see any documentation advising me as to whether there is any further 
step I should take as the reporter, e.g., close the issue.

 

Regards,

 

Peter



 Peter Klavins

  mailto:klav...@netspace.net.au klav...@netspace.net.au

 Mobile +353 87 693 9879



was (Author: pkroma):
Hi Arpit,

 

Thanks for adding me as a contributor, I appreciate it. Is there any further 
involvement required from me as the reporter in the issue management process? I 
have confirmed that the source code has been updated in trunk and branch-2, I 
just can’t see any documentation advising me as to whether there is any further 
step I should take as the reporter, e.g., close the issue.

 

Regards,

 

Peter



 Peter Klavins

  mailto:klav...@netspace.net.au klav...@netspace.net.au

 Mobile +353 87 693 9879

 

From: Arpit Agarwal (JIRA) [mailto:j...@apache.org] 
Sent: Monday, 18 August 2014 7:10 PM
To: klav...@netspace.net.au
Subject: [jira] [Updated] (HADOOP-10973) Native Libraries Guide contains format 
error

 








 Native Libraries Guide contains format error
 

 Key: HADOOP-10973
 URL: https://issues.apache.org/jira/browse/HADOOP-10973
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Peter Klavins
Assignee: Peter Klavins
Priority: Minor
  Labels: apt, documentation, xdocs
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10973.patch, image001.png, image002.png, 
 image003.png, image004.png


 The move from xdocs to APT introduced a formatting bug so that the sub-list 
 under Usage point 4 was merged into the text itself and no longer appeared as 
 a sub-list. Compare xdocs version 
 http://hadoop.apache.org/docs/r1.2.1/native_libraries.html#Native+Hadoop+Library
  to APT version 
 http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/NativeLibraries.html#Native_Hadoop_Library.
 The patch is to trunk, but is also valid for released versions 0.23.11, 
 2.2.0, 2.3.0, 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if 
 deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10973) Native Libraries Guide contains format error

2014-08-18 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101330#comment-14101330
 ] 

Arpit Agarwal commented on HADOOP-10973:


Hi Peter, no action is needed from you. The issue will be closed when 2.6.0 is 
released.

 Native Libraries Guide contains format error
 

 Key: HADOOP-10973
 URL: https://issues.apache.org/jira/browse/HADOOP-10973
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Peter Klavins
Assignee: Peter Klavins
Priority: Minor
  Labels: apt, documentation, xdocs
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10973.patch, image001.png, image002.png, 
 image003.png, image004.png


 The move from xdocs to APT introduced a formatting bug so that the sub-list 
 under Usage point 4 was merged into the text itself and no longer appeared as 
 a sub-list. Compare xdocs version 
 http://hadoop.apache.org/docs/r1.2.1/native_libraries.html#Native+Hadoop+Library
  to APT version 
 http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/NativeLibraries.html#Native_Hadoop_Library.
 The patch is to trunk, but is also valid for released versions 0.23.11, 
 2.2.0, 2.3.0, 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if 
 deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10970) Cleanup KMS configuration keys

2014-08-18 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101334#comment-14101334
 ] 

Alejandro Abdelnur commented on HADOOP-10970:
-

+1 pending jenkins.

 Cleanup KMS configuration keys
 --

 Key: HADOOP-10970
 URL: https://issues.apache.org/jira/browse/HADOOP-10970
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-10970.001.patch, hadoop-10970.002.patch


 It'd be nice to add descriptions to the config keys in kms-site.xml.
 Also, it'd be good to rename key.provider.path to key.provider.uri for 
 clarity, or just drop .path.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9601) Support native CRC on byte arrays

2014-08-18 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101335#comment-14101335
 ] 

Colin Patrick McCabe commented on HADOOP-9601:
--

bq. btw, I found out a bad interaction between GC & getArrayCritical when 
the memory is fragmented.  This is faster until it gets slow all of a sudden.  
Please pass in the isCopy and run with G1GC to make sure it is doing zero-copy 
ops for getArrayRegion.

Interesting.

The documentation says this about {{GetPrimitiveArrayCritical}}:

bq. After calling GetPrimitiveArrayCritical, the native code should not run for 
an extended period of time before it calls ReleasePrimitiveArrayCritical. We 
must treat the code inside this pair of functions as running in a critical 
region. Inside a critical region, native code must not call other JNI 
functions, or any system call that may cause the current thread to block and 
wait for another Java thread. (For example, the current thread must not call 
read on a stream being written by another Java thread.)

This is exactly what we're doing in the HADOOP-10838 patch.  We call 
{{GetPrimitiveArrayCritical}}, do the checksums, and then immediately call 
{{ReleasePrimitiveArrayCritical}}.  If the JVM chooses not to take the 
zero-copy route, we can't override its decision.  And we can't access that 
array without calling one of the accessor functions.  So I don't know how this 
could be improved; do you have any ideas?
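
For reference, that call sequence in sketch form (this is not the HADOOP-10838 patch 
itself; the class/method names and the bit-by-bit CRC kernel below are made-up stand-ins 
for the real native checksum code):

{code}
/*
 * Sketch only -- not the HADOOP-10838 patch.  Class/method names
 * (example.NativeCrc32, crc32_chunk) are made up; the bit-by-bit CRC is a
 * stand-in for the real native checksum kernel.
 */
#include <jni.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in checksum kernel: any pure function over the buffer works here. */
static uint32_t crc32_chunk(uint32_t crc, const uint8_t *buf, size_t len) {
  for (size_t i = 0; i < len; i++) {
    crc ^= buf[i];
    for (int b = 0; b < 8; b++) {
      crc = (crc >> 1) ^ (0xEDB88320u & (~(crc & 1u) + 1u));
    }
  }
  return crc;
}

JNIEXPORT jint JNICALL
Java_example_NativeCrc32_nativeCrc32(JNIEnv *env, jclass clazz,
                                     jbyteArray data, jint off, jint len) {
  jboolean is_copy = JNI_FALSE;
  jbyte *base;
  (void) clazz;

  base = (*env)->GetPrimitiveArrayCritical(env, data, &is_copy);
  if (base == NULL) {
    return 0;  /* OutOfMemoryError is already pending in the JVM */
  }

  /* Critical region: no JNI calls, no blocking -- just the checksum. */
  uint32_t crc = crc32_chunk(0xFFFFFFFFu, (const uint8_t *)(base + off),
                             (size_t) len);

  /* JNI_ABORT because the buffer was only read; nothing to copy back. */
  (*env)->ReleasePrimitiveArrayCritical(env, data, base, JNI_ABORT);

  /* is_copy records whether the VM pinned the array or handed back a copy;
     it could be logged for the diagnostics suggested above. */
  return (jint)(crc ^ 0xFFFFFFFFu);
}
{code}

The point is only that nothing between the get and the release blocks or calls back 
into the JVM, which is what keeps the critical region short.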

 Support native CRC on byte arrays
 -

 Key: HADOOP-9601
 URL: https://issues.apache.org/jira/browse/HADOOP-9601
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance, util
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Gopal V
  Labels: perfomance
 Attachments: HADOOP-9601-WIP-01.patch, HADOOP-9601-WIP-02.patch, 
 HADOOP-9601-bench.patch, HADOOP-9601-rebase+benchmark.patch, 
 HADOOP-9601-trunk-rebase-2.patch, HADOOP-9601-trunk-rebase.patch


 When we first implemented the Native CRC code, we only did so for direct byte 
 buffers, because these correspond directly to native heap memory and thus 
 make it easy to access via JNI. We'd generally assumed that accessing byte[] 
 arrays from JNI was not efficient enough, but now that I know more about JNI 
 I don't think that's true -- we just need to make sure that the critical 
 sections where we lock the buffers are short.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9601) Support native CRC on byte arrays

2014-08-18 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101342#comment-14101342
 ] 

Todd Lipcon commented on HADOOP-9601:
-

Hey Gopal,

As far as I know, no existing production garbage collector actually fragments 
arrays. Instead, if you try to allocate an array larger than the available 
contiguous free heap space, it will force a compacting GC inline with the 
allocation.

This is confirmed by the JDK7 source:

{code}
JNI_ENTRY(void*, jni_GetPrimitiveArrayCritical(JNIEnv *env, jarray array, jboolean *isCopy))
  JNIWrapper("GetPrimitiveArrayCritical");
  DTRACE_PROBE3(hotspot_jni, GetPrimitiveArrayCritical__entry, env, array, isCopy);
  GC_locker::lock_critical(thread);
  if (isCopy != NULL) {
    *isCopy = JNI_FALSE;
  }
  oop a = JNIHandles::resolve_non_null(array);
  assert(a->is_array(), "just checking");
  BasicType type;
  if (a->is_objArray()) {
    type = T_OBJECT;
  } else {
    type = typeArrayKlass::cast(a->klass())->element_type();
  }
  void* ret = arrayOop(a)->base(type);
  DTRACE_PROBE1(hotspot_jni, GetPrimitiveArrayCritical__return, ret);
  return ret;
JNI_END
{code}

Note that {{isCopy}} is always set to JNI_FALSE. I checked the jdk9 source as 
well 
(http://hg.openjdk.java.net/jdk9/hs/hotspot/file/7a0fe19ac034/src/share/vm/prims/jni.cpp)
 and found the same.

So, not sure why GetPrimitiveArrayCritical would ever slow down. The only 
reason it would ever go slow is if the {{GC_locker::lock_critical}} call blocks 
- this is the case if there is a pending safepoint. So, maybe in your 
application there were other threads which were blocking safepoints for a long 
time, and GetPrimitiveArrayCritical was feeling the effects?
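
One way to check that hypothesis (a sketch only; the class and method names are made 
up and this is not part of any attached patch) is to time the 
{{GetPrimitiveArrayCritical}} entry on its own, separately from the checksum work:

{code}
/*
 * Sketch only -- not part of any attached patch.  Times the
 * GetPrimitiveArrayCritical entry by itself, so a blocked
 * GC_locker::lock_critical shows up separately from the checksum work.
 */
#include <jni.h>
#include <stdio.h>
#include <time.h>

static long long elapsed_ns(struct timespec a, struct timespec b) {
  return (b.tv_sec - a.tv_sec) * 1000000000LL + (b.tv_nsec - a.tv_nsec);
}

JNIEXPORT void JNICALL
Java_example_CriticalTimer_probe(JNIEnv *env, jclass clazz, jbyteArray data) {
  struct timespec t0, t1;
  void *p;
  (void) clazz;

  clock_gettime(CLOCK_MONOTONIC, &t0);
  p = (*env)->GetPrimitiveArrayCritical(env, data, NULL);
  clock_gettime(CLOCK_MONOTONIC, &t1);

  if (p != NULL) {
    (*env)->ReleasePrimitiveArrayCritical(env, data, p, JNI_ABORT);
  }

  /* Printed outside the critical region; long entry times point at
     safepoint/GC_locker contention rather than at a copy being made. */
  fprintf(stderr, "GetPrimitiveArrayCritical entry took %lld ns\n",
          elapsed_ns(t0, t1));
}
{code}

If the entry call is the part that occasionally takes a long time, the slowdown is 
GC_locker/safepoint contention rather than the VM silently making a copy.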

 Support native CRC on byte arrays
 -

 Key: HADOOP-9601
 URL: https://issues.apache.org/jira/browse/HADOOP-9601
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance, util
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Gopal V
  Labels: perfomance
 Attachments: HADOOP-9601-WIP-01.patch, HADOOP-9601-WIP-02.patch, 
 HADOOP-9601-bench.patch, HADOOP-9601-rebase+benchmark.patch, 
 HADOOP-9601-trunk-rebase-2.patch, HADOOP-9601-trunk-rebase.patch


 When we first implemented the Native CRC code, we only did so for direct byte 
 buffers, because these correspond directly to native heap memory and thus 
 make it easy to access via JNI. We'd generally assumed that accessing byte[] 
 arrays from JNI was not efficient enough, but now that I know more about JNI 
 I don't think that's true -- we just need to make sure that the critical 
 sections where we lock the buffers are short.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10973) Native Libraries Guide contains format error

2014-08-18 Thread Peter Klavins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Klavins updated HADOOP-10973:
---

Attachment: (was: image004.png)

 Native Libraries Guide contains format error
 

 Key: HADOOP-10973
 URL: https://issues.apache.org/jira/browse/HADOOP-10973
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Peter Klavins
Assignee: Peter Klavins
Priority: Minor
  Labels: apt, documentation, xdocs
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10973.patch, image001.png


 The move from xdocs to APT introduced a formatting bug so that the sub-list 
 under Usage point 4 was merged into the text itself and no longer appeared as 
 a sub-list. Compare xdocs version 
 http://hadoop.apache.org/docs/r1.2.1/native_libraries.html#Native+Hadoop+Library
  to APT version 
 http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/NativeLibraries.html#Native_Hadoop_Library.
 The patch is to trunk, but is also valid for released versions 0.23.11, 
 2.2.0, 2.3.0, 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if 
 deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10973) Native Libraries Guide contains format error

2014-08-18 Thread Peter Klavins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Klavins updated HADOOP-10973:
---

Attachment: (was: image002.png)

 Native Libraries Guide contains format error
 

 Key: HADOOP-10973
 URL: https://issues.apache.org/jira/browse/HADOOP-10973
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Peter Klavins
Assignee: Peter Klavins
Priority: Minor
  Labels: apt, documentation, xdocs
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10973.patch, image001.png


 The move from xdocs to APT introduced a formatting bug so that the sub-list 
 under Usage point 4 was merged into the text itself and no longer appeared as 
 a sub-list. Compare xdocs version 
 http://hadoop.apache.org/docs/r1.2.1/native_libraries.html#Native+Hadoop+Library
  to APT version 
 http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/NativeLibraries.html#Native_Hadoop_Library.
 The patch is to trunk, but is also valid for released versions 0.23.11, 
 2.2.0, 2.3.0, 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if 
 deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10973) Native Libraries Guide contains format error

2014-08-18 Thread Peter Klavins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Klavins updated HADOOP-10973:
---

Attachment: (was: image003.png)

 Native Libraries Guide contains format error
 

 Key: HADOOP-10973
 URL: https://issues.apache.org/jira/browse/HADOOP-10973
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Peter Klavins
Assignee: Peter Klavins
Priority: Minor
  Labels: apt, documentation, xdocs
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10973.patch, image001.png


 The move from xdocs to APT introduced a formatting bug so that the sub-list 
 under Usage point 4 was merged into the text itself and no longer appeared as 
 a sub-list. Compare xdocs version 
 http://hadoop.apache.org/docs/r1.2.1/native_libraries.html#Native+Hadoop+Library
  to APT version 
 http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/NativeLibraries.html#Native_Hadoop_Library.
 The patch is to trunk, but is also valid for released versions 0.23.11, 
 2.2.0, 2.3.0, 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if 
 deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10973) Native Libraries Guide contains format error

2014-08-18 Thread Peter Klavins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Klavins updated HADOOP-10973:
---

Attachment: (was: image001.png)

 Native Libraries Guide contains format error
 

 Key: HADOOP-10973
 URL: https://issues.apache.org/jira/browse/HADOOP-10973
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Peter Klavins
Assignee: Peter Klavins
Priority: Minor
  Labels: apt, documentation, xdocs
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10973.patch


 The move from xdocs to APT introduced a formatting bug so that the sub-list 
 under Usage point 4 was merged into the text itself and no longer appeared as 
 a sub-list. Compare xdocs version 
 http://hadoop.apache.org/docs/r1.2.1/native_libraries.html#Native+Hadoop+Library
  to APT version 
 http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/NativeLibraries.html#Native_Hadoop_Library.
 The patch is to trunk, but is also valid for released versions 0.23.11, 
 2.2.0, 2.3.0, 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if 
 deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101361#comment-14101361
 ] 

Allen Wittenauer commented on HADOOP-9902:
--

bq. bin/hadoop no longer checks for hdfs commands portmap and nfs3. Is this 
intentional?

Yes.  Those commands were never hooked into the hadoop command in the Apache 
source that I saw... but I guess I could have missed one?  In any case, I 
didn't see a reason to have an explicit check for something that never existed 
as a result, especially considering how much other, actually deprecated stuff 
is there.

(It could be argued that for trunk all of these deprecations should be removed 
since it's going to be a major release since they were put in. In other words, 
they were in 1.x, deprecated in 2.x, and if this is going into 3.x, we could 
remove them. There's some discussion on that in this jira.)

bq. hadoop-daemon.sh usage no longer prints the --hosts optional parameter in usage; 
this is intentional right? 

Correct.  --hosts only ever worked, as far back as I looked, with 
hadoop-daemons.sh (plural) and related commands.  The --hosts in 
hadoop-daemon.sh's (singular) usage was a very longstanding (and amusing) bug.

bq. Also, do all daemons now support the status option along with start and stop?

If those daemons use the shell daemon framework (hadoop_daemon_handler, 
hadoop_secure_daemon_handler, etc) in hadoop-functions, yes.  So, barring bugs 
or different functionality in the Java code, this should cover all current 
daemons started by yarn, hdfs, and mapred. This means kms, httpfs, etc, that 
haven't been converted yet unfortunately do not. I've got another jira open to 
rewrite those to use the new stuff.

To cover what I suspect is the future question, if one adds a daemon following 
the pattern (daemon=true being the big one) to the current commands, that 
daemon will get the status handling and more (stop, start, logs, pids, etc) for 
free.  This also means that if we add, e.g. 'restart', all daemons will get it 
too.  Consolidating all of this daemon handling makes this much much easier.  
There is some other cleanup that should probably happen here to make it easier 
to add new --daemon capability though. (e.g., changing hadoop_usage everywhere 
is a pain.)

bq. locating HADOOP_PREFIX is repeated in bin/hadoop and hadoop-daemon.sh (this 
can be optimized in a future patch)

It's intentional because we need to run through the initialization code to find 
where the hdfs command lives.  Totally agree it's ugly, but with the 
hadoop-layout.sh code that was introduced in 0.21, we're sort of stuck here. 
FWIW, mapred and yarn have the same ugliness.

bq. start-all.sh and stop-all.sh exit with a warning. Why retain the code after 
that? Expect users to delete the exit at the beginning?

I started to clean this up but realized it could wait. So at some point, I plan 
to clean this up and make it functional, esp wrt HADOOP-6590 and some... 
tricks. ;)  I didn't see any harm in leaving the code there for reference. 
Plus, as you noticed, if someone wanted to make their own, they could pull it 
out, delete those lines, and be on their way.

bq.  hadoop_error is not used in some cases and still echo is used.

Correct.  hadoop_error isn't defined yet in some situations so the script has 
to echo to stderr manually.  In particular, when the code is looking for 
HADOOP_LIBEXEC_DIR and the location of hadoop-functions.sh... so that it can 
define those functions. ;)

bq. hadoop-env.sh - we should document the GC configuration for max, min, young 
generation starting and max size. 

This should probably be a part of HADOOP-10950.  I'm going to rework the 
generic heap management to allow for setting Xms, get rid of JAVA_HEAP, etc. 
Since this is another (but thankfully smaller) touch everything JIRA, it'd be 
great if you could update that one with what you had in mind.  (I think I know 
what you have in mind, since I suspect this reflects upon the examples I put in 
for NN, etc GC stuff. )

bq. hadoop_usage is in every script (I checked, it is).

Shame on you for ruining my easter egg... but your check wasn't very thorough. 
;)

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
 HADOOP-9902-4.patch, HADOOP-9902-5.patch, HADOOP-9902-6.patch, 
 HADOOP-9902-7.patch, HADOOP-9902-8.patch, 

[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101386#comment-14101386
 ] 

Suresh Srinivas commented on HADOOP-9902:
-

bq. Yes. Those commands were never hooked into the hadoop command in the Apache 
source that I saw... but I guess I could have missed one? In any case, I didn't 
see a reason to have an explicit check for something that never existed as a 
result, especially considering how much other, actually deprecated stuff is 
there.
When you say hooked into hadoop command, do you mean usage? If so, that might 
be a bug. [~brandonli], can bin/hadoop be used to start the nfs gateway and 
portmap? In that case, bin/hadoop may need to include them in the case statement to 
trigger those commands using the hdfs script.

bq. Shame on you for ruining my easter egg... but your check wasn't very 
thorough
Sorry. I know one now. A script named with three letters? Did I miss more?


 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
 HADOOP-9902-4.patch, HADOOP-9902-5.patch, HADOOP-9902-6.patch, 
 HADOOP-9902-7.patch, HADOOP-9902-8.patch, HADOOP-9902-9.patch, 
 HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101386#comment-14101386
 ] 

Suresh Srinivas edited comment on HADOOP-9902 at 8/18/14 9:57 PM:
--

bq. Yes. Those commands were never hooked into the hadoop command in the Apache 
source that I saw... but I guess I could have missed one? In any case, I didn't 
see a reason to have an explicit check for something that never existed as a 
result, especially considering how much other, actually deprecated stuff is 
there.
When you say hooked into hadoop command, do you mean usage? If so, that might 
be a bug. [~brandonli], can bin/hadoop be used to start the nfs gateway and 
portmap? In that case, bin/hadoop may need to include them in the case statement to 
trigger those commands using the hdfs script.

bq. Shame on you for ruining my easter egg... 
Sorry

bq. but your check wasn't very thorough
I know one now. A script named with three letters? Did I miss more?



was (Author: sureshms):
bq. Yes. Those commands were never hooked into the hadoop command in the Apache 
source that I saw... but I guess I could have missed one? In any case, I didn't 
see a reason to have an explicit check for something that never existed as a 
result, especially considering how much other, actually deprecated stuff is 
there.
When you say hooked into hadoop command, do you mean usage? If so, that might 
be a bug. [~brandonli], can bin/hadoop be used to start the nfs gateway and 
portmap? In that case, bin/hadoop may need to include them in the case statement to 
trigger those commands using the hdfs script.

bq. Shame on you for ruining my easter egg... but your check wasn't very 
thorough
Sorry. I know one now. A script named with three letters? Did I miss more?


 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
 HADOOP-9902-4.patch, HADOOP-9902-5.patch, HADOOP-9902-6.patch, 
 HADOOP-9902-7.patch, HADOOP-9902-8.patch, HADOOP-9902-9.patch, 
 HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Attachment: HADOOP-9902-15.patch

I'll commit this after a jenkins run.

-15 fixes the missing line in the copyright in hadoop-config.sh.  That's sort 
of important...

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-2.patch, 
 HADOOP-9902-3.patch, HADOOP-9902-4.patch, HADOOP-9902-5.patch, 
 HADOOP-9902-6.patch, HADOOP-9902-7.patch, HADOOP-9902-8.patch, 
 HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Status: Open  (was: Patch Available)

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-2.patch, 
 HADOOP-9902-3.patch, HADOOP-9902-4.patch, HADOOP-9902-5.patch, 
 HADOOP-9902-6.patch, HADOOP-9902-7.patch, HADOOP-9902-8.patch, 
 HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9601) Support native CRC on byte arrays

2014-08-18 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101388#comment-14101388
 ] 

Gopal V commented on HADOOP-9601:
-

[~tlipcon]: Thanks, I never chased this issue down.

Been a year or so, but this patch fell off my TODO list because of a bad perf 
run with the DataNode.

 Support native CRC on byte arrays
 -

 Key: HADOOP-9601
 URL: https://issues.apache.org/jira/browse/HADOOP-9601
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance, util
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Gopal V
  Labels: perfomance
 Attachments: HADOOP-9601-WIP-01.patch, HADOOP-9601-WIP-02.patch, 
 HADOOP-9601-bench.patch, HADOOP-9601-rebase+benchmark.patch, 
 HADOOP-9601-trunk-rebase-2.patch, HADOOP-9601-trunk-rebase.patch


 When we first implemented the Native CRC code, we only did so for direct byte 
 buffers, because these correspond directly to native heap memory and thus 
 make it easy to access via JNI. We'd generally assumed that accessing byte[] 
 arrays from JNI was not efficient enough, but now that I know more about JNI 
 I don't think that's true -- we just need to make sure that the critical 
 sections where we lock the buffers are short.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Status: Patch Available  (was: Open)

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-2.patch, 
 HADOOP-9902-3.patch, HADOOP-9902-4.patch, HADOOP-9902-5.patch, 
 HADOOP-9902-6.patch, HADOOP-9902-7.patch, HADOOP-9902-8.patch, 
 HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101401#comment-14101401
 ] 

Allen Wittenauer commented on HADOOP-9902:
--

bq. When you say hooked into hadoop command, do you mean usage? 

Nope. I specifically mean 'hadoop portmap' and 'hadoop nfs3' never worked. The 
code always declared them as a deprecated command and to run hdfs instead.

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-2.patch, 
 HADOOP-9902-3.patch, HADOOP-9902-4.patch, HADOOP-9902-5.patch, 
 HADOOP-9902-6.patch, HADOOP-9902-7.patch, HADOOP-9902-8.patch, 
 HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101401#comment-14101401
 ] 

Allen Wittenauer edited comment on HADOOP-9902 at 8/18/14 10:07 PM:


bq. When you say hooked into hadoop command, do you mean usage? 

Nope. I specifically mean 'hadoop portmap' and 'hadoop nfs3' never worked. The 
code always declared them as a deprecated command and to run hdfs instead.

bq. I know one now. A script named with three letters? Did I miss more?

That's my secret. ;)


was (Author: aw):
bq. When you say hooked into hadoop command, do you mean usage? 

Nope. I specifically mean 'hadoop portmap' and 'hadoop nfs3' never worked. The 
code always declared them as a deprecated command and to run hdfs instead.

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-2.patch, 
 HADOOP-9902-3.patch, HADOOP-9902-4.patch, HADOOP-9902-5.patch, 
 HADOOP-9902-6.patch, HADOOP-9902-7.patch, HADOOP-9902-8.patch, 
 HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101406#comment-14101406
 ] 

Suresh Srinivas commented on HADOOP-9902:
-

bq. Nope. I specifically mean 'hadoop portmap' and 'hadoop nfs3' never worked. 
The code always declared them as a deprecated command and to run hdfs instead.

Doesn't the following from the old script print a warning and delegate nfs3 and 
portmap to the hdfs script?
{noformat}
namenode|secondarynamenode|datanode|dfs|dfsadmin|fsck|balancer|fetchdt|oiv|dfsgroups|portmap|nfs3)
    echo "DEPRECATED: Use of this script to execute hdfs command is deprecated." 1>&2
    echo "Instead use the hdfs command for it." 1>&2
    echo "" 1>&2
    #try to locate hdfs and if present, delegate to it.
    shift
    if [ -f "${HADOOP_HDFS_HOME}"/bin/hdfs ]; then
      exec "${HADOOP_HDFS_HOME}"/bin/hdfs ${COMMAND/dfsgroups/groups} "$@"
    elif [ -f "${HADOOP_PREFIX}"/bin/hdfs ]; then
      exec "${HADOOP_PREFIX}"/bin/hdfs ${COMMAND/dfsgroups/groups} "$@"
    else
      echo "HADOOP_HDFS_HOME not found!"
      exit 1
    fi
    ;;
{noformat}

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-2.patch, 
 HADOOP-9902-3.patch, HADOOP-9902-4.patch, HADOOP-9902-5.patch, 
 HADOOP-9902-6.patch, HADOOP-9902-7.patch, HADOOP-9902-8.patch, 
 HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101413#comment-14101413
 ] 

Allen Wittenauer commented on HADOOP-9902:
--

bq. Doesn't the following from old script print warning and delegate nfs3 and 
portmap to hdfs script?

It does.  But if you notice, all of those other commands were in Hadoop 1.x... 
before the hdfs command existed.  portmap and nfs3 came way way way after that. 
 In other words, running e.g. 'hadoop portmap' as a command was never 
documented as valid.  So the only way someone would run that would be 
accidentally.  If we do that for every command that someone might accidentally 
run, we're gonna be in for a bad time.


 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-2.patch, 
 HADOOP-9902-3.patch, HADOOP-9902-4.patch, HADOOP-9902-5.patch, 
 HADOOP-9902-6.patch, HADOOP-9902-7.patch, HADOOP-9902-8.patch, 
 HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101420#comment-14101420
 ] 

Allen Wittenauer commented on HADOOP-9902:
--

Looks like I'm wrong:

http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html

Why oh why did we document this using deprecated usage?  I'll make a -16 that 
puts these back and file a jira to fix the documentation. :(



 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-2.patch, 
 HADOOP-9902-3.patch, HADOOP-9902-4.patch, HADOOP-9902-5.patch, 
 HADOOP-9902-6.patch, HADOOP-9902-7.patch, HADOOP-9902-8.patch, 
 HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101424#comment-14101424
 ] 

Brandon Li commented on HADOOP-9902:


{quote} ... running e.g. 'hadoop portmap' as a command was never documented as 
valid.{quote}
We actually documented it in the 2.3 release:
http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html

But we can update the NFS doc to use the hdfs script instead from 3.0 onward.



 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-2.patch, 
 HADOOP-9902-3.patch, HADOOP-9902-4.patch, HADOOP-9902-5.patch, 
 HADOOP-9902-6.patch, HADOOP-9902-7.patch, HADOOP-9902-8.patch, 
 HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101425#comment-14101425
 ] 

Allen Wittenauer commented on HADOOP-9902:
--

HDFS-6866 filed for the portmap and nfs3 option.

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-2.patch, 
 HADOOP-9902-3.patch, HADOOP-9902-4.patch, HADOOP-9902-5.patch, 
 HADOOP-9902-6.patch, HADOOP-9902-7.patch, HADOOP-9902-8.patch, 
 HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101425#comment-14101425
 ] 

Allen Wittenauer edited comment on HADOOP-9902 at 8/18/14 10:29 PM:


HDFS-6868 filed for the portmap and nfs3 option.


was (Author: aw):
HDFS-6866 filed for the portmap and nfs3 option.

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-2.patch, 
 HADOOP-9902-3.patch, HADOOP-9902-4.patch, HADOOP-9902-5.patch, 
 HADOOP-9902-6.patch, HADOOP-9902-7.patch, HADOOP-9902-8.patch, 
 HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Status: Open  (was: Patch Available)

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-2.patch, 
 HADOOP-9902-3.patch, HADOOP-9902-4.patch, HADOOP-9902-5.patch, 
 HADOOP-9902-6.patch, HADOOP-9902-7.patch, HADOOP-9902-8.patch, 
 HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Status: Patch Available  (was: Open)

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-16.patch, 
 HADOOP-9902-2.patch, HADOOP-9902-3.patch, HADOOP-9902-4.patch, 
 HADOOP-9902-5.patch, HADOOP-9902-6.patch, HADOOP-9902-7.patch, 
 HADOOP-9902-8.patch, HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, 
 hadoop-9902-1.patch, more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Attachment: HADOOP-9902-16.patch

-16: re-deprecate the previously not deprecated but documented hadoop nfs3 and 
hadoop portmap subcommands

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-16.patch, 
 HADOOP-9902-2.patch, HADOOP-9902-3.patch, HADOOP-9902-4.patch, 
 HADOOP-9902-5.patch, HADOOP-9902-6.patch, HADOOP-9902-7.patch, 
 HADOOP-9902-8.patch, HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, 
 hadoop-9902-1.patch, more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10970) Cleanup KMS configuration keys

2014-08-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-10970:
-

Attachment: hadoop-10970.003.patch

I'd like to squeeze in improvements for the kms-acls.xml file as well; the new 
patch just changes that file.

 Cleanup KMS configuration keys
 --

 Key: HADOOP-10970
 URL: https://issues.apache.org/jira/browse/HADOOP-10970
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-10970.001.patch, hadoop-10970.002.patch, 
 hadoop-10970.003.patch


 It'd be nice to add descriptions to the config keys in kms-site.xml.
 Also, it'd be good to rename key.provider.path to key.provider.uri for 
 clarity, or just drop .path.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10970) Cleanup KMS configuration keys

2014-08-18 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101536#comment-14101536
 ] 

Alejandro Abdelnur commented on HADOOP-10970:
-

+1 again

 Cleanup KMS configuration keys
 --

 Key: HADOOP-10970
 URL: https://issues.apache.org/jira/browse/HADOOP-10970
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-10970.001.patch, hadoop-10970.002.patch, 
 hadoop-10970.003.patch


 It'd be nice to add descriptions to the config keys in kms-site.xml.
 Also, it'd be good to rename key.provider.path to key.provider.uri for 
 clarity, or just drop .path.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10880) Move HTTP delegation tokens out of URL querystring to a header

2014-08-18 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14101540#comment-14101540
 ] 

Alejandro Abdelnur commented on HADOOP-10880:
-

I've talked with [~daryn] over the phone and he'd be OK with keeping the scope of 
this JIRA as initially intended, not adding the digest stuff to it, for reasons 
along the lines of the ones mentioned in my previous comment.

 Move HTTP delegation tokens out of URL querystring to a header
 --

 Key: HADOOP-10880
 URL: https://issues.apache.org/jira/browse/HADOOP-10880
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Blocker
 Attachments: HADOOP-10880.patch, HADOOP-10880.patch, 
 HADOOP-10880.patch


 Following up on a discussion in HADOOP-10799.
 Because URLs are often logged, delegation tokens may end up in LOG files 
 while they are still valid. 
 We should move the tokens to a header.
 We should still support tokens in the querystring for backwards compatibility.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10880) Move HTTP delegation tokens out of URL querystring to a header

2014-08-18 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10880:


Status: Patch Available  (was: Open)

 Move HTTP delegation tokens out of URL querystring to a header
 --

 Key: HADOOP-10880
 URL: https://issues.apache.org/jira/browse/HADOOP-10880
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Blocker
 Attachments: HADOOP-10880.patch, HADOOP-10880.patch, 
 HADOOP-10880.patch


 Following up on a discussion in HADOOP-10799.
 Because URLs are often logged, delegation tokens may end up in LOG files 
 while they are still valid. 
 We should move the tokens to a header.
 We should still support tokens in the querystring for backwards compatibility.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10976) moving the source code of hadoop-tools docs to the directry under hadoop-tools

2014-08-18 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HADOOP-10976:
-

 Summary: moving the source code of hadoop-tools docs to the 
directry under hadoop-tools
 Key: HADOOP-10976
 URL: https://issues.apache.org/jira/browse/HADOOP-10976
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Masatake Iwasaki
Priority: Minor


Some of the doc files of hadoop-tools are placed in the mapreduce project. It 
should be moved for the ease of maintenance.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10976) moving the source code of hadoop-tools docs to the directry under hadoop-tools

2014-08-18 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-10976:
--

Attachment: HADOOP-10976-0.patch

Attaching a patch.
I fixed the site index too.

 moving the source code of hadoop-tools docs to the directry under hadoop-tools
 --

 Key: HADOOP-10976
 URL: https://issues.apache.org/jira/browse/HADOOP-10976
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Masatake Iwasaki
Priority: Minor
 Attachments: HADOOP-10976-0.patch


 Some of the doc files of hadoop-tools are placed in the mapreduce project. It 
 should be moved for the ease of maintenance.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-9870) Mixed configurations for JVM -Xmx in hadoop command

2014-08-18 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-9870.
--

Resolution: Duplicate

Closing this as HADOOP-9902 contains a fix for this issue.

 Mixed configurations for JVM -Xmx in hadoop command
 ---

 Key: HADOOP-9870
 URL: https://issues.apache.org/jira/browse/HADOOP-9870
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
 Attachments: HADOOP-9870.patch, HADOOP-9870.patch, HADOOP-9870.patch


 When we use hadoop command to launch a class, there are two places setting 
 the -Xmx configuration.
 *1*. The first place is located in file 
 {{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}.
 {code}
 exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
 {code}
 Here $JAVA_HEAP_MAX is configured in hadoop-config.sh 
 ({{hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh}}). The 
 default value is -Xmx1000m.
 *2*. The second place is set with $HADOOP_OPTS in file 
 {{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}.
 {code}
 HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
 {code}
 Here $HADOOP_CLIENT_OPTS is set in hadoop-env.sh 
 ({{hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh}})
 {code}
 export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
 {code}
 Currently the final default java command looks like:
 {code}java -Xmx1000m  -Xmx512m CLASS_NAME ARGUMENTS{code}
 And if users also specify the -Xmx in the $HADOOP_CLIENT_OPTS, there will be 
 three -Xmx configurations. 
 The hadoop setup tutorial only discusses hadoop-env.sh, and it looks like 
 users should not make any changes in hadoop-config.sh.
 We should make hadoop smart enough to choose the right one before launching the java 
 command, instead of leaving it to the JVM to make the decision.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HADOOP-10976) moving the source code of hadoop-tools docs to the directry under hadoop-tools

2014-08-18 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki reassigned HADOOP-10976:
-

Assignee: Masatake Iwasaki

 moving the source code of hadoop-tools docs to the directry under hadoop-tools
 --

 Key: HADOOP-10976
 URL: https://issues.apache.org/jira/browse/HADOOP-10976
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HADOOP-10976-0.patch


 Some of the doc files of hadoop-tools are placed in the mapreduce project. It 
 should be moved for the ease of maintenance.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10976) moving the source code of hadoop-tools docs to the directry under hadoop-tools

2014-08-18 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-10976:
--

Status: Patch Available  (was: Open)

 moving the source code of hadoop-tools docs to the directry under hadoop-tools
 --

 Key: HADOOP-10976
 URL: https://issues.apache.org/jira/browse/HADOOP-10976
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HADOOP-10976-0.patch


 Some of the doc files of hadoop-tools are placed in the mapreduce project. It 
 should be moved for the ease of maintenance.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

