[jira] [Commented] (HADOOP-10139) Single Cluster Setup document is unfriendly

2014-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886373#comment-13886373
 ] 

Hadoop QA commented on HADOOP-10139:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12626067/HADOOP-10139.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3504//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3504//console

This message is automatically generated.

 Single Cluster Setup document is unfriendly
 ---

 Key: HADOOP-10139
 URL: https://issues.apache.org/jira/browse/HADOOP-10139
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-10139.2.patch, HADOOP-10139.3.patch, 
 HADOOP-10139.patch


 The document should be understandable to a newcomer, because the first thing a 
 newcomer will do is set up a single node.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10309) S3 block filesystem should more aggressively delete temporary files

2014-01-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10309:


Status: Patch Available  (was: Open)

 S3 block filesystem should more aggressively delete temporary files
 ---

 Key: HADOOP-10309
 URL: https://issues.apache.org/jira/browse/HADOOP-10309
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Joe Kelley
Priority: Minor
 Attachments: HADOOP-10309.patch


 The S3 FileSystem reading implementation downloads block files into a 
 configurable temporary directory. deleteOnExit() is called on these files, so 
 they are deleted when the JVM exits.
 However, JVM reuse can lead to JVMs that stick around for a very long time. 
 This can cause these temporary files to build up indefinitely and, in the 
 worst case, fill up the local directory.
 After a block file has been read, there is no reason to keep it around. It 
 should be deleted.
 Writing to the S3 FileSystem already has this behavior; after a temporary 
 block file is written and uploaded to S3, it is deleted immediately; there is 
 no need to wait for the JVM to exit.
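The reading path could mirror the write path with something along these lines — a minimal, self-contained sketch (the class and method names are illustrative, not the actual S3 FileSystem code): delete the temporary block file as soon as its stream is closed, rather than relying on deleteOnExit().

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

// Illustrative sketch, not Hadoop's actual S3 code: wrap the temp block
// file's stream so the file is deleted as soon as the stream is closed,
// instead of waiting for File.deleteOnExit() to fire at JVM shutdown.
class SelfDeletingBlockStream extends FileInputStream {
    private final File blockFile;

    SelfDeletingBlockStream(File blockFile) throws IOException {
        super(blockFile);
        this.blockFile = blockFile;
    }

    @Override
    public void close() throws IOException {
        try {
            super.close();
        } finally {
            // Delete eagerly: a long-lived (reused) JVM never reaches its
            // deleteOnExit hooks, so temp block files would pile up.
            blockFile.delete();
        }
    }
}
```

deleteOnExit() only runs at normal JVM termination, which is exactly why JVM reuse defeats it; eager deletion on close sidesteps that entirely.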





[jira] [Commented] (HADOOP-10309) S3 block filesystem should more aggressively delete temporary files

2014-01-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886448#comment-13886448
 ] 

Steve Loughran commented on HADOOP-10309:
-

Submitting patch, though since Jenkins doesn't run the S3 tests it will need a 
manual run-through. 

 S3 block filesystem should more aggressively delete temporary files
 ---

 Key: HADOOP-10309
 URL: https://issues.apache.org/jira/browse/HADOOP-10309
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Joe Kelley
Priority: Minor
 Attachments: HADOOP-10309.patch


 The S3 FileSystem reading implementation downloads block files into a 
 configurable temporary directory. deleteOnExit() is called on these files, so 
 they are deleted when the JVM exits.
 However, JVM reuse can lead to JVMs that stick around for a very long time. 
 This can cause these temporary files to build up indefinitely and, in the 
 worst case, fill up the local directory.
 After a block file has been read, there is no reason to keep it around. It 
 should be deleted.
 Writing to the S3 FileSystem already has this behavior; after a temporary 
 block file is written and uploaded to S3, it is deleted immediately; there is 
 no need to wait for the JVM to exit.





[jira] [Commented] (HADOOP-10309) S3 block filesystem should more aggressively delete temporary files

2014-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886472#comment-13886472
 ] 

Hadoop QA commented on HADOOP-10309:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12625989/HADOOP-10309.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3505//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3505//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3505//console

This message is automatically generated.

 S3 block filesystem should more aggressively delete temporary files
 ---

 Key: HADOOP-10309
 URL: https://issues.apache.org/jira/browse/HADOOP-10309
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Joe Kelley
Priority: Minor
 Attachments: HADOOP-10309.patch


 The S3 FileSystem reading implementation downloads block files into a 
 configurable temporary directory. deleteOnExit() is called on these files, so 
 they are deleted when the JVM exits.
 However, JVM reuse can lead to JVMs that stick around for a very long time. 
 This can cause these temporary files to build up indefinitely and, in the 
 worst case, fill up the local directory.
 After a block file has been read, there is no reason to keep it around. It 
 should be deleted.
 Writing to the S3 FileSystem already has this behavior; after a temporary 
 block file is written and uploaded to S3, it is deleted immediately; there is 
 no need to wait for the JVM to exit.





[jira] [Commented] (HADOOP-10112) har file listing doesn't work with wild card

2014-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886502#comment-13886502
 ] 

Hudson commented on HADOOP-10112:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #466 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/466/])
Remove HADOOP-10112 from CHANGES.txt (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1562566)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 har file listing  doesn't work with wild card
 -

 Key: HADOOP-10112
 URL: https://issues.apache.org/jira/browse/HADOOP-10112
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.3.0

 Attachments: HADOOP-10112.004.patch


 [test@test001 root]$ hdfs dfs -ls har:///tmp/filename.har/*
 -ls: Can not create a Path from an empty string
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 It works without *.





[jira] [Commented] (HADOOP-10305) Add rpc.metrics.quantile.enable and rpc.metrics.percentiles.intervals to core-default.xml

2014-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886503#comment-13886503
 ] 

Hudson commented on HADOOP-10305:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #466 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/466/])
HADOOP-10305. Add rpc.metrics.quantile.enable and 
rpc.metrics.percentiles.intervals to core-default.xml. Contributed by Akira 
Ajisaka. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1562659)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


 Add rpc.metrics.quantile.enable and rpc.metrics.percentiles.intervals to 
 core-default.xml
 -

 Key: HADOOP-10305
 URL: https://issues.apache.org/jira/browse/HADOOP-10305
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.3.0

 Attachments: HADOOP-10305.patch


 rpc.metrics.quantile.enable and rpc.metrics.percentiles.intervals were 
 added in HADOOP-9420, but these two parameters are not written in 
 core-default.xml.
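 For reference, the missing entries would presumably take roughly this shape in 
 core-default.xml; the default values and description wording below are 
 assumptions based on how HADOOP-9420 describes the parameters, not text copied 
 from the patch:

```xml
<property>
  <name>rpc.metrics.quantile.enable</name>
  <value>false</value>
  <description>
    If true, and rpc.metrics.percentiles.intervals is set, percentile
    latency metrics are published for the RPC layer.
  </description>
</property>

<property>
  <name>rpc.metrics.percentiles.intervals</name>
  <value></value>
  <description>
    A comma-separated list of rollover intervals, in seconds, over which
    percentile latency metrics are computed. Empty by default.
  </description>
</property>
```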





[jira] [Commented] (HADOOP-10305) Add rpc.metrics.quantile.enable and rpc.metrics.percentiles.intervals to core-default.xml

2014-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886569#comment-13886569
 ] 

Hudson commented on HADOOP-10305:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1683 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1683/])
HADOOP-10305. Add rpc.metrics.quantile.enable and 
rpc.metrics.percentiles.intervals to core-default.xml. Contributed by Akira 
Ajisaka. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1562659)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


 Add rpc.metrics.quantile.enable and rpc.metrics.percentiles.intervals to 
 core-default.xml
 -

 Key: HADOOP-10305
 URL: https://issues.apache.org/jira/browse/HADOOP-10305
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.3.0

 Attachments: HADOOP-10305.patch


 rpc.metrics.quantile.enable and rpc.metrics.percentiles.intervals were 
 added in HADOOP-9420, but these two parameters are not written in 
 core-default.xml.





[jira] [Commented] (HADOOP-10112) har file listing doesn't work with wild card

2014-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886568#comment-13886568
 ] 

Hudson commented on HADOOP-10112:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1683 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1683/])
Remove HADOOP-10112 from CHANGES.txt (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1562566)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 har file listing  doesn't work with wild card
 -

 Key: HADOOP-10112
 URL: https://issues.apache.org/jira/browse/HADOOP-10112
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.3.0

 Attachments: HADOOP-10112.004.patch


 [test@test001 root]$ hdfs dfs -ls har:///tmp/filename.har/*
 -ls: Can not create a Path from an empty string
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 It works without *.





[jira] [Commented] (HADOOP-10112) har file listing doesn't work with wild card

2014-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886586#comment-13886586
 ] 

Hudson commented on HADOOP-10112:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1658 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1658/])
Remove HADOOP-10112 from CHANGES.txt (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1562566)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 har file listing  doesn't work with wild card
 -

 Key: HADOOP-10112
 URL: https://issues.apache.org/jira/browse/HADOOP-10112
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.3.0

 Attachments: HADOOP-10112.004.patch


 [test@test001 root]$ hdfs dfs -ls har:///tmp/filename.har/*
 -ls: Can not create a Path from an empty string
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 It works without *.





[jira] [Commented] (HADOOP-10305) Add rpc.metrics.quantile.enable and rpc.metrics.percentiles.intervals to core-default.xml

2014-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886587#comment-13886587
 ] 

Hudson commented on HADOOP-10305:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1658 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1658/])
HADOOP-10305. Add rpc.metrics.quantile.enable and 
rpc.metrics.percentiles.intervals to core-default.xml. Contributed by Akira 
Ajisaka. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1562659)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


 Add rpc.metrics.quantile.enable and rpc.metrics.percentiles.intervals to 
 core-default.xml
 -

 Key: HADOOP-10305
 URL: https://issues.apache.org/jira/browse/HADOOP-10305
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.3.0

 Attachments: HADOOP-10305.patch


 rpc.metrics.quantile.enable and rpc.metrics.percentiles.intervals were 
 added in HADOOP-9420, but these two parameters are not written in 
 core-default.xml.





[jira] [Created] (HADOOP-10312) Shell.ExitCodeException to have more useful toString

2014-01-30 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-10312:
---

 Summary: Shell.ExitCodeException to have more useful toString
 Key: HADOOP-10312
 URL: https://issues.apache.org/jira/browse/HADOOP-10312
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Steve Loughran
Priority: Minor


Shell's ExitCodeException doesn't include the exit code in its toString value, 
so it isn't that useful in diagnosing container start failures in YARN





[jira] [Commented] (HADOOP-10312) Shell.ExitCodeException to have more useful toString

2014-01-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886610#comment-13886610
 ] 

Steve Loughran commented on HADOOP-10312:
-

Example

{code}
2014-01-30 14:20:12,042 [AMRM Callback Handler Thread] INFO  HoyaAppMaster.yarn 
(HoyaAppMaster.java:onContainersCompleted(839)) - Container Completion for 
containerID=container_1391075472386_0004_01_32, state=COMPLETE, 
exitStatus=1, diagnostics=Exception from container-launch: 
org.apache.hadoop.util.Shell$ExitCodeException: 
org.apache.hadoop.util.Shell$ExitCodeException: 
at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
{code}

It failed, but it's not obvious why.
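A fix could take roughly this shape — a self-contained sketch, not the actual Shell.java change: keep the exit code as a field on the exception and surface it in toString(), so a trace like the one above says which code the launch failed with.

```java
// Illustrative sketch of the proposed improvement (not the real
// org.apache.hadoop.util.Shell.ExitCodeException): carry the exit code
// and include it in toString(), which is what ends up in stack traces.
class ExitCodeException extends java.io.IOException {
    private final int exitCode;

    ExitCodeException(int exitCode, String message) {
        super(message);
        this.exitCode = exitCode;
    }

    int getExitCode() {
        return exitCode;
    }

    @Override
    public String toString() {
        // Exception.toString() normally prints "ClassName: message";
        // prepend the exit code so diagnostics are self-explanatory.
        return getClass().getName() + ": exitCode=" + exitCode
            + ": " + getMessage();
    }
}
```

With that change, the diagnostics in the log above would read something like `ExitCodeException: exitCode=1: ...` instead of a bare class name.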

 Shell.ExitCodeException to have more useful toString
 

 Key: HADOOP-10312
 URL: https://issues.apache.org/jira/browse/HADOOP-10312
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Steve Loughran
Priority: Minor

 Shell's ExitCodeException doesn't include the exit code in its toString 
 value, so it isn't that useful in diagnosing container start failures in YARN





[jira] [Commented] (HADOOP-10310) SaslRpcServer should be initialized even when no secret manager present

2014-01-30 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886678#comment-13886678
 ] 

Daryn Sharp commented on HADOOP-10310:
--

Oops.  +1

 SaslRpcServer should be initialized even when no secret manager present
 ---

 Key: HADOOP-10310
 URL: https://issues.apache.org/jira/browse/HADOOP-10310
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
Priority: Blocker
 Attachments: HADOOP-10310.patch


 HADOOP-8783 made a change which caused the SaslRpcServer not to be 
 initialized if there is no secret manager present. This works fine for most 
 Hadoop daemons because they need a secret manager to do their business, but 
 JournalNodes do not. The result of this is that JournalNodes are broken and 
 will not handle RPCs in a Kerberos-enabled environment, since the 
 SaslRpcServer will not be initialized.
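 The fix presumably decouples SASL initialization from the presence of a 
 secret manager. Schematically (the names below are illustrative, not the 
 actual Server.java logic):

```java
// Illustrative before/after of the initialization guard (schematic,
// not the actual org.apache.hadoop.ipc.Server code):
class RpcServerSetup {
    // Before: SASL was only initialized when a secret manager existed,
    // so Kerberos-only daemons such as JournalNodes skipped it and
    // could not handle secure RPCs.
    static boolean shouldInitSaslBefore(boolean securityEnabled,
                                        boolean hasSecretManager) {
        return securityEnabled && hasSecretManager;
    }

    // After: security being enabled is sufficient. A secret manager is
    // only needed for token-based auth, not for Kerberos itself, so it
    // no longer gates SASL setup (the parameter is kept for symmetry).
    static boolean shouldInitSaslAfter(boolean securityEnabled,
                                       boolean hasSecretManager) {
        return securityEnabled;
    }
}
```

The JournalNode case is exactly the cell where the two predicates differ: security on, no secret manager.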





[jira] [Commented] (HADOOP-10310) SaslRpcServer should be initialized even when no secret manager present

2014-01-30 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886685#comment-13886685
 ] 

Aaron T. Myers commented on HADOOP-10310:
-

Thanks a lot for the reviews, Andrew and Daryn. I'm going to commit this 
momentarily.

 SaslRpcServer should be initialized even when no secret manager present
 ---

 Key: HADOOP-10310
 URL: https://issues.apache.org/jira/browse/HADOOP-10310
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
Priority: Blocker
 Attachments: HADOOP-10310.patch


 HADOOP-8783 made a change which caused the SaslRpcServer not to be 
 initialized if there is no secret manager present. This works fine for most 
 Hadoop daemons because they need a secret manager to do their business, but 
 JournalNodes do not. The result of this is that JournalNodes are broken and 
 will not handle RPCs in a Kerberos-enabled environment, since the 
 SaslRpcServer will not be initialized.





[jira] [Updated] (HADOOP-10310) SaslRpcServer should be initialized even when no secret manager present

2014-01-30 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-10310:


   Resolution: Fixed
Fix Version/s: 2.3.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk, branch-2, and branch-2.3.

Thanks again for the prompt reviews, gents. Much appreciated.

 SaslRpcServer should be initialized even when no secret manager present
 ---

 Key: HADOOP-10310
 URL: https://issues.apache.org/jira/browse/HADOOP-10310
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
Priority: Blocker
 Fix For: 2.3.0

 Attachments: HADOOP-10310.patch


 HADOOP-8783 made a change which caused the SaslRpcServer not to be 
 initialized if there is no secret manager present. This works fine for most 
 Hadoop daemons because they need a secret manager to do their business, but 
 JournalNodes do not. The result of this is that JournalNodes are broken and 
 will not handle RPCs in a Kerberos-enabled environment, since the 
 SaslRpcServer will not be initialized.





[jira] [Commented] (HADOOP-10310) SaslRpcServer should be initialized even when no secret manager present

2014-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886692#comment-13886692
 ] 

Hudson commented on HADOOP-10310:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5068 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5068/])
HADOOP-10310. SaslRpcServer should be initialized even when no secret manager 
present. Contributed by Aaron T. Myers. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1562863)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java


 SaslRpcServer should be initialized even when no secret manager present
 ---

 Key: HADOOP-10310
 URL: https://issues.apache.org/jira/browse/HADOOP-10310
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
Priority: Blocker
 Fix For: 2.3.0

 Attachments: HADOOP-10310.patch


 HADOOP-8783 made a change which caused the SaslRpcServer not to be 
 initialized if there is no secret manager present. This works fine for most 
 Hadoop daemons because they need a secret manager to do their business, but 
 JournalNodes do not. The result of this is that JournalNodes are broken and 
 will not handle RPCs in a Kerberos-enabled environment, since the 
 SaslRpcServer will not be initialized.





[jira] [Updated] (HADOOP-10295) Allow distcp to automatically identify the checksum type of source files and use it for the target

2014-01-30 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HADOOP-10295:


 Component/s: tools/distcp
Hadoop Flags: Reviewed

+1 patch looks good.

 Allow distcp to automatically identify the checksum type of source files and 
 use it for the target
 --

 Key: HADOOP-10295
 URL: https://issues.apache.org/jira/browse/HADOOP-10295
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Affects Versions: 2.2.0
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HADOOP-10295.000.patch, HADOOP-10295.002.patch, 
 hadoop-10295.patch


 Currently while doing distcp, users can use -Ddfs.checksum.type to specify 
 the checksum type in the target FS. This works fine if all the source files 
 are using the same checksum type. If files in the source cluster have mixed 
 checksum types, users have to either use -skipcrccheck or hit a checksum 
 mismatch exception. Thus we may need to consider adding a new option to 
 distcp so that it can automatically identify the original checksum type of 
 each source file and use the same checksum type in the target FS. 
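 The idea can be sketched as follows; the interface and class names here are 
 hypothetical stand-ins, not Hadoop's FileSystem or distcp API. Instead of one 
 global -Ddfs.checksum.type for the whole copy, look up each source file's own 
 checksum type and plan the target file with the same one:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for "ask the source FS what checksum type a
// file was written with" (e.g. "CRC32" vs "CRC32C").
interface ChecksumSource {
    String checksumTypeOf(String path);
}

class PerFileChecksumCopier {
    // For each source path, record the checksum type the target side
    // should use: simply mirror the source file's own type, so mixed
    // clusters need neither -skipcrccheck nor a single global setting.
    static Map<String, String> planTargetChecksums(ChecksumSource src,
                                                   String... paths) {
        Map<String, String> plan = new HashMap<>();
        for (String p : paths) {
            plan.put(p, src.checksumTypeOf(p));
        }
        return plan;
    }
}
```

With a per-file plan like this, the post-copy CRC comparison always compares like with like, which is what makes the mismatch exception go away.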





[jira] [Updated] (HADOOP-10085) CompositeService should allow adding services while being inited

2014-01-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10085:


Target Version/s:   (was: )
  Status: Open  (was: Patch Available)

 CompositeService should allow adding services while being inited
 

 Key: HADOOP-10085
 URL: https://issues.apache.org/jira/browse/HADOOP-10085
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Steve Loughran
 Attachments: HADOOP-10085-002.patch, HADOOP-10085-003.patch


 We can add services to a CompositeService. However, if we do that while 
 initing the CompositeService, it leads to a ConcurrentModificationException.
 It would be nice to allow adding services even during the init of 
 CompositeService.
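 A minimal reproduction-and-fix sketch (not the actual CompositeService code): 
 iterating an ArrayList while a child's init adds another service throws 
 ConcurrentModificationException, whereas a CopyOnWriteArrayList tolerates it, 
 because its iterator works on the snapshot taken when iteration began.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Schematic stand-in for CompositeService: services are modeled as
// plain Runnables whose run() plays the role of serviceInit().
class MiniCompositeService {
    // CopyOnWriteArrayList iterators never throw
    // ConcurrentModificationException; they see a snapshot of the list.
    private final List<Runnable> services = new CopyOnWriteArrayList<>();

    void addService(Runnable s) {
        services.add(s);
    }

    void init() {
        for (Runnable s : services) {
            s.run(); // a service may safely call addService() here
        }
    }
}
```

Note the trade-off: a service added mid-init is not visited by the in-flight iteration, so the caller (or a follow-up pass) still has to decide whether late-added children should themselves be inited — which is the design question this JIRA is really about.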





[jira] [Updated] (HADOOP-10085) CompositeService should allow adding services while being inited

2014-01-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10085:


Attachment: HADOOP-10085-004.patch

patch rebased to trunk

 CompositeService should allow adding services while being inited
 

 Key: HADOOP-10085
 URL: https://issues.apache.org/jira/browse/HADOOP-10085
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Steve Loughran
 Attachments: HADOOP-10085-002.patch, HADOOP-10085-003.patch, 
 HADOOP-10085-004.patch


 We can add services to a CompositeService. However, if we do that while 
 initing the CompositeService, it leads to a ConcurrentModificationException.
 It would be nice to allow adding services even during the init of 
 CompositeService.





[jira] [Updated] (HADOOP-10085) CompositeService should allow adding services while being inited

2014-01-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10085:


Status: Patch Available  (was: Open)

 CompositeService should allow adding services while being inited
 

 Key: HADOOP-10085
 URL: https://issues.apache.org/jira/browse/HADOOP-10085
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Steve Loughran
 Attachments: HADOOP-10085-002.patch, HADOOP-10085-003.patch, 
 HADOOP-10085-004.patch


 We can add services to a CompositeService. However, if we do that while 
 initing the CompositeService, it leads to a ConcurrentModificationException.
 It would be nice to allow adding services even during the init of 
 CompositeService.





[jira] [Commented] (HADOOP-10307) Support multiple Authentication mechanisms for HTTP

2014-01-30 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886775#comment-13886775
 ] 

Benoy Antony commented on HADOOP-10307:
---

@rkanter, using _AltKerberosAuthenticationHandler_ seems like a way to 
introduce multiple mechanisms as if they were a single mechanism. This will 
result in one implementation which knows about all mechanisms. I'm not sure 
that's the standard/right pattern for plugging multiple implementations of an 
interface into a framework. 

The approach we have used is to specify the different _AuthenticationHandler_ 
implementations directly in the configuration. The default implementation can 
be specified by keeping it at the beginning of the list. 
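The configuration-list approach described above can be sketched like this; the interface and names are illustrative, not the real AuthenticationFilter API. Each configured handler claims an HTTP auth scheme, requests are dispatched to the first handler that matches, and the first entry in the list doubles as the default:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for an AuthenticationHandler: it reports the
// scheme(s) it supports and a name for identification.
interface AuthHandler {
    boolean supports(String scheme);
    String name();
}

class MultiSchemeAuth {
    private final List<AuthHandler> handlers;

    // Handlers are passed in configuration order; the first one is the
    // default used when no handler claims the request's scheme.
    MultiSchemeAuth(AuthHandler... configured) {
        this.handlers = Arrays.asList(configured);
    }

    AuthHandler select(String scheme) {
        for (AuthHandler h : handlers) {
            if (h.supports(scheme)) {
                return h;
            }
        }
        return handlers.get(0); // first configured handler is the default
    }
}
```

This keeps each mechanism in its own implementation, rather than one handler that knows about all of them.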


 Support multiple Authentication mechanisms for HTTP
 ---

 Key: HADOOP-10307
 URL: https://issues.apache.org/jira/browse/HADOOP-10307
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10307.patch


 Currently it is possible to specify a custom Authentication Handler  for HTTP 
 authentication.  
 We have a requirement to support multiple mechanisms  to authenticate HTTP 
 access.





[jira] [Commented] (HADOOP-10085) CompositeService should allow adding services while being inited

2014-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886789#comment-13886789
 ] 

Hadoop QA commented on HADOOP-10085:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12626120/HADOOP-10085-004.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3506//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3506//console

This message is automatically generated.

 CompositeService should allow adding services while being inited
 

 Key: HADOOP-10085
 URL: https://issues.apache.org/jira/browse/HADOOP-10085
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Steve Loughran
 Attachments: HADOOP-10085-002.patch, HADOOP-10085-003.patch, 
 HADOOP-10085-004.patch


 We can add services to a CompositeService. However, if we do that while 
 initing the CompositeService, it leads to a ConcurrentModificationException.
 It would be nice to allow adding services even during the init of 
 CompositeService.
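The failure and one possible fix can be sketched with plain Java collections (the class and method names below are illustrative, not Hadoop's actual CompositeService API):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

// Sketch of the HADOOP-10085 failure mode with plain collections.
class CompositeInitSketch {
    interface Service { void init(CompositeInitSketch parent); }

    private final List<Service> services = new ArrayList<>();

    void add(Service s) { services.add(s); }

    // Broken: the for-each iterator fails fast if a child's init()
    // adds a new service to the same list.
    void initAllBroken() {
        for (Service s : services) {
            s.init(this);
        }
    }

    // Tolerant: an index-based loop re-reads size() on every pass,
    // so services added during init are initialized too.
    void initAllTolerant() {
        for (int i = 0; i < services.size(); i++) {
            services.get(i).init(this);
        }
    }

    int count() { return services.size(); }

    public static void main(String[] args) {
        CompositeInitSketch c = new CompositeInitSketch();
        c.add(parent -> parent.add(child -> { /* no-op child */ }));
        try {
            c.initAllBroken();
        } catch (ConcurrentModificationException e) {
            System.out.println("broken loop: " + e.getClass().getSimpleName());
        }
    }
}
```

The index-based loop is only one way to tolerate additions; iterating over a copy of the list would avoid the exception too, but would silently skip services added mid-init.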





[jira] [Commented] (HADOOP-10311) Cleanup vendor names from the code base

2014-01-30 Thread Doug Cutting (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886809#comment-13886809
 ] 

Doug Cutting commented on HADOOP-10311:
---

+1 We should generally avoid vendor names in our products, as they might appear 
to be endorsements or otherwise meant to bias users.

 Cleanup vendor names from the code base
 ---

 Key: HADOOP-10311
 URL: https://issues.apache.org/jira/browse/HADOOP-10311
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Suresh Srinivas
Priority: Blocker







[jira] [Commented] (HADOOP-10158) SPNEGO should work with multiple interfaces/SPNs.

2014-01-30 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886813#comment-13886813
 ] 

Benoy Antony commented on HADOOP-10158:
---

[~daryn] I have reviewed the patch. Thanks for the effort put in to satisfy our 
requirements.

This will not work for us, since the patch canonicalizes the hostname with the 
call _request.getLocalName()_. That restricts the usable server principals to 
the canonicalized one. If I have a CNAME pointing to HOSTNAME and want to 
authenticate with both {HTTP/CNAME, HTTP/HOSTNAME}, it will not work, because 
_request.getLocalName()_ will always return HOSTNAME.

An alternative implementation could look like this:

1. During init, log in all the principals specified in the principal conf key. 
If no principals are specified, read all principals matching HTTP/*@* from the 
keytab, log them all in, and cache them in a servername-to-realm map. This 
avoids any runtime synchronization, caching, and realm determination. This is 
an optional improvement; if needed, I can provide an implementation.

2. In authenticate, look up a principal with _request.getServerName()_ in 
addition to _request.getLocalName()_. This part is required for our setup to 
work, since it removes the canonicalization issue.
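The lookup order proposed in step 2 can be sketched with a plain map (the class and method names here are hypothetical, not the real AuthenticationHandler API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed principal lookup: cache one logged-in principal
// per server name, then resolve by the name the client actually used,
// falling back to the canonicalized local name.
class SpnegoPrincipalLookup {
    private final Map<String, String> principalsByHost = new HashMap<>();

    void register(String host, String principal) {
        principalsByHost.put(host, principal);
    }

    // Try the client-requested name first (e.g. the CNAME), then the
    // canonicalized hostname, so HTTP/CNAME and HTTP/HOSTNAME both work.
    String resolve(String requestedServerName, String canonicalLocalName) {
        String p = principalsByHost.get(requestedServerName);
        return p != null ? p : principalsByHost.get(canonicalLocalName);
    }

    public static void main(String[] args) {
        SpnegoPrincipalLookup lookup = new SpnegoPrincipalLookup();
        lookup.register("cname.example.com", "HTTP/cname.example.com@EXAMPLE.COM");
        lookup.register("host.example.com", "HTTP/host.example.com@EXAMPLE.COM");
        System.out.println(lookup.resolve("cname.example.com", "host.example.com"));
    }
}
```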

 SPNEGO should work with multiple interfaces/SPNs.
 -

 Key: HADOOP-10158
 URL: https://issues.apache.org/jira/browse/HADOOP-10158
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Kihwal Lee
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-10158.patch, HADOOP-10158.patch, 
 HADOOP-10158_multiplerealms.patch, HADOOP-10158_multiplerealms.patch, 
 HADOOP-10158_multiplerealms.patch


 This is the list of internal servlets added by namenode.
 | Name | Auth | Need to be accessible by end users |
 | StartupProgressServlet | none | no |
 | GetDelegationTokenServlet | internal SPNEGO | yes |
 | RenewDelegationTokenServlet | internal SPNEGO | yes |
 |  CancelDelegationTokenServlet | internal SPNEGO | yes |
 |  FsckServlet | internal SPNEGO | yes |
 |  GetImageServlet | internal SPNEGO | no |
 |  ListPathsServlet | token in query | yes |
 |  FileDataServlet | token in query | yes |
 |  FileChecksumServlets | token in query | yes |
 | ContentSummaryServlet | token in query | yes |
 GetDelegationTokenServlet, RenewDelegationTokenServlet, 
 CancelDelegationTokenServlet and FsckServlet are accessed by end users, but 
 hard-coded to use the internal SPNEGO filter.
 If a namenode HTTP server binds to multiple external IP addresses, the 
 internal SPNEGO service principal name may not work with an address to which 
 end users are connecting. The current SPNEGO implementation in Hadoop is 
 limited to using a single service principal per filter.
 If the underlying Hadoop Kerberos authentication handler cannot easily be 
 modified, we can at least create a separate auth filter for the end-user 
 facing servlets so that their service principals can be independently 
 configured. If not defined, it should fall back to the current behavior.





[jira] [Commented] (HADOOP-8432) SH script syntax errors

2014-01-30 Thread madhavi alla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886895#comment-13886895
 ] 

madhavi alla commented on HADOOP-8432:
--

Hi, Everyone
I have installed the latest hadoop 0.23.10 release. When I run sh 
hadoop-daemon.sh, the namenode fails to start with:
.sbin/../libexec/hadoop-config.sh: Syntax error: word unexpected (expecting ))
Could you please let us know how to resolve this issue?

Thanks,
Madhavi

 SH script syntax errors
 ---

 Key: HADOOP-8432
 URL: https://issues.apache.org/jira/browse/HADOOP-8432
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.0.0-alpha
 Environment: ubuntu 12, Oracle_JDK 1.7.0_03,
 env. variables setted to:
 export JAVA_HOME=/mnt/dataStorage/storage/local/glassfish3/jdk7
 export PATH=$PATH:$JAVA_HOME/bin
 export HADOOP_INSTALL=/mnt/dataStorage/storage/local/hadoop-2.0.0-alpha
 export PATH=$PATH:$HADOOP_INSTALL/bin
 export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
Reporter: sergio kosik
Priority: Blocker
 Fix For: 2.0.0-alpha


 Hi, Everyone
 I just can't start the new binary release of hadoop with the following CLI 
 command:
 sh $HADOOP_INSTALL/sbin/start-dfs.sh
 ... /hadoop-2.0.0-alpha/sbin/start-dfs.sh: 78: ... 
 /hadoop-2.0.0-alpha/sbin/../libexec/hadoop-config.sh: Syntax error: word 
 unexpected (expecting ))
 There are multiple syntax errors inside the start-dfs.sh script. Could you 
 fix them?
 Regards





[jira] [Commented] (HADOOP-10295) Allow distcp to automatically identify the checksum type of source files and use it for the target

2014-01-30 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886903#comment-13886903
 ] 

Jing Zhao commented on HADOOP-10295:


Thanks for the review, Nicholas and Sangjin!

[~sjlee0], that call was originally made implicitly inside FileSystem#create 
(see FileSystem#create(Path, boolean, int, short, long, Progressable)). I just 
pulled it out to keep the method from getting too long.

 Allow distcp to automatically identify the checksum type of source files and 
 use it for the target
 --

 Key: HADOOP-10295
 URL: https://issues.apache.org/jira/browse/HADOOP-10295
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Affects Versions: 2.2.0
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HADOOP-10295.000.patch, HADOOP-10295.002.patch, 
 hadoop-10295.patch


 Currently while doing distcp, users can use -Ddfs.checksum.type to specify 
 the checksum type in the target FS. This works fine if all the source files 
 use the same checksum type. If files in the source cluster have mixed 
 checksum types, users have to either use -skipcrccheck or hit checksum 
 mismatch exceptions. Thus we may need to add a new option to distcp so that 
 it can automatically identify the original checksum type of each source file 
 and use the same checksum type in the target FS. 
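The per-file selection such an option would perform can be sketched as plain decision logic (the option semantics and names below are assumptions for illustration, not the actual distcp implementation):

```java
// Decision-logic sketch only: with per-file detection enabled, each target
// file uses the checksum type reported for its source file; otherwise the
// single configured -Ddfs.checksum.type value applies to every file.
class ChecksumTypeChooser {
    static String chooseTargetType(String sourceReportedType,
                                   String configuredDefault,
                                   boolean detectPerFile) {
        if (detectPerFile && sourceReportedType != null) {
            return sourceReportedType;
        }
        return configuredDefault;
    }

    public static void main(String[] args) {
        // Mixed source cluster: detection on keeps the source's CRC32,
        // detection off falls back to the configured CRC32C.
        System.out.println(chooseTargetType("CRC32", "CRC32C", true));
        System.out.println(chooseTargetType("CRC32", "CRC32C", false));
    }
}
```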





[jira] [Commented] (HADOOP-10295) Allow distcp to automatically identify the checksum type of source files and use it for the target

2014-01-30 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886894#comment-13886894
 ] 

Sangjin Lee commented on HADOOP-10295:
--

The patch looks good.

Just one question. I see now there is an explicit call to create the permission 
in copyToTmpFile(). What is the nature of this change? Was the same thing being 
done implicitly and it is just made explicit, or is there another reason?

 Allow distcp to automatically identify the checksum type of source files and 
 use it for the target
 --

 Key: HADOOP-10295
 URL: https://issues.apache.org/jira/browse/HADOOP-10295
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Affects Versions: 2.2.0
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HADOOP-10295.000.patch, HADOOP-10295.002.patch, 
 hadoop-10295.patch


 Currently while doing distcp, users can use -Ddfs.checksum.type to specify 
 the checksum type in the target FS. This works fine if all the source files 
 use the same checksum type. If files in the source cluster have mixed 
 checksum types, users have to either use -skipcrccheck or hit checksum 
 mismatch exceptions. Thus we may need to add a new option to distcp so that 
 it can automatically identify the original checksum type of each source file 
 and use the same checksum type in the target FS. 





[jira] [Commented] (HADOOP-10139) Single Cluster Setup document is unfriendly

2014-01-30 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886926#comment-13886926
 ] 

Arpit Agarwal commented on HADOOP-10139:


+1 the instructions look great. I will commit this shortly.

 Single Cluster Setup document is unfriendly
 ---

 Key: HADOOP-10139
 URL: https://issues.apache.org/jira/browse/HADOOP-10139
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-10139.2.patch, HADOOP-10139.3.patch, 
 HADOOP-10139.patch


 The document should be understandable to a newcomer, because the first thing 
 a newcomer will do is set up a single node.





[jira] [Work started] (HADOOP-10270) getfacl does not display effective permissions of masked entries.

2014-01-30 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-10270 started by Chris Nauroth.

 getfacl does not display effective permissions of masked entries.
 -

 Key: HADOOP-10270
 URL: https://issues.apache.org/jira/browse/HADOOP-10270
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-10270.1.patch


 The mask entry of an ACL can be changed to restrict permissions that would be 
 otherwise granted via named user and group entries.  In these cases, the 
 typical implementation of getfacl also displays the effective permissions 
 after applying the mask.
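The masking rule that getfacl's effective-permission display applies can be sketched in a few lines (a minimal model of POSIX ACL semantics with rwx encoded as bits 4/2/1, not Hadoop's actual AclEntry classes):

```java
// For named-user, named-group, and owning-group ACL entries, the effective
// permission is the bitwise AND of the entry's permission and the mask.
class AclEffective {
    static int effective(int entryPerm, int maskPerm) {
        return entryPerm & maskPerm;
    }

    static String rwx(int perm) {
        return ((perm & 4) != 0 ? "r" : "-")
                + ((perm & 2) != 0 ? "w" : "-")
                + ((perm & 1) != 0 ? "x" : "-");
    }

    public static void main(String[] args) {
        // Named user granted rwx (7) but the mask restricts to r-x (5):
        int eff = effective(7, 5);
        System.out.println("user:alice:" + rwx(7) + "\t#effective:" + rwx(eff));
    }
}
```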





[jira] [Updated] (HADOOP-10270) getfacl does not display effective permissions of masked entries.

2014-01-30 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10270:
---

Attachment: HADOOP-10270.1.patch

I'm attaching a patch to display effective permissions from getfacl as needed.  
While I was in here, I also fixed a bug that caused getfacl to print the group 
entry with mask as the label when the file has only a default ACL (no access 
ACL).  I've also added 2 new tests to testAclCli.xml to cover both of those 
fixes.

 getfacl does not display effective permissions of masked entries.
 -

 Key: HADOOP-10270
 URL: https://issues.apache.org/jira/browse/HADOOP-10270
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-10270.1.patch


 The mask entry of an ACL can be changed to restrict permissions that would be 
 otherwise granted via named user and group entries.  In these cases, the 
 typical implementation of getfacl also displays the effective permissions 
 after applying the mask.





[jira] [Updated] (HADOOP-10139) Update and improve the Single Cluster Setup document

2014-01-30 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10139:
---

Summary: Update and improve the Single Cluster Setup document  (was: Single 
Cluster Setup document is unfriendly)

 Update and improve the Single Cluster Setup document
 

 Key: HADOOP-10139
 URL: https://issues.apache.org/jira/browse/HADOOP-10139
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-10139.2.patch, HADOOP-10139.3.patch, 
 HADOOP-10139.patch


 The document should be understandable to a newcomer, because the first thing 
 a newcomer will do is set up a single node.





[jira] [Created] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-10313:
---

 Summary: Script and jenkins job to produce Hadoop release artifacts
 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur


As discussed in the dev mailing lists, we should have a jenkins job to build 
the release artifacts.





[jira] [Commented] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886959#comment-13886959
 ] 

Alejandro Abdelnur commented on HADOOP-10313:
-

A {{create-release.sh}} script would produce the release artifacts. This script 
would be committed in the dev-support/ directory of the branch.

A parameterized jenkins job would take the branch name and the RC label and 
produce the release artifacts.

The SRC and BIN tarballs would have MD5 checksums but would not be signed. The 
release manager should pick up the artifacts, sign them, and then push them to 
a public staging area.

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur

 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.





[jira] [Updated] (HADOOP-10139) Update and improve the Single Cluster Setup document

2014-01-30 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10139:
---

  Resolution: Fixed
   Fix Version/s: 2.4.0
  3.0.0
Target Version/s: 2.4.0  (was: 2.3.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I committed this to trunk and branch-2.

Thanks for the contribution [~ajisakaa].

 Update and improve the Single Cluster Setup document
 

 Key: HADOOP-10139
 URL: https://issues.apache.org/jira/browse/HADOOP-10139
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 3.0.0, 2.4.0

 Attachments: HADOOP-10139.2.patch, HADOOP-10139.3.patch, 
 HADOOP-10139.patch


 The document should be understandable to a newcomer, because the first thing 
 a newcomer will do is set up a single node.





[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


Attachment: create-release.sh

Initial version of the create-release.sh script.

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.





[jira] [Commented] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886971#comment-13886971
 ] 

Alejandro Abdelnur commented on HADOOP-10313:
-

Jenkins job that will run the script: 
https://builds.apache.org/job/HADOOP2%20Release%20Artifacts%20Builder/

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.





[jira] [Commented] (HADOOP-10295) Allow distcp to automatically identify the checksum type of source files and use it for the target

2014-01-30 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886975#comment-13886975
 ] 

Jing Zhao commented on HADOOP-10295:


I will commit this patch later today if there is no more comment.

 Allow distcp to automatically identify the checksum type of source files and 
 use it for the target
 --

 Key: HADOOP-10295
 URL: https://issues.apache.org/jira/browse/HADOOP-10295
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Affects Versions: 2.2.0
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HADOOP-10295.000.patch, HADOOP-10295.002.patch, 
 hadoop-10295.patch


 Currently while doing distcp, users can use -Ddfs.checksum.type to specify 
 the checksum type in the target FS. This works fine if all the source files 
 are using the same checksum type. If files in the source cluster have mixed 
 types of checksum, users have to either use -skipcrccheck or have checksum 
 mismatching exception. Thus we may need to consider adding a new option to 
 distcp so that it can automatically identify the original checksum type of 
 each source file and use the same checksum type in the target FS. 





[jira] [Commented] (HADOOP-10139) Update and improve the Single Cluster Setup document

2014-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886981#comment-13886981
 ] 

Hudson commented on HADOOP-10139:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5072 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5072/])
HADOOP-10139. Update and improve the Single Cluster Setup document. 
(Contributed by Akira Ajisaka) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1562931)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SingleCluster.apt.vm


 Update and improve the Single Cluster Setup document
 

 Key: HADOOP-10139
 URL: https://issues.apache.org/jira/browse/HADOOP-10139
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 3.0.0, 2.4.0

 Attachments: HADOOP-10139.2.patch, HADOOP-10139.3.patch, 
 HADOOP-10139.patch


 The document should be understandable to a newcomer, because the first thing 
 a newcomer will do is set up a single node.





[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


Attachment: create-release.sh

Removed the SVN and MVN cleanup steps; they are not needed since the job uses a 
fresh checkout.

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh, create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.





[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


Attachment: create-release.sh

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.





[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


Attachment: (was: create-release.sh)

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.





[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


Attachment: (was: create-release.sh)

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.





[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


Attachment: create-release.sh

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh, create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.





[jira] [Updated] (HADOOP-10311) Cleanup vendor names from the code base

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10311:


Attachment: HADOOP-10311.patch

 Cleanup vendor names from the code base
 ---

 Key: HADOOP-10311
 URL: https://issues.apache.org/jira/browse/HADOOP-10311
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Suresh Srinivas
Assignee: Alejandro Abdelnur
Priority: Blocker
 Attachments: HADOOP-10311.patch








[jira] [Assigned] (HADOOP-10311) Cleanup vendor names from the code base

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur reassigned HADOOP-10311:
---

Assignee: Alejandro Abdelnur

 Cleanup vendor names from the code base
 ---

 Key: HADOOP-10311
 URL: https://issues.apache.org/jira/browse/HADOOP-10311
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Suresh Srinivas
Assignee: Alejandro Abdelnur
Priority: Blocker
 Attachments: HADOOP-10311.patch








[jira] [Updated] (HADOOP-10311) Cleanup vendor names from the code base

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10311:


Status: Patch Available  (was: Open)

 Cleanup vendor names from the code base
 ---

 Key: HADOOP-10311
 URL: https://issues.apache.org/jira/browse/HADOOP-10311
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Suresh Srinivas
Assignee: Alejandro Abdelnur
Priority: Blocker
 Attachments: HADOOP-10311.patch








[jira] [Commented] (HADOOP-10311) Cleanup vendor names from the code base

2014-01-30 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887104#comment-13887104
 ] 

Sandy Ryza commented on HADOOP-10311:
-

+1

 Cleanup vendor names from the code base
 ---

 Key: HADOOP-10311
 URL: https://issues.apache.org/jira/browse/HADOOP-10311
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Suresh Srinivas
Assignee: Alejandro Abdelnur
Priority: Blocker
 Attachments: HADOOP-10311.patch








[jira] [Commented] (HADOOP-10311) Cleanup vendor names from the code base

2014-01-30 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887108#comment-13887108
 ] 

Alejandro Abdelnur commented on HADOOP-10311:
-

I should have caught this reference when doing the review; sorry about that. 
I'll commit after Jenkins +1s.

 Cleanup vendor names from the code base
 ---

 Key: HADOOP-10311
 URL: https://issues.apache.org/jira/browse/HADOOP-10311
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Suresh Srinivas
Assignee: Alejandro Abdelnur
Priority: Blocker
 Attachments: HADOOP-10311.patch








[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


Attachment: (was: create-release.sh)

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.





[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


Attachment: create-release.sh

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.





[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


Attachment: (was: create-release.sh)

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.





[jira] [Created] (HADOOP-10314) The ls command help still shows outdated 0.16 format.

2014-01-30 Thread Kihwal Lee (JIRA)
Kihwal Lee created HADOOP-10314:
---

 Summary: The ls command help still shows outdated 0.16 format.
 Key: HADOOP-10314
 URL: https://issues.apache.org/jira/browse/HADOOP-10314
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Kihwal Lee


The description of output format is vastly outdated. It was changed after 
version 0.16.

{noformat}
$ hadoop fs -help ls
-ls [-d] [-h] [-R] [<path> ...]: List the contents that match the 
specified file pattern. If
path is not specified, the contents of /user/<currentUser>
will be listed. Directory entries are of the form 
dirName (full path) <dir> 
and file entries are of the form 
fileName(full path) <r n> size 
where n is the number of replicas specified for the file 
and size is the size of the file, in bytes.
  -d  Directories are listed as plain files.
  -h  Formats the sizes of files in a human-readable fashion
  rather than a number of bytes.
  -R  Recursively list the contents of directories.
{noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10311) Cleanup vendor names from the code base

2014-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887160#comment-13887160
 ] 

Hadoop QA commented on HADOOP-10311:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12626184/HADOOP-10311.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-sls.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3507//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3507//console

This message is automatically generated.

 Cleanup vendor names from the code base
 ---

 Key: HADOOP-10311
 URL: https://issues.apache.org/jira/browse/HADOOP-10311
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Suresh Srinivas
Assignee: Alejandro Abdelnur
Priority: Blocker
 Attachments: HADOOP-10311.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10315) Log the original exception when getGroups() fail in UGI.

2014-01-30 Thread Kihwal Lee (JIRA)
Kihwal Lee created HADOOP-10315:
---

 Summary: Log the original exception when getGroups() fail in UGI.
 Key: HADOOP-10315
 URL: https://issues.apache.org/jira/browse/HADOOP-10315
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0, 0.23.10
Reporter: Kihwal Lee


In UserGroupInformation, getGroupNames() swallows the original exception. There 
have been many occasions when more information on the original exception could 
have helped.

{code}
  public synchronized String[] getGroupNames() {
    ensureInitialized();
    try {
      List<String> result = groups.getGroups(getShortUserName());
      return result.toArray(new String[result.size()]);
    } catch (IOException ie) {
      LOG.warn("No groups available for user " + getShortUserName());
      return new String[0];
    }
  }
{code}
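The kind of change being requested could look like the sketch below. This is hypothetical illustration code, not the Hadoop patch: the class name, the Groups stand-in interface, and the System.err logging are substitutes for UserGroupInformation's internals; the point is only that the caught IOException appears in the log output instead of being swallowed.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class GroupLookupSketch {
    // Stand-in for the groups service that UserGroupInformation consults.
    interface Groups {
        List<String> getGroups(String user) throws IOException;
    }

    private final Groups groups;

    GroupLookupSketch(Groups groups) {
        this.groups = groups;
    }

    public String[] getGroupNames(String user) {
        try {
            List<String> result = groups.getGroups(user);
            return result.toArray(new String[result.size()]);
        } catch (IOException ie) {
            // The improvement being asked for: keep the original exception
            // in the log message rather than dropping it.
            System.err.println("No groups available for user " + user
                + " (cause: " + ie + ")");
            return new String[0];
        }
    }

    public static void main(String[] args) {
        GroupLookupSketch ok = new GroupLookupSketch(u -> Arrays.asList("staff", "dev"));
        System.out.println(ok.getGroupNames("alice").length); // 2

        GroupLookupSketch failing = new GroupLookupSketch(u -> {
            throw new IOException("LDAP server unreachable");
        });
        // Returns an empty array, but the cause now appears in the log line.
        System.out.println(failing.getGroupNames("bob").length); // 0
    }
}
```

In the real code the equivalent one-line fix would likely be passing {{ie}} as the second argument to {{LOG.warn(...)}}.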



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


Attachment: create-release.sh

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh, create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


Attachment: (was: create-release.sh)

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887263#comment-13887263
 ] 

stack commented on HADOOP-10313:


Alejandro, you want to add a bit of a comment on the head of the script 
explaining what it does and in which context it is used (should you say how to 
use it since it takes a RC_LABEL)? 

I tried the below manually and it works nicely:

HADOOP_VERSION=`cat pom.xml | grep version | head -1 | sed 's|^ *<version>||' | sed 's|</version>.*$||'`
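For illustration, the first-`<version>`-element extraction that the shell pipeline above performs can be sketched in Java (a hypothetical helper, not part of the script):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PomVersionSketch {
    // Return the text of the first <version> element, mimicking the
    // grep/head/sed pipeline used in create-release.sh.
    static String firstVersion(String pomXml) {
        Matcher m = Pattern.compile("<version>([^<]+)</version>").matcher(pomXml);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String pom = "<project><version>2.4.0-SNAPSHOT</version></project>";
        System.out.println(firstVersion(pom)); // 2.4.0-SNAPSHOT
    }
}
```

Note that, like the pipeline, this naively takes the first `<version>` in the file, which in a Maven pom may be the parent's version rather than the project's.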

nit: remove the 'for' in the following if you are going to make a new version: 
version for to (from a comment).

I suppose you have the md5, so you can check when you download, giving you some 
assurance about what it is that you are signing.

Otherwise looks great [~tucu00].  We have scripts building a release.  We 
should try and do as you do here and hoist them up to jenkins too.

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10316) HadoopArchives#HArchiveInputFormat#getSplits() should check reader against null before calling close()

2014-01-30 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-10316:
---

 Summary: HadoopArchives#HArchiveInputFormat#getSplits() should 
check reader against null before calling close()
 Key: HADOOP-10316
 URL: https://issues.apache.org/jira/browse/HADOOP-10316
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


Around line 267:
{code}
  finally { 
reader.close();
  }
{code}
reader should be checked against null before close() is called; if opening the 
reader fails, reader stays null and the finally block throws a NullPointerException.
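A sketch of the guard being requested (hypothetical names, not the actual HadoopArchives code): close the reader only when it was actually opened, so a failure before initialization does not turn into a NullPointerException in the finally block.

```java
import java.io.Closeable;
import java.io.IOException;

public class SafeCloseSketch {
    // Close a resource only when it was actually created; a null reader
    // (e.g. because opening it threw) is skipped instead of dereferenced.
    static void closeIfOpen(Closeable reader) throws IOException {
        if (reader != null) {
            reader.close();
        }
    }

    public static void main(String[] args) throws IOException {
        closeIfOpen(null);                               // no NullPointerException
        closeIfOpen(() -> System.out.println("closed")); // Closeable lambda runs
    }
}
```

In getSplits() itself this would amount to wrapping the {{reader.close()}} call in an {{if (reader != null)}} check.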



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


Attachment: create-release.sh

getting site generation right.

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


Attachment: (was: create-release.sh)

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10295) Allow distcp to automatically identify the checksum type of source files and use it for the target

2014-01-30 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-10295:
---

   Resolution: Fixed
Fix Version/s: 2.4.0
   3.0.0
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2. Thanks to [~laurentgo] for the 
contribution! Thanks to Kihwal, Sangjin and Nicholas for the review!

 Allow distcp to automatically identify the checksum type of source files and 
 use it for the target
 --

 Key: HADOOP-10295
 URL: https://issues.apache.org/jira/browse/HADOOP-10295
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Affects Versions: 2.2.0
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 3.0.0, 2.4.0

 Attachments: HADOOP-10295.000.patch, HADOOP-10295.002.patch, 
 hadoop-10295.patch


 Currently while doing distcp, users can use -Ddfs.checksum.type to specify 
 the checksum type in the target FS. This works fine if all the source files 
 are using the same checksum type. If files in the source cluster have mixed 
 types of checksum, users have to either use -skipcrccheck or hit a checksum 
 mismatch exception. Thus we may need to consider adding a new option to 
 distcp so that it can automatically identify the original checksum type of 
 each source file and use the same checksum type in the target FS. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10297) FileChecksum should provide getChecksumOpt method

2014-01-30 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-10297:
---

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

The fix has been included in HADOOP-10295. Closing this jira as a duplicate. 
Thanks for the contribution, [~laurentgo]!

 FileChecksum should provide getChecksumOpt method
 -

 Key: HADOOP-10297
 URL: https://issues.apache.org/jira/browse/HADOOP-10297
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Laurent Goujon
 Attachments: hadoop-10297.patch


 o.a.h.f.FileSystem has several methods which accept, directly or indirectly, a 
 ChecksumOpt parameter to configure checksum options, but there's no generic 
 way of querying the checksum options used for a given file.
 MD5MD5CRC32FileChecksum used by DFSClient has a getChecksumOpt(), but since 
 not just DistributedFileSystem but any FileSystem subclass accepts a 
 ChecksumOpt argument (although only DistributedFileSystem implements a 
 specific behaviour), I suggest making getChecksumOpt an abstract method of 
 FileChecksum. This could be used by tools like DistCp to replicate checksum 
 options, for example.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10295) Allow distcp to automatically identify the checksum type of source files and use it for the target

2014-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887280#comment-13887280
 ] 

Hudson commented on HADOOP-10295:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5077 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5077/])
HADOOP-10295. Allow distcp to automatically identify the checksum type of 
source files and use it for the target. Contributed by Jing Zhao and Laurent 
Goujon. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1563019)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileChecksum.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MD5MD5CRC32FileChecksum.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/OptionsParser.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyMapper.java


 Allow distcp to automatically identify the checksum type of source files and 
 use it for the target
 --

 Key: HADOOP-10295
 URL: https://issues.apache.org/jira/browse/HADOOP-10295
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Affects Versions: 2.2.0
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 3.0.0, 2.4.0

 Attachments: HADOOP-10295.000.patch, HADOOP-10295.002.patch, 
 hadoop-10295.patch


 Currently while doing distcp, users can use -Ddfs.checksum.type to specify 
 the checksum type in the target FS. This works fine if all the source files 
 are using the same checksum type. If files in the source cluster have mixed 
 types of checksum, users have to either use -skipcrccheck or hit a checksum 
 mismatch exception. Thus we may need to consider adding a new option to 
 distcp so that it can automatically identify the original checksum type of 
 each source file and use the same checksum type in the target FS. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


Attachment: create-release.sh

Integrating [~stack]'s comments.

Regarding the md5 files: the idea is simply one thing less for the release 
manager to do; s/he would only have to sign the SRC and BIN tarball artifacts 
before staging them.

Once the Jenkins build finishes 
(https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/) and I verify 
things are ok, I'll put the script in patch form. And after commit, I'll modify 
the jenkins configuration to consume it from the checked out source itself.

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh, create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


Attachment: HADOOP-10313.patch

OK,

We have:

* A script, {{create-release.sh}}, that creates release artifacts
* An Apache Jenkins job that runs the script and produces the artifacts on 
Apache CI machines, thanks Yahoo! (or shouldn’t I say that?)

The Apache Jenkins job is:

  https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/

There you’ll see the output of a release build. When triggering the build, you 
can specify an RC_LABEL (RC0 in this case). If you do so, all the artifact files 
will be suffixed with it.

The job is currently producing:

* RAT report
* SOURCE tarball and its MD5
* BINARY tarball and its MD5
* SITE tarball (ready to plaster in Apache Hadoop site)
* CHANGES files

I’ve verified the produced SOURCE is correct and I can build a BINARY out of it.

I’ve verified the produced BINARY tarball works (in pseudo-cluster mode).

Running {{hadoop version}} from the BINARY tarball reports:

{code}
$ bin/hadoop version
Hadoop 2.4.0-SNAPSHOT
Subversion http://svn.apache.org/repos/asf/hadoop/common -r 1563020
Compiled by jenkins on 2014-01-31T00:03Z
Compiled with protoc 2.5.0
From source with checksum 37ccb6f84b23196f521243fd192070
{code}

Once the JIRA is committed we have to modify the Jenkins job to use the script 
from the {{dev-support/}} directory.

We could improve this script further to deploy the built JARs to the Maven 
repo. I don’t know how to do this, so it would be great if somebody who knows 
how jumps on that. Maybe as a follow-up JIRA, so we have something going.


 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10313.patch, create-release.sh, create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


Status: Patch Available  (was: Open)

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10313.patch, create-release.sh, create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887327#comment-13887327
 ] 

Hadoop QA commented on HADOOP-10313:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12626231/HADOOP-10313.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3508//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3508//console

This message is automatically generated.

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10313.patch, create-release.sh, create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10317) Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 2.3.0-SNAPSHOT

2014-01-30 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-10317:


 Summary: Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 
2.3.0-SNAPSHOT
 Key: HADOOP-10317
 URL: https://issues.apache.org/jira/browse/HADOOP-10317
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Andrew Wang
Assignee: Andrew Wang


Right now the pom.xml's refer to 2.4 rather than 2.3 in branch-2.3. We need to 
update them.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10317) Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 2.3.0-SNAPSHOT

2014-01-30 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-10317:
-

Attachment: hadoop-10317.patch

Patch attached. I used {{mvn versions:set -DnewVersion=2.3.0-SNAPSHOT}} and did 
a grep to make sure that there are no longer any references to 2.4.0-SNAPSHOT.

 Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 2.3.0-SNAPSHOT
 ---

 Key: HADOOP-10317
 URL: https://issues.apache.org/jira/browse/HADOOP-10317
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-10317.patch


 Right now the pom.xml's refer to 2.4 rather than 2.3 in branch-2.3. We need 
 to update them.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10317) Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 2.3.0-SNAPSHOT

2014-01-30 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-10317:
-

Status: Patch Available  (was: Open)

 Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 2.3.0-SNAPSHOT
 ---

 Key: HADOOP-10317
 URL: https://issues.apache.org/jira/browse/HADOOP-10317
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-10317.patch


 Right now the pom.xml's refer to 2.4 rather than 2.3 in branch-2.3. We need 
 to update them.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10317) Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 2.3.0-SNAPSHOT

2014-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887345#comment-13887345
 ] 

Hadoop QA commented on HADOOP-10317:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12626237/hadoop-10317.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3509//console

This message is automatically generated.

 Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 2.3.0-SNAPSHOT
 ---

 Key: HADOOP-10317
 URL: https://issues.apache.org/jira/browse/HADOOP-10317
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-10317.patch


 Right now the pom.xml's refer to 2.4 rather than 2.3 in branch-2.3. We need 
 to update them.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10317) Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 2.3.0-SNAPSHOT

2014-01-30 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887368#comment-13887368
 ] 

Alejandro Abdelnur commented on HADOOP-10317:
-

+1. applied to branch-2.3 locally and did a full build, all JARs are 
2.3.0-SNAPSHOT.

Once you commit, you can take the release job for a spin:

https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/build?delay=0sec

And if it works OK, please +1 HADOOP-10313

 Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 2.3.0-SNAPSHOT
 ---

 Key: HADOOP-10317
 URL: https://issues.apache.org/jira/browse/HADOOP-10317
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-10317.patch


 Right now the pom.xml's refer to 2.4 rather than 2.3 in branch-2.3. We need 
 to update them.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10317) Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 2.3.0-SNAPSHOT

2014-01-30 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-10317:
-

   Resolution: Fixed
Fix Version/s: 2.3.0
   Status: Resolved  (was: Patch Available)

Thanks tucu, I just pushed this and updated CHANGES.txt in other branches. I'll 
take the jenkins job for a spin now.

 Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 2.3.0-SNAPSHOT
 ---

 Key: HADOOP-10317
 URL: https://issues.apache.org/jira/browse/HADOOP-10317
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Fix For: 2.3.0

 Attachments: hadoop-10317.patch


 Right now the pom.xml's refer to 2.4 rather than 2.3 in branch-2.3. We need 
 to update them.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10317) Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 2.3.0-SNAPSHOT

2014-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887392#comment-13887392
 ] 

Hudson commented on HADOOP-10317:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5079 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5079/])
HADOOP-10317. Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 
2.3.0-SNAPSHOT. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1563035)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 2.3.0-SNAPSHOT
 ---

 Key: HADOOP-10317
 URL: https://issues.apache.org/jira/browse/HADOOP-10317
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Fix For: 2.3.0

 Attachments: hadoop-10317.patch


 Right now the pom.xml's refer to 2.4 rather than 2.3 in branch-2.3. We need 
 to update them.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10308) Remove from core-default.xml unsupported 'classic' and add 'yarn-tez' as value for mapreduce.framework.name property

2014-01-30 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887410#comment-13887410
 ] 

Harsh J commented on HADOOP-10308:
--

We can remove 'classic' as it brings no value today, agreed.

Last I checked, we do not ship Tez as part of Apache Hadoop, so why add an 
option that needs to be in Tez's docs instead, one that will also need Tez to 
be fully present in order to work, if one makes the switch to it?

We should instead alter the description to make it more generic: the values are 
not limited to yarn and local, and can be set to IDs specified by other MR or 
MR-like runtimes.

 Remove from core-default.xml unsupported 'classic' and add 'yarn-tez' as 
 value for mapreduce.framework.name property
 

 Key: HADOOP-10308
 URL: https://issues.apache.org/jira/browse/HADOOP-10308
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Reporter: Eric Charles
 Attachments: HADOOP-10308.patch


 Classic mr-v1 is no longer supported in trunk.
 On the other hand, we will soon have a yarn-tez implementation of mapreduce 
 (a tez layer that allows a single AM for all map-reduce jobs).
 core-default.xml must reflect this.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887414#comment-13887414
 ] 

Andrew Wang commented on HADOOP-10313:
--

+1, nice work tucu. I gave the bash script a quick review, but the proof is in 
the pudding. I tried the Jenkins job on the current branch-2.3, and the 
generated artifacts look good:

https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/16/

Setting this up to build nightlies of the latest 2.x release branch (and 
branch-2 also) would be super cool. That, with automatic mvn deploy ([~rvs] 
implied that it should just work from Jenkins slaves), means we can get real CI 
with bigtop going!

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10313.patch, create-release.sh, create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10313:


   Resolution: Fixed
Fix Version/s: 2.3.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2 and branch-2.3.

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.3.0

 Attachments: HADOOP-10313.patch, create-release.sh, create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887457#comment-13887457
 ] 

Hudson commented on HADOOP-10313:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5080 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5080/])
HADOOP-10313. Script and jenkins job to produce Hadoop release artifacts. 
(tucu) (tucu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1563043)
* /hadoop/common/trunk/dev-support/create-release.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.3.0

 Attachments: HADOOP-10313.patch, create-release.sh, create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10139) Update and improve the Single Cluster Setup document

2014-01-30 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887463#comment-13887463
 ] 

Akira AJISAKA commented on HADOOP-10139:


Thanks for reviewing and committing, [~arpitagarwal]!

 Update and improve the Single Cluster Setup document
 

 Key: HADOOP-10139
 URL: https://issues.apache.org/jira/browse/HADOOP-10139
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 3.0.0, 2.4.0

 Attachments: HADOOP-10139.2.patch, HADOOP-10139.3.patch, 
 HADOOP-10139.patch


 The document should be understandable to a newcomer, because the first place 
 a newcomer will go is to set up a single node.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887474#comment-13887474
 ] 

stack commented on HADOOP-10313:


v2 lgtm

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.3.0

 Attachments: HADOOP-10313.patch, create-release.sh, create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10318) Incorrect reference to nodeFile in RumenToSLSConverter error message

2014-01-30 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-10318:
---

 Summary: Incorrect reference to nodeFile in RumenToSLSConverter 
error message
 Key: HADOOP-10318
 URL: https://issues.apache.org/jira/browse/HADOOP-10318
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


{code}
if (! nodeFile.getParentFile().exists()
    && ! nodeFile.getParentFile().mkdirs()) {
  System.err.println("ERROR: Cannot create output directory in path: "
      + jsonFile.getParentFile().getAbsoluteFile());
{code}
jsonFile on the last line should be nodeFile
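
As a self-contained sketch of the fix (class and helper names here are hypothetical, not the actual RumenToSLSConverter code), the corrected check would report the parent of the file it actually failed to create:

```java
import java.io.File;

public class OutputDirCheck {
    // Hypothetical helper mirroring the corrected logic: the error message
    // must reference nodeFile's parent (the directory we tried to create),
    // not an unrelated jsonFile.
    static String ensureParentDir(File nodeFile) {
        File parent = nodeFile.getParentFile();
        if (!parent.exists() && !parent.mkdirs()) {
            return "ERROR: Cannot create output directory in path: "
                    + parent.getAbsoluteFile();
        }
        return null; // parent directory exists or was created successfully
    }

    public static void main(String[] args) {
        File f = new File(System.getProperty("java.io.tmpdir"),
                "sls-test-out/nodes.json");
        String err = ensureParentDir(f);
        System.out.println(err == null ? "ok" : err);
    }
}
```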



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10318) Incorrect reference to nodeFile in RumenToSLSConverter error message

2014-01-30 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated HADOOP-10318:
-

Attachment: HADOOP-10318.patch

 Incorrect reference to nodeFile in RumenToSLSConverter error message
 

 Key: HADOOP-10318
 URL: https://issues.apache.org/jira/browse/HADOOP-10318
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
 Attachments: HADOOP-10318.patch


 {code}
 if (! nodeFile.getParentFile().exists()
     && ! nodeFile.getParentFile().mkdirs()) {
   System.err.println("ERROR: Cannot create output directory in path: "
       + jsonFile.getParentFile().getAbsoluteFile());
 {code}
 jsonFile on the last line should be nodeFile



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (HADOOP-10318) Incorrect reference to nodeFile in RumenToSLSConverter error message

2014-01-30 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan reassigned HADOOP-10318:


Assignee: Wei Yan

 Incorrect reference to nodeFile in RumenToSLSConverter error message
 

 Key: HADOOP-10318
 URL: https://issues.apache.org/jira/browse/HADOOP-10318
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Wei Yan
Priority: Minor
 Attachments: HADOOP-10318.patch


 {code}
 if (! nodeFile.getParentFile().exists()
     && ! nodeFile.getParentFile().mkdirs()) {
   System.err.println("ERROR: Cannot create output directory in path: "
       + jsonFile.getParentFile().getAbsoluteFile());
 {code}
 jsonFile on the last line should be nodeFile



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10318) Incorrect reference to nodeFile in RumenToSLSConverter error message

2014-01-30 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated HADOOP-10318:
-

Status: Patch Available  (was: Open)

 Incorrect reference to nodeFile in RumenToSLSConverter error message
 

 Key: HADOOP-10318
 URL: https://issues.apache.org/jira/browse/HADOOP-10318
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Wei Yan
Priority: Minor
 Attachments: HADOOP-10318.patch


 {code}
 if (! nodeFile.getParentFile().exists()
     && ! nodeFile.getParentFile().mkdirs()) {
   System.err.println("ERROR: Cannot create output directory in path: "
       + jsonFile.getParentFile().getAbsoluteFile());
 {code}
 jsonFile on the last line should be nodeFile



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10318) Incorrect reference to nodeFile in RumenToSLSConverter error message

2014-01-30 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887502#comment-13887502
 ] 

Wei Yan commented on HADOOP-10318:
--

Thanks, [~yuzhih...@gmail.com]. Just uploaded a patch to fix that bug.

 Incorrect reference to nodeFile in RumenToSLSConverter error message
 

 Key: HADOOP-10318
 URL: https://issues.apache.org/jira/browse/HADOOP-10318
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
 Attachments: HADOOP-10318.patch


 {code}
 if (! nodeFile.getParentFile().exists()
     && ! nodeFile.getParentFile().mkdirs()) {
   System.err.println("ERROR: Cannot create output directory in path: "
       + jsonFile.getParentFile().getAbsoluteFile());
 {code}
 jsonFile on the last line should be nodeFile



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10318) Incorrect reference to nodeFile in RumenToSLSConverter error message

2014-01-30 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10318:
---

Labels: newbie  (was: )

 Incorrect reference to nodeFile in RumenToSLSConverter error message
 

 Key: HADOOP-10318
 URL: https://issues.apache.org/jira/browse/HADOOP-10318
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Wei Yan
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10318.patch


 {code}
 if (! nodeFile.getParentFile().exists()
     && ! nodeFile.getParentFile().mkdirs()) {
   System.err.println("ERROR: Cannot create output directory in path: "
       + jsonFile.getParentFile().getAbsoluteFile());
 {code}
 jsonFile on the last line should be nodeFile



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10318) Incorrect reference to nodeFile in RumenToSLSConverter error message

2014-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887510#comment-13887510
 ] 

Hadoop QA commented on HADOOP-10318:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12626263/HADOOP-10318.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-sls.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3510//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3510//console

This message is automatically generated.

 Incorrect reference to nodeFile in RumenToSLSConverter error message
 

 Key: HADOOP-10318
 URL: https://issues.apache.org/jira/browse/HADOOP-10318
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Wei Yan
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10318.patch


 {code}
 if (! nodeFile.getParentFile().exists()
     && ! nodeFile.getParentFile().mkdirs()) {
   System.err.println("ERROR: Cannot create output directory in path: "
       + jsonFile.getParentFile().getAbsoluteFile());
 {code}
 jsonFile on the last line should be nodeFile



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10318) Incorrect reference to nodeFile in RumenToSLSConverter error message

2014-01-30 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10318:
---

Hadoop Flags: Reviewed

 Incorrect reference to nodeFile in RumenToSLSConverter error message
 

 Key: HADOOP-10318
 URL: https://issues.apache.org/jira/browse/HADOOP-10318
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Wei Yan
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10318.patch


 {code}
 if (! nodeFile.getParentFile().exists()
     && ! nodeFile.getParentFile().mkdirs()) {
   System.err.println("ERROR: Cannot create output directory in path: "
       + jsonFile.getParentFile().getAbsoluteFile());
 {code}
 jsonFile on the last line should be nodeFile



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10318) Incorrect reference to nodeFile in RumenToSLSConverter error message

2014-01-30 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887519#comment-13887519
 ] 

Akira AJISAKA commented on HADOOP-10318:


LGTM, +1.

 Incorrect reference to nodeFile in RumenToSLSConverter error message
 

 Key: HADOOP-10318
 URL: https://issues.apache.org/jira/browse/HADOOP-10318
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Wei Yan
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10318.patch


 {code}
 if (! nodeFile.getParentFile().exists()
     && ! nodeFile.getParentFile().mkdirs()) {
   System.err.println("ERROR: Cannot create output directory in path: "
       + jsonFile.getParentFile().getAbsoluteFile());
 {code}
 jsonFile on the last line should be nodeFile



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-6350) Documenting Hadoop metrics

2014-01-30 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-6350:
--

Attachment: HADOOP-6350.5.patch

Updated the patch for the latest trunk.
[~arpitagarwal], would you please review it?

 Documenting Hadoop metrics
 --

 Key: HADOOP-6350
 URL: https://issues.apache.org/jira/browse/HADOOP-6350
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Hong Tang
Assignee: Akira AJISAKA
  Labels: metrics
 Attachments: HADOOP-6350-sample-1.patch, HADOOP-6350-sample-2.patch, 
 HADOOP-6350-sample-3.patch, HADOOP-6350.4.patch, HADOOP-6350.5.patch, 
 sample1.png


 Metrics should be part of public API, and should be clearly documented 
 similar to HADOOP-5073, so that we can reliably build tools on top of them.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-6350) Documenting Hadoop metrics

2014-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887543#comment-13887543
 ] 

Hadoop QA commented on HADOOP-6350:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12626268/HADOOP-6350.5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3511//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3511//console

This message is automatically generated.

 Documenting Hadoop metrics
 --

 Key: HADOOP-6350
 URL: https://issues.apache.org/jira/browse/HADOOP-6350
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, metrics
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Hong Tang
Assignee: Akira AJISAKA
  Labels: metrics
 Attachments: HADOOP-6350-sample-1.patch, HADOOP-6350-sample-2.patch, 
 HADOOP-6350-sample-3.patch, HADOOP-6350.4.patch, HADOOP-6350.5.patch, 
 sample1.png


 Metrics should be part of public API, and should be clearly documented 
 similar to HADOOP-5073, so that we can reliably build tools on top of them.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)