[jira] [Commented] (HADOOP-8274) In pseudo or cluster model under Cygwin, tasktracker can not create a new job because of symlink problem.

2013-01-14 Thread JY chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552525#comment-13552525
 ] 

JY chen commented on HADOOP-8274:
-

I found some problems with the LinkedFile.java above on my Windows 7. The 
stdout/stderr/syslog files are created under the tmp folder, and Hadoop creates a 
Linux symlink to the corresponding files under the local 
logs/userlogs/job_2/attempt_X/stdout path. However, the folder 
logs/userlogs/job_2/attempt_X is itself a symlink, so stdout is a file inside a 
symlinked directory.

I modified LinkedFile.java as follows:



{code}
public LinkedFile(File parent, String child)
{
    // work around the case where parent is itself a symlink:
    // resolve the parent first, then resolve the combined path
    super(resolveFile(new File(resolveFile(new File(parent.toString())).getAbsolutePath(),
                               child)).getAbsolutePath());
    this.target = getAbsolutePath();
}

public LinkedFile(String parent, String child)
{
    // same workaround for the String-based parent
    super(resolveFile(new File(resolveFile(new File(parent)).getAbsolutePath(),
                               child)).getAbsolutePath());
    this.target = getAbsolutePath();
}
{code}


That is only a workaround for this case.
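
For context, here is a hedged sketch of what the resolveFile helper used above could 
look like; this is an assumption, not the actual code from the patch. Cygwin emulates 
symlinks as plain files that begin with the magic cookie "!<symlink>" followed by the 
target path, which plain Java does not follow:

{code}
// Hypothetical helper, not from the actual HADOOP-8274 patch: if the file
// is a Cygwin-style symlink (a plain file starting with "!<symlink>"),
// return a File pointing at the link target; otherwise return it unchanged.
private static File resolveFile(File f) {
  try {
    DataInputStream in = new DataInputStream(new FileInputStream(f));
    try {
      byte[] head = new byte[10];
      in.readFully(head);
      if ("!<symlink>".equals(new String(head, "US-ASCII"))) {
        byte[] rest = new byte[(int) f.length() - head.length];
        in.readFully(rest);
        return new File(new String(rest, "US-ASCII").trim());
      }
    } finally {
      in.close();
    }
  } catch (IOException ignored) {
    // not a readable Cygwin symlink (e.g. a directory); fall through
  }
  return f;
}
{code}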

 In pseudo or cluster model under Cygwin, tasktracker can not create a new job 
 because of symlink problem.
 -

 Key: HADOOP-8274
 URL: https://issues.apache.org/jira/browse/HADOOP-8274
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.205.0, 1.0.0, 1.0.1, 0.22.0
 Environment: windows7+cygwin 1.7.11-1+jdk1.6.0_31+hadoop 1.0.0
Reporter: tim.wu

 The standalone mode is OK. But in pseudo-distributed or cluster mode, it always 
 throws errors, even when I just run the wordcount example.
 HDFS works fine, but the tasktracker cannot create threads (JVMs) for a new job. 
  It is empty under /logs/userlogs/job-/attempt-/.
 The reason appears to be that on Windows, Java cannot recognize a symlink to a 
 folder as a folder. 
 The detailed description follows:
 ==
 First, the tasktracker error log looks like:
 ==
 12/03/28 14:35:13 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: 
 jvm_201203280212_0005_m_-1386636958
 12/03/28 14:35:13 INFO mapred.JvmManager: JVM Runner 
 jvm_201203280212_0005_m_-1386636958 spawned.
 12/03/28 14:35:17 INFO mapred.JvmManager: JVM Not killed 
 jvm_201203280212_0005_m_-1386636958 but just removed
 12/03/28 14:35:17 INFO mapred.JvmManager: JVM : 
 jvm_201203280212_0005_m_-1386636958 exited with exit code -1. Number of tasks 
 it ran: 0
 12/03/28 14:35:17 WARN mapred.TaskRunner: 
 attempt_201203280212_0005_m_02_0 : Child Error
 java.io.IOException: Task process exit with nonzero status of -1.
 at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
 12/03/28 14:35:21 INFO mapred.TaskTracker: addFreeSlot : current free slots : 
 2
 12/03/28 14:35:24 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): 
 attempt_201203280212_0005_m_02_1 task's state:UNASSIGNED
 12/03/28 14:35:24 INFO mapred.TaskTracker: Trying to launch : 
 attempt_201203280212_0005_m_02_1 which needs 1 slots
 12/03/28 14:35:24 INFO mapred.TaskTracker: In TaskLauncher, current free 
 slots : 2 and trying to launch attempt_201203280212_0005_m_02_1 which 
 needs 1 slots
 12/03/28 14:35:24 WARN mapred.TaskLog: Failed to retrieve stdout log for 
 task: attempt_201203280212_0005_m_02_0
 java.io.FileNotFoundException: 
 D:\cygwin\home\timwu\hadoop-1.0.0\logs\userlogs\job_201203280212_0005\attempt_201203280212_0005_m_02_0\log.index
  (The system cannot find the path specified)
 at java.io.FileInputStream.open(Native Method)
 at java.io.FileInputStream.<init>(FileInputStream.java:120)
 at 
 org.apache.hadoop.io.SecureIOUtils.openForRead(SecureIOUtils.java:102)
 at 
 org.apache.hadoop.mapred.TaskLog.getAllLogsFileDetails(TaskLog.java:188)
 at org.apache.hadoop.mapred.TaskLog$Reader.<init>(TaskLog.java:423)
 at 
 org.apache.hadoop.mapred.TaskLogServlet.printTaskLog(TaskLogServlet.java:81)
 at 
 org.apache.hadoop.mapred.TaskLogServlet.doGet(TaskLogServlet.java:296)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
 at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
 at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:835)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 
 

[jira] [Commented] (HADOOP-9199) Cover package org.apache.hadoop.io with unit tests

2013-01-14 Thread Vadim Bondarev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552530#comment-13552530
 ] 

Vadim Bondarev commented on HADOOP-9199:


Should I add the description in the issue or in the test source code?

 Cover package org.apache.hadoop.io with unit tests
 --

 Key: HADOOP-9199
 URL: https://issues.apache.org/jira/browse/HADOOP-9199
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
 Attachments: HADOOP-9199-branch-0.23-a.patch, 
 HADOOP-9199-branch-2-a.patch, HADOOP-9199-trunk-a.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-8274) In pseudo or cluster model under Cygwin, tasktracker can not create a new job because of symlink problem.

2013-01-14 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HADOOP-8274.
-

Resolution: Won't Fix

Since the mainstream branch does not actively support Windows, I am closing this 
as Won't Fix.

I'm certain the same issue does not happen on the branch-1-win 1.x branch (or 
the branch-trunk-win branch), and I urge you to use those branches instead if you 
wish to continue using Windows for development or other usage. Find the 
Windows-optimized sources at 
http://svn.apache.org/repos/asf/hadoop/common/branches/branch-1-win/ or 
http://svn.apache.org/repos/asf/hadoop/common/branches/branch-trunk-win/.

 In pseudo or cluster model under Cygwin, tasktracker can not create a new job 
 because of symlink problem.
 -

 Key: HADOOP-8274
 URL: https://issues.apache.org/jira/browse/HADOOP-8274
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.205.0, 1.0.0, 1.0.1, 0.22.0
 Environment: windows7+cygwin 1.7.11-1+jdk1.6.0_31+hadoop 1.0.0
Reporter: tim.wu

 The standalone mode is OK. But in pseudo-distributed or cluster mode, it always 
 throws errors, even when I just run the wordcount example.
 HDFS works fine, but the tasktracker cannot create threads (JVMs) for a new job. 
  It is empty under /logs/userlogs/job-/attempt-/.
 The reason appears to be that on Windows, Java cannot recognize a symlink to a 
 folder as a folder. 
 The detailed description follows:
 ==
 First, the tasktracker error log looks like:
 ==
 12/03/28 14:35:13 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: 
 jvm_201203280212_0005_m_-1386636958
 12/03/28 14:35:13 INFO mapred.JvmManager: JVM Runner 
 jvm_201203280212_0005_m_-1386636958 spawned.
 12/03/28 14:35:17 INFO mapred.JvmManager: JVM Not killed 
 jvm_201203280212_0005_m_-1386636958 but just removed
 12/03/28 14:35:17 INFO mapred.JvmManager: JVM : 
 jvm_201203280212_0005_m_-1386636958 exited with exit code -1. Number of tasks 
 it ran: 0
 12/03/28 14:35:17 WARN mapred.TaskRunner: 
 attempt_201203280212_0005_m_02_0 : Child Error
 java.io.IOException: Task process exit with nonzero status of -1.
 at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
 12/03/28 14:35:21 INFO mapred.TaskTracker: addFreeSlot : current free slots : 
 2
 12/03/28 14:35:24 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): 
 attempt_201203280212_0005_m_02_1 task's state:UNASSIGNED
 12/03/28 14:35:24 INFO mapred.TaskTracker: Trying to launch : 
 attempt_201203280212_0005_m_02_1 which needs 1 slots
 12/03/28 14:35:24 INFO mapred.TaskTracker: In TaskLauncher, current free 
 slots : 2 and trying to launch attempt_201203280212_0005_m_02_1 which 
 needs 1 slots
 12/03/28 14:35:24 WARN mapred.TaskLog: Failed to retrieve stdout log for 
 task: attempt_201203280212_0005_m_02_0
 java.io.FileNotFoundException: 
 D:\cygwin\home\timwu\hadoop-1.0.0\logs\userlogs\job_201203280212_0005\attempt_201203280212_0005_m_02_0\log.index
  (The system cannot find the path specified)
 at java.io.FileInputStream.open(Native Method)
 at java.io.FileInputStream.<init>(FileInputStream.java:120)
 at 
 org.apache.hadoop.io.SecureIOUtils.openForRead(SecureIOUtils.java:102)
 at 
 org.apache.hadoop.mapred.TaskLog.getAllLogsFileDetails(TaskLog.java:188)
 at org.apache.hadoop.mapred.TaskLog$Reader.<init>(TaskLog.java:423)
 at 
 org.apache.hadoop.mapred.TaskLogServlet.printTaskLog(TaskLogServlet.java:81)
 at 
 org.apache.hadoop.mapred.TaskLogServlet.doGet(TaskLogServlet.java:296)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
 at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
 at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:835)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
 at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
 at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
 at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
 at 
 org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
 at 
 

[jira] [Updated] (HADOOP-9200) enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache

2013-01-14 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-9200:
---

Status: Patch Available  (was: Open)

 enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache
 

 Key: HADOOP-9200
 URL: https://issues.apache.org/jira/browse/HADOOP-9200
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9200-trunk.patch


 The class org.apache.hadoop.security.NetgroupCache has poor unit-test 
 coverage. Enhance it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9200) enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache

2013-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552572#comment-13552572
 ] 

Hadoop QA commented on HADOOP-9200:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12564475/HADOOP-9200-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2035//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2035//console

This message is automatically generated.

 enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache
 

 Key: HADOOP-9200
 URL: https://issues.apache.org/jira/browse/HADOOP-9200
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9200-trunk.patch


 The class org.apache.hadoop.security.NetgroupCache has poor unit-test 
 coverage. Enhance it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9182) the buffer used in hdfsRead seems leaks when the thread exits

2013-01-14 Thread dingyichuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552578#comment-13552578
 ] 

dingyichuan commented on HADOOP-9182:
-

Hi, Colin Patrick McCabe,
Actually, I have no idea about the version info. Our team began using the lib 
several years ago and has patched it many times. I am new to this group, 
so... sorry :P

 the buffer used in hdfsRead seems leaks when the thread exits
 -

 Key: HADOOP-9182
 URL: https://issues.apache.org/jira/browse/HADOOP-9182
 Project: Hadoop Common
  Issue Type: Bug
  Components: filecache
 Environment: Linux RHEP x64 
Reporter: dingyichuan

 I use multiple threads in my C++ program to download 3000 files from HDFS using 
 libhdfs. Every thread is created by pthread_create to download one file and then 
 exit. We monitor the memory status while the program is running. It seems that 
 every thread creates a buffer whose size is specified by the bufferSize 
 parameter of the hdfsOpenFile function, but when the thread finishes its task 
 and exits, the buffer is not freed. So our program eventually aborts with 
 Java's out-of-memory exception. I just don't know how to free the buffer, or 
 whether I am using these functions in the wrong way. Thanks!

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9204) fix apacheds distribution download link URL

2013-01-14 Thread Ivan A. Veselovsky (JIRA)
Ivan A. Veselovsky created HADOOP-9204:
--

 Summary: fix apacheds distribution download link URL
 Key: HADOOP-9204
 URL: https://issues.apache.org/jira/browse/HADOOP-9204
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky


The ApacheDS server is used in some security tests in the hadoop-common and 
hadoop-hdfs modules with the startKdc profile.
The build script downloads the server, unpacks it, configures it, and runs it.

The problem is that the URL used,
http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
no longer works (it returns 404).

The suggested patch parameterizes the URL so that it can be set in a single place 
in the parent pom.xml, and sets it to a working value.
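
For illustration, the parametrization could look like the following sketch in the 
parent pom.xml; the property name and the mirror URL are assumptions, not taken 
from the attached patch:

{code}
<!-- Sketch only: define the download URL once, in the parent pom.xml,
     so every module that needs apacheds picks it up from one place. -->
<properties>
  <!-- property name is illustrative, not from the patch -->
  <apacheds.download.url>http://archive.apache.org/dist/directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz</apacheds.download.url>
</properties>
{code}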

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9204) fix apacheds distribution download link URL

2013-01-14 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-9204:
---

Attachment: HADOOP-9204-trunk.patch

The patch is for the trunk branch only.

 fix apacheds distribution download link URL
 ---

 Key: HADOOP-9204
 URL: https://issues.apache.org/jira/browse/HADOOP-9204
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9204-trunk.patch


 The ApacheDS server is used in some security tests in the hadoop-common and 
 hadoop-hdfs modules with the startKdc profile.
 The build script downloads the server, unpacks it, configures it, and runs it.
 The problem is that the URL used,
 http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
 no longer works (it returns 404).
 The suggested patch parameterizes the URL so that it can be set in a single 
 place in the parent pom.xml, and sets it to a working value.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9204) fix apacheds distribution download link URL

2013-01-14 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-9204:
---

Affects Version/s: 3.0.0
   Status: Patch Available  (was: Open)

 fix apacheds distribution download link URL
 ---

 Key: HADOOP-9204
 URL: https://issues.apache.org/jira/browse/HADOOP-9204
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9204-trunk.patch


 The ApacheDS server is used in some security tests in the hadoop-common and 
 hadoop-hdfs modules with the startKdc profile.
 The build script downloads the server, unpacks it, configures it, and runs it.
 The problem is that the URL used,
 http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
 no longer works (it returns 404).
 The suggested patch parameterizes the URL so that it can be set in a single 
 place in the parent pom.xml, and sets it to a working value.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9200) enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache

2013-01-14 Thread Ivan A. Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552587#comment-13552587
 ] 

Ivan A. Veselovsky commented on HADOOP-9200:


The patch verification seems to have failed because 
org.apache.hadoop.ha.TestZKFailoverController is a flaky test.
The patch does not appear to affect this test in any way.

 enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache
 

 Key: HADOOP-9200
 URL: https://issues.apache.org/jira/browse/HADOOP-9200
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9200-trunk.patch


 The class org.apache.hadoop.security.NetgroupCache has poor unit-test 
 coverage. Enhance it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-14 Thread Ivan A. Veselovsky (JIRA)
Ivan A. Veselovsky created HADOOP-9205:
--

 Summary: Java7: path to native libraries should be passed to tests 
via -Djava.library.path rather than env.LD_LIBRARY_PATH
 Key: HADOOP-9205
 URL: https://issues.apache.org/jira/browse/HADOOP-9205
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-14 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-9205:
---

  Description: 
Currently the path to native libraries is passed to unit tests via the environment 
variable LD_LIBRARY_PATH. This is okay for Java6, but does not work for Java7, 
since Java7 ignores this environment variable.

So, to run the tests with the native implementation on Java7, one needs to pass the 
paths to the native libs via the -Djava.library.path system property rather than 
the LD_LIBRARY_PATH env variable.

The suggested patch fixes the problem by setting the paths to the native libs 
using both LD_LIBRARY_PATH and the -Djava.library.path property. This way the tests 
work equally on both Java6 and Java7.
Affects Version/s: 0.23.6
   2.0.3-alpha
   3.0.0
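
As a generic illustration (not part of the patch) of why the property matters: 
System.loadLibrary() is resolved against java.library.path, which the JVM captures 
at startup, so the native directory has to be supplied on the command line:

{code}
// Generic sketch, not from the patch: print the effective native search
// path and try to load the Hadoop native library. On Java7 this succeeds
// only if the JVM was started with, e.g.:
//   java -Djava.library.path=/path/to/native NativePathCheck
public class NativePathCheck {
  public static void main(String[] args) {
    System.out.println(System.getProperty("java.library.path"));
    System.loadLibrary("hadoop"); // looks for libhadoop.so on the path above
  }
}
{code}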

 Java7: path to native libraries should be passed to tests via 
 -Djava.library.path rather than env.LD_LIBRARY_PATH
 -

 Key: HADOOP-9205
 URL: https://issues.apache.org/jira/browse/HADOOP-9205
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky

 Currently the path to native libraries is passed to unit tests via the 
 environment variable LD_LIBRARY_PATH. This is okay for Java6, but does not 
 work for Java7, since Java7 ignores this environment variable.
 So, to run the tests with the native implementation on Java7, one needs to pass 
 the paths to the native libs via the -Djava.library.path system property rather 
 than the LD_LIBRARY_PATH env variable.
 The suggested patch fixes the problem by setting the paths to the native libs 
 using both LD_LIBRARY_PATH and the -Djava.library.path property. This way the 
 tests work equally on both Java6 and Java7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-14 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-9205:
---

Attachment: HADOOP-9205.patch

The patch is applicable to all three affected branches.

 Java7: path to native libraries should be passed to tests via 
 -Djava.library.path rather than env.LD_LIBRARY_PATH
 -

 Key: HADOOP-9205
 URL: https://issues.apache.org/jira/browse/HADOOP-9205
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9205.patch


 Currently the path to native libraries is passed to unit tests via the 
 environment variable LD_LIBRARY_PATH. This is okay for Java6, but does not 
 work for Java7, since Java7 ignores this environment variable.
 So, to run the tests with the native implementation on Java7, one needs to pass 
 the paths to the native libs via the -Djava.library.path system property rather 
 than the LD_LIBRARY_PATH env variable.
 The suggested patch fixes the problem by setting the paths to the native libs 
 using both LD_LIBRARY_PATH and the -Djava.library.path property. This way the 
 tests work equally on both Java6 and Java7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-14 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-9205:
---

Status: Patch Available  (was: Open)

 Java7: path to native libraries should be passed to tests via 
 -Djava.library.path rather than env.LD_LIBRARY_PATH
 -

 Key: HADOOP-9205
 URL: https://issues.apache.org/jira/browse/HADOOP-9205
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9205.patch


 Currently the path to native libraries is passed to unit tests via the 
 environment variable LD_LIBRARY_PATH. This is okay for Java6, but does not 
 work for Java7, since Java7 ignores this environment variable.
 So, to run the tests with the native implementation on Java7, one needs to pass 
 the paths to the native libs via the -Djava.library.path system property rather 
 than the LD_LIBRARY_PATH env variable.
 The suggested patch fixes the problem by setting the paths to the native libs 
 using both LD_LIBRARY_PATH and the -Djava.library.path property. This way the 
 tests work equally on both Java6 and Java7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9194) RPC Support for QoS

2013-01-14 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552617#comment-13552617
 ] 

Luke Lu commented on HADOOP-9194:
-

For specific use cases, especially for configurations you can control, a 
separate unix domain socket could be a reasonable hack. That said, if we have a 
nonblocking RPC reader implementation, we can do a better job than the OS accept 
backlog. In general, we don't want to have any queues that we cannot 
control/influence.

This actually brings up a serious security issue with the current RPC 
implementation: it's trivial for any (low-bandwidth) client to DoS any Hadoop 
RPC service (even one with unlimited bandwidth), either deliberately or by 
accident. In order to fix this critical issue we need to have nonblocking 
readers. As Binglin pointed out on HADOOP-9151, the current protobuf RPC 
protocol is not amenable to nonblocking implementations.

I propose that we fix this here once and for all as well:
{code}
request ::= request-envelope request-protobuf-payload
request-envelope ::= 'HREQ' service-class-int8 
request-protobuf-payload-length-vint32

response ::= response-envelope response-protobuf-payload
response-envelope ::= 'HRES' service-class-int8 
response-protobuf-payload-length-vint32
{code}

The new envelopes make nonblocking network IO trivial for RPC 
servers/proxies/switches. The 4-byte 'magic' also makes debugging with tcpdump 
and/or adding Wireshark support easier.
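
To make the framing concrete, here is a hedged Java sketch of an envelope reader 
built for nonblocking IO. The class and method names are made up for illustration, 
and it assumes vint32 is a base-128 varint (low 7 bits per byte, high bit as 
continuation), which is an assumption rather than something specified above:

{code}
import java.nio.ByteBuffer;

// Hedged sketch, not a committed design: parse the proposed request
// envelope ('HREQ', service-class int8, varint payload length) from a
// ByteBuffer. Because the payload length is known before the protobuf
// payload arrives, a nonblocking reader never has to block mid-message.
public class RpcEnvelopeParser {
  private static final byte[] REQ_MAGIC = {'H', 'R', 'E', 'Q'};

  /** @return the payload length, or -1 if a full envelope has not arrived yet. */
  public static int tryParseRequestEnvelope(ByteBuffer buf) {
    buf.mark();
    if (buf.remaining() < REQ_MAGIC.length + 1) { buf.reset(); return -1; }
    for (byte m : REQ_MAGIC) {
      if (buf.get() != m) throw new IllegalArgumentException("bad envelope magic");
    }
    byte serviceClass = buf.get(); // the QoS byte, at a fixed offset as proposed
    int len = 0, shift = 0;
    while (true) {                 // assumed base-128 varint for vint32
      if (!buf.hasRemaining()) { buf.reset(); return -1; }
      byte b = buf.get();
      len |= (b & 0x7f) << shift;
      if ((b & 0x80) == 0) return len; // high bit clear: last varint byte
      shift += 7;
    }
  }
}
{code}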

 RPC Support for QoS
 ---

 Key: HADOOP-9194
 URL: https://issues.apache.org/jira/browse/HADOOP-9194
 Project: Hadoop Common
  Issue Type: New Feature
  Components: ipc
Affects Versions: 2.0.2-alpha
Reporter: Luke Lu

 One of the next frontiers of Hadoop performance is QoS (Quality of Service). 
 We need QoS support to fight the inevitable buffer bloat (including various 
 queues, which are probably necessary for throughput) in our software stack. 
 This is important for mixed workloads with different latency and throughput 
 requirements (e.g. OLTP vs OLAP, batch and even compaction I/O) against the 
 same DFS.
 Any potential bottleneck will need to be managed by QoS mechanisms, starting 
 with RPC. 
 How about adding a one byte DS (differentiated services) field (a la the 
 6-bit DS field in IP header) in the RPC header to facilitate the QoS 
 mechanisms (in separate JIRAs)? The byte at a fixed offset (how about 0?) of 
 the header is helpful for implementing high performance QoS mechanisms in 
 switches (software or hardware) and servers with minimum decoding effort.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552620#comment-13552620
 ] 

Hadoop QA commented on HADOOP-9205:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564693/HADOOP-9205.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2037//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2037//console

This message is automatically generated.

 Java7: path to native libraries should be passed to tests via 
 -Djava.library.path rather than env.LD_LIBRARY_PATH
 -

 Key: HADOOP-9205
 URL: https://issues.apache.org/jira/browse/HADOOP-9205
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9205.patch


 Currently the path to native libraries is passed to unit tests via the 
 environment variable LD_LIBRARY_PATH. This is okay for Java6, but does not 
 work for Java7, since Java7 ignores this environment variable.
 So, to run the tests with the native implementation on Java7, one needs to pass 
 the paths to the native libs via the -Djava.library.path system property rather 
 than the LD_LIBRARY_PATH env variable.
 The suggested patch fixes the problem by setting the paths to the native libs 
 using both LD_LIBRARY_PATH and the -Djava.library.path property. This way the 
 tests work equally on both Java6 and Java7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9151) Include RPC error info in RpcResponseHeader instead of sending it separately

2013-01-14 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552623#comment-13552623
 ] 

Luke Lu commented on HADOOP-9151:
-

Binglin's point 3 actually brings up a serious security issue with the current 
RPC protocol/implementation: it's trivial for any (low-bandwidth) client to DoS 
any Hadoop RPC service (even one with unlimited bandwidth). The fact that the 
current protocol is not amenable to nonblocking readers is a serious issue that 
needs to be fixed ASAP.

I proposed a simple fix in HADOOP-9194.

 Include RPC error info in RpcResponseHeader instead of sending it separately
 

 Key: HADOOP-9151
 URL: https://issues.apache.org/jira/browse/HADOOP-9151
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9151.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9199) Cover package org.apache.hadoop.io with unit tests

2013-01-14 Thread Vadim Bondarev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552639#comment-13552639
 ] 

Vadim Bondarev commented on HADOOP-9199:


Classes, methods, and descriptions:

TestArrayFile
   testArrayFileIteration: test ArrayFile.Reader methods next() and seek(), in 
   range and out of range.

TestArrayWritable
   testArrayWritableStringConstructor: test the ArrayWritable constructor with a 
   String[] parameter.
   testNullArgument: test TextArrayWritable with a null parameter.
   testArrayWritableToArray: test the TextArrayWritable toArray() method.

TestBloomMapFile
   testBloomMapFileConstructors: test all available BloomMapFile.Writer 
   constructors.
   testDeleteFile: test the BloomMapFile.delete method.
   testGetBloomMapFile: test the BloomMapFile.Reader method get(Writable wr), in 
   range and out of range.
   testIOExceptionInWriterConstructor: test the BloomMapFile.Reader constructor 
   with an IOException thrown in filesystem.getFileSystem(conf).

TestBooleanWritable
   testCommonMethods: test the methods hashCode(), equals(), and compareTo() 
   against an instance of BooleanWritable.

TestBytesWritable
   testObjectCommonMethods: test the methods compareTo(), toString(), and 
   equals() against an instance of ByteWritable.

TestCompressedWritable
   testCompressedWritableWriteHeader: test the CompressedWritable 
   write(DataOutputBuffer) method.
   testCompressedWritableReadFields: test the CompressedWritable readFields() 
   method.

TestEnumSetWritable
   testEnumSetWritableEquals: test the equals() method against an instance of 
   EnumSetWritable.
   testEnumSetWritableWriteRead: test EnumSetWritable write(DataOutputBuffer out) 
   and iteration through iterator().

TestMapFile
   testDeprecatedConstructors: test all available constructors of MapFile.Writer.
   testDescOrderWithThrowExceptionWriterAppend: test the MapFile.Writer method 
   append(Writable wr) with keys in descending order (2, 1).
   testFix: test the MapFile.fix() method, which attempts to re-create the 
   MapFile index.
   testGetClosestOnCurrentApi: test variations of the method 
   reader.getClosest(WritableComparable wrc).
   testKeyLessWriterCreation: test the MapFile.Writer constructor without vararg 
   parameters (SequenceFile.Writer.Option... opts).
   testKeyValueClasses: test verification of the key and value classes for 
   MapFile.Writer.
   testMainMethodMapFile: test the static void main() method.
   testMidKeyOnCurrentApi: test the MapFile.Reader method midKey(), which gets 
   the key in the middle of the file.
   testOnFinalKey: test the MapFile.Reader method finalKey(), which gets the key 
   at the end of the file.
   testPathExplosionWriterCreation: test an IOException in the MapFile.Writer 
   constructor.
   testReaderKeyIteration: test the MapFile.Reader method next(key, value) for 
   iteration.
   testReaderWithWrongKeyClass: test the MapFile.Reader method getClosest() with 
   a wrong key class.
   testReaderWithWrongValueClass: test the MapFile.Writer method append() with a 
   wrongly typed key instance.
   testRename: test the MapFile.rename() method.
   testRenameWithException: test the MapFile.rename() method with an IOException 
   thrown.
   testRenameWithFalse: test the MapFile.rename() method with FileSystem.rename 
   returning false.
   testWriteWithFailDirCreation: test the MapFile.Writer constructor with an 
   IOException thrown.

TestMultipleIOException
   testEmptyParamIOException: test the MultipleIOException.createIOException() 
   method.
   testSingleParamIOException: test the MultipleIOException.createIOException() 
   method with a single parameter.
   testMultipleIOException: test the MultipleIOException.createIOException() 
   method.

TestNullWritable
   testNullableWritable: test uncovered methods in the NullWritable class.

TestOutputBuffer
   testOutputBufferWithoutResize: test the OutputBuffer methods 
   write(InputStream in, int) and getData().
   testOutputBufferReset: same as the previous test, with an added out.reset() 
   call.

TestSetFile
   testSetFileAccessMethods: test
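
To make the style of these coverage tests concrete, here is a minimal JUnit 
sketch; it is illustrative only and not taken from the attached patches:

{code}
import static org.junit.Assert.*;

import org.apache.hadoop.io.BooleanWritable;
import org.junit.Test;

// Illustrative only, not from the HADOOP-9199 patches: the kind of
// common-methods coverage test described in the table above.
public class TestBooleanWritableSketch {
  @Test
  public void testCommonMethods() {
    BooleanWritable t = new BooleanWritable(true);
    BooleanWritable f = new BooleanWritable(false);
    assertTrue(t.compareTo(f) > 0);             // true sorts after false
    assertEquals(new BooleanWritable(true), t); // equals() on equal values
    assertEquals(t.hashCode(), new BooleanWritable(true).hashCode());
  }
}
{code}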

[jira] [Updated] (HADOOP-9199) Cover package org.apache.hadoop.io with unit tests

2013-01-14 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated HADOOP-9199:
---

Attachment: Test_Desc

test method description

 Cover package org.apache.hadoop.io with unit tests
 --

 Key: HADOOP-9199
 URL: https://issues.apache.org/jira/browse/HADOOP-9199
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
 Attachments: HADOOP-9199-branch-0.23-a.patch, 
 HADOOP-9199-branch-2-a.patch, HADOOP-9199-trunk-a.patch, Test_Desc




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9199) Cover package org.apache.hadoop.io with unit tests

2013-01-14 Thread Vadim Bondarev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552647#comment-13552647
 ] 

Vadim Bondarev commented on HADOOP-9199:


Please look at the test method descriptions in the attachment.

 Cover package org.apache.hadoop.io with unit tests
 --

 Key: HADOOP-9199
 URL: https://issues.apache.org/jira/browse/HADOOP-9199
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
 Attachments: HADOOP-9199-branch-0.23-a.patch, 
 HADOOP-9199-branch-2-a.patch, HADOOP-9199-trunk-a.patch, Test_Desc




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9199) Cover package org.apache.hadoop.io with unit tests

2013-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552648#comment-13552648
 ] 

Hadoop QA commented on HADOOP-9199:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564699/Test_Desc
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2038//console

This message is automatically generated.

 Cover package org.apache.hadoop.io with unit tests
 --

 Key: HADOOP-9199
 URL: https://issues.apache.org/jira/browse/HADOOP-9199
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
 Attachments: HADOOP-9199-branch-0.23-a.patch, 
 HADOOP-9199-branch-2-a.patch, HADOOP-9199-trunk-a.patch, Test_Desc




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9199) Cover package org.apache.hadoop.io with unit tests

2013-01-14 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated HADOOP-9199:
---

Attachment: (was: Test_Desc)

 Cover package org.apache.hadoop.io with unit tests
 --

 Key: HADOOP-9199
 URL: https://issues.apache.org/jira/browse/HADOOP-9199
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
 Attachments: HADOOP-9199-branch-0.23-a.patch, 
 HADOOP-9199-branch-2-a.patch, HADOOP-9199-trunk-a.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9204) fix apacheds distribution download link URL

2013-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552675#comment-13552675
 ] 

Hadoop QA commented on HADOOP-9204:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12564687/HADOOP-9204-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.ha.TestZKFailoverController
  org.apache.hadoop.hdfs.TestPersistBlocks

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2036//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2036//console

This message is automatically generated.

 fix apacheds distribution download link URL
 ---

 Key: HADOOP-9204
 URL: https://issues.apache.org/jira/browse/HADOOP-9204
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9204-trunk.patch


 The ApacheDS server is used in some security tests in the hadoop-common and 
 hadoop-hdfs modules with the startKdc profile.
 The build script downloads the server, unpacks it, configures it, and runs it.
 The problem is that the URL used,
 http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
 no longer works (it returns 404).
 The suggested patch parameterizes the URL so that it can be set in a single 
 place in the parent pom.xml, and sets it to a working value.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552694#comment-13552694
 ] 

Hudson commented on HADOOP-9097:


Integrated in Hadoop-trunk-Commit #3224 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3224/])
HADOOP-9097. Maven RAT plugin is not checking all source files (tgraves) 
(Revision 1432934)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1432934
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.SecurityInfo
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegationTokenRenewer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* /hadoop/common/trunk/hadoop-common-project/pom.xml
* /hadoop/common/trunk/hadoop-dist/pom.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/resources/distcp-default.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/appendix.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/architecture.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/cli.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/index.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/usage.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/resources/sslConfig.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/main/native/examples/conf/word-part.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/main/native/examples/conf/word.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/main/native/pipes/debug/pipes-default-gdb-commands.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/main/native/pipes/debug/pipes-default-script
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/anonymization/WordList.java
* /hadoop/common/trunk/hadoop-tools/hadoop-tools-dist/pom.xml
* /hadoop/common/trunk/hadoop-tools/pom.xml
* /hadoop/common/trunk/pom.xml


 Maven RAT plugin is not checking all source files
 -

 Key: HADOOP-9097
 URL: https://issues.apache.org/jira/browse/HADOOP-9097
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha, 0.23.5
Reporter: Tom White
Assignee: Thomas Graves
Priority: Critical
 Fix For: 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9097-branch-0.23-entire.patch, 
 HADOOP-9097-branch-0.23-entire.patch, HADOOP-9097-branch-0.23.patch, 
 HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, 
 HADOOP-9097-entire.patch, HADOOP-9097.patch, HADOOP-9097.patch, 
 HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
 HADOOP-9097-remove-entire.sh


 Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
 downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-14 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9097:
--

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

 Maven RAT plugin is not checking all source files
 -

 Key: HADOOP-9097
 URL: https://issues.apache.org/jira/browse/HADOOP-9097
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha, 0.23.5
Reporter: Tom White
Assignee: Thomas Graves
Priority: Critical
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9097-branch-0.23-entire.patch, 
 HADOOP-9097-branch-0.23-entire.patch, HADOOP-9097-branch-0.23.patch, 
 HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, 
 HADOOP-9097-entire.patch, HADOOP-9097.patch, HADOOP-9097.patch, 
 HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
 HADOOP-9097-remove-entire.sh


 Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
 downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9202) test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to the build

2013-01-14 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552772#comment-13552772
 ] 

Robert Joseph Evans commented on HADOOP-9202:
-

The change looks fine to me +1

I'll check it in.  Thanks for finding this.

 test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to 
 the build
 --

 Key: HADOOP-9202
 URL: https://issues.apache.org/jira/browse/HADOOP-9202
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-9202.1.patch


 test-patch.sh tries running mvn eclipse:eclipse after applying the patch.  It 
 runs this before running mvn install.  The mvn eclipse:eclipse command 
 doesn't actually build the code, so if the patch in question is adding a 
 whole new module, then any other modules dependent on finding it in the 
 reactor will fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9202) test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to the build

2013-01-14 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9202:


   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

I put this into trunk

 test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to 
 the build
 --

 Key: HADOOP-9202
 URL: https://issues.apache.org/jira/browse/HADOOP-9202
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0

 Attachments: HADOOP-9202.1.patch


 test-patch.sh tries running mvn eclipse:eclipse after applying the patch.  It 
 runs this before running mvn install.  The mvn eclipse:eclipse command 
 doesn't actually build the code, so if the patch in question is adding a 
 whole new module, then any other modules dependent on finding it in the 
 reactor will fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9202) test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to the build

2013-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552779#comment-13552779
 ] 

Hudson commented on HADOOP-9202:


Integrated in Hadoop-trunk-Commit #3225 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3225/])
HADOOP-9202. test-patch.sh fails during mvn eclipse:eclipse if patch adds a 
new module to the build (Chris Nauroth via bobby) (Revision 1432949)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1432949
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to 
 the build
 --

 Key: HADOOP-9202
 URL: https://issues.apache.org/jira/browse/HADOOP-9202
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0

 Attachments: HADOOP-9202.1.patch


 test-patch.sh tries running mvn eclipse:eclipse after applying the patch.  It 
 runs this before running mvn install.  The mvn eclipse:eclipse command 
 doesn't actually build the code, so if the patch in question is adding a 
 whole new module, then any other modules dependent on finding it in the 
 reactor will fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-14 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552822#comment-13552822
 ] 

Kihwal Lee commented on HADOOP-9205:


Would you elaborate on what is failing and how? We run Oracle/Sun Java 1.7.0_05 
and JNI libraries load fine with only LD_LIBRARY_PATH set. Which JDK are 
you using?

 Java7: path to native libraries should be passed to tests via 
 -Djava.library.path rather than env.LD_LIBRARY_PATH
 -

 Key: HADOOP-9205
 URL: https://issues.apache.org/jira/browse/HADOOP-9205
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9205.patch


 Currently the path to native libraries is passed to unit tests via the 
 environment variable LD_LIBRARY_PATH. This is okay for Java6, but does not 
 work for Java7, since Java7 ignores this environment variable.
 So, to run the tests with the native implementation on Java7, one needs to pass 
 the paths to the native libs via the -Djava.library.path system property rather 
 than the LD_LIBRARY_PATH env variable.
 The suggested patch fixes the problem by setting the paths to the native libs 
 using both LD_LIBRARY_PATH and the -Djava.library.path property. This way the 
 tests work equally on both Java6 and Java7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9181) Set daemon flag for HttpServer's QueuedThreadPool

2013-01-14 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9181:
--

Fix Version/s: 0.23.6

merged to branch-0.23

 Set daemon flag for HttpServer's QueuedThreadPool
 -

 Key: HADOOP-9181
 URL: https://issues.apache.org/jira/browse/HADOOP-9181
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: liang xie
Assignee: liang xie
 Fix For: 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9181.txt


 We hit HBASE-6031 again. After looking into the thread dump, it was caused by 
 the threads from QueuedThreadPool being user threads, not daemon threads, so the 
 HBase shutdown hook was never called and the HBase instance hung.
 Furthermore, I saw the daemon flag being set in the fb-20 branch; let's set it 
 in the trunk codebase as well, it should be safe:)
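
For illustration, a minimal sketch of the kind of change being described, using 
the Jetty 6 (org.mortbay) API that appears in the stack traces earlier in this 
digest; the actual call site inside Hadoop's HttpServer is not shown here:

{code}
import org.mortbay.jetty.Server;
import org.mortbay.thread.QueuedThreadPool;

// Sketch only: marking the pool threads as daemon threads means lingering
// HTTP workers cannot keep the JVM alive, so shutdown hooks still run.
public class DaemonPoolSketch {
  public static void main(String[] args) throws Exception {
    Server server = new Server(50070);           // port is arbitrary here
    QueuedThreadPool pool = new QueuedThreadPool();
    pool.setDaemon(true);                        // workers no longer block JVM exit
    server.setThreadPool(pool);
    server.start();
  }
}
{code}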

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Status: Patch Available  (was: Open)

HADOOP-9202 just got committed.  (Thank you, [~revans2].)  I'm resubmitting the 
v4 patch to Jenkins.

 Hadoop Common creating package-info.java must not depend on sh, at least for 
 Windows
 

 Key: HADOOP-8924
 URL: https://issues.apache.org/jira/browse/HADOOP-8924
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Alejandro Abdelnur
 Fix For: trunk-win

 Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
 HADOOP-8924.3.patch, HADOOP-8924.4.patch, 
 HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
 HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.patch, 
 HADOOP-8924.patch


 Currently, the build process relies on saveVersion.sh to generate 
 package-info.java with a version annotation.  The sh binary may not be 
 available on all developers' machines (e.g. Windows without Cygwin). This 
 issue tracks removal of that dependency in Hadoop Common.
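
For context, a hedged sketch of the pure-Java alternative being tracked here: 
generating package-info.java with a version annotation from inside the build 
(e.g. a Maven plugin) rather than from a shell script. The annotation name and 
fields below are illustrative, not Hadoop's actual ones.

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class PackageInfoGenerator {
  /** Writes a package-info.java carrying build/version metadata. */
  public static void write(Path outDir, String version, String revision,
                           String user, String date, String srcChecksum)
      throws IOException {
    // @VersionAnnotation is a placeholder name for illustration only.
    String src =
        "@VersionAnnotation(version=\"" + version + "\", revision=\"" + revision
        + "\",\n    user=\"" + user + "\", date=\"" + date
        + "\", srcChecksum=\"" + srcChecksum + "\")\n"
        + "package org.apache.hadoop;\n";
    Files.createDirectories(outDir);
    Files.write(outDir.resolve("package-info.java"),
                src.getBytes(StandardCharsets.UTF_8));
  }
}
{code}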

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13552865#comment-13552865
 ] 

Hadoop QA commented on HADOOP-8924:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564579/HADOOP-8924.4.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2039//console

This message is automatically generated.

 Hadoop Common creating package-info.java must not depend on sh, at least for 
 Windows
 

 Key: HADOOP-8924
 URL: https://issues.apache.org/jira/browse/HADOOP-8924
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Alejandro Abdelnur
 Fix For: trunk-win

 Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
 HADOOP-8924.3.patch, HADOOP-8924.4.patch, 
 HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
 HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.patch, 
 HADOOP-8924.patch


 Currently, the build process relies on saveVersion.sh to generate 
 package-info.java with a version annotation.  The sh binary may not be 
 available on all developers' machines (e.g. Windows without Cygwin). This 
 issue tracks removal of that dependency in Hadoop Common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Status: Open  (was: Patch Available)

 Hadoop Common creating package-info.java must not depend on sh, at least for 
 Windows
 

 Key: HADOOP-8924
 URL: https://issues.apache.org/jira/browse/HADOOP-8924
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Alejandro Abdelnur
 Fix For: trunk-win

 Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
 HADOOP-8924.3.patch, HADOOP-8924.4.patch, 
 HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
 HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.patch, 
 HADOOP-8924.patch


 Currently, the build process relies on saveVersion.sh to generate 
 package-info.java with a version annotation.  The sh binary may not be 
 available on all developers' machines (e.g. Windows without Cygwin). This 
 issue tracks removal of that dependency in Hadoop Common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Attachment: HADOOP-8924-branch-trunk-win.5.patch

 Hadoop Common creating package-info.java must not depend on sh, at least for 
 Windows
 

 Key: HADOOP-8924
 URL: https://issues.apache.org/jira/browse/HADOOP-8924
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Alejandro Abdelnur
 Fix For: trunk-win

 Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
 HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
 HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
 HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
 HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch


 Currently, the build process relies on saveVersion.sh to generate 
 package-info.java with a version annotation.  The sh binary may not be 
 available on all developers' machines (e.g. Windows without Cygwin). This 
 issue tracks removal of that dependency in Hadoop Common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Attachment: HADOOP-8924.5.patch

I'm attaching version 5 of the patch to rebase it against recent trunk changes.

 Hadoop Common creating package-info.java must not depend on sh, at least for 
 Windows
 

 Key: HADOOP-8924
 URL: https://issues.apache.org/jira/browse/HADOOP-8924
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Alejandro Abdelnur
 Fix For: trunk-win

 Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
 HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
 HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
 HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
 HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch


 Currently, the build process relies on saveVersion.sh to generate 
 package-info.java with a version annotation.  The sh binary may not be 
 available on all developers' machines (e.g. Windows without Cygwin). This 
 issue tracks removal of that dependency in Hadoop Common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Status: Patch Available  (was: Open)

 Hadoop Common creating package-info.java must not depend on sh, at least for 
 Windows
 

 Key: HADOOP-8924
 URL: https://issues.apache.org/jira/browse/HADOOP-8924
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Alejandro Abdelnur
 Fix For: trunk-win

 Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
 HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
 HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
 HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
 HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch


 Currently, the build process relies on saveVersion.sh to generate 
 package-info.java with a version annotation.  The sh binary may not be 
 available on all developers' machines (e.g. Windows without Cygwin). This 
 issue tracks removal of that dependency in Hadoop Common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9197) Some little confusion in official documentation

2013-01-14 Thread Glen Mazza (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13552896#comment-13552896
 ] 

Glen Mazza commented on HADOOP-9197:


I don't see the problem--who says the documentation for different versions of 
Hadoop must be the same?  That's as nonsensical as saying the source code has 
to be identical across versions.  What's the purpose of versions if you can't 
improve the source code and documentation over time?  It doesn't matter that 
version 1.0 says X, version 2.0 says Y, and version 3.0 says Z--it only 
matters if version 3.0's Z is incorrect.  Jason needs to pick a single version 
of Hadoop he wishes to work on, focus on that version's documentation, and 
ignore the others.  If he finds bugs in that version's docs, he should submit 
a JIRA about them--not a JIRA complaining that the documentation, like the 
source code, has changed across versions.

 Some little confusion in official documentation
 ---

 Key: HADOOP-9197
 URL: https://issues.apache.org/jira/browse/HADOOP-9197
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Jason Lee
Priority: Trivial
   Original Estimate: 336h
  Remaining Estimate: 336h

 I am just a newbie to Hadoop; recently I have been self-studying it. When 
 reading the official documentation, I find it a little confusing for 
 beginners like me. For example, look at the documents about the HDFS shell 
 guide:
 In 0.17, the prefix of the HDFS shell is hadoop dfs:
 http://hadoop.apache.org/docs/r0.17.2/hdfs_shell.html
 In 0.19, the prefix of the HDFS shell is hadoop fs:
 http://hadoop.apache.org/docs/r0.19.1/hdfs_shell.html#lsr
 In 1.0.4, the prefix of the HDFS shell is hdfs dfs:
 http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html#ls
 As a beginner, I think reading them is painful.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9194) RPC Support for QoS

2013-01-14 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13552921#comment-13552921
 ] 

Luke Lu commented on HADOOP-9194:
-

We can probably make the response payload length optional (0 means streaming).

 RPC Support for QoS
 ---

 Key: HADOOP-9194
 URL: https://issues.apache.org/jira/browse/HADOOP-9194
 Project: Hadoop Common
  Issue Type: New Feature
  Components: ipc
Affects Versions: 2.0.2-alpha
Reporter: Luke Lu

 One of the next frontiers of Hadoop performance is QoS (Quality of Service). 
 We need QoS support to fight the inevitable buffer bloat (including various 
 queues, which are probably necessary for throughput) in our software stack. 
 This is important for mixed workload with different latency and throughput 
 requirements (e.g. OLTP vs OLAP, batch and even compaction I/O) against the 
 same DFS.
 Any potential bottleneck will need to be managed by QoS mechanisms, starting 
 with RPC. 
 How about adding a one byte DS (differentiated services) field (a la the 
 6-bit DS field in IP header) in the RPC header to facilitate the QoS 
 mechanisms (in separate JIRAs)? The byte at a fixed offset (how about 0?) of 
 the header is helpful for implementing high performance QoS mechanisms in 
 switches (software or hardware) and servers with minimum decoding effort.
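
To illustrate the proposal (this is a hypothetical wire layout, not Hadoop's 
actual RPC header): a DS byte at fixed offset 0 lets a switch or server 
classify a call by reading a single byte, before decoding anything else.

{code}
import java.nio.ByteBuffer;

public class DsHeaderSketch {
  // Hypothetical header: [DS byte][4-byte call id][4-byte payload length]
  static ByteBuffer encodeHeader(byte dsClass, int callId, int payloadLen) {
    ByteBuffer buf = ByteBuffer.allocate(9);
    buf.put(dsClass);       // offset 0: differentiated-services class
    buf.putInt(callId);
    buf.putInt(payloadLen); // 0 could mean "streaming", per the comment above
    buf.flip();
    return buf;
  }

  // QoS classification: one byte read, no further decoding needed.
  static byte classify(ByteBuffer header) {
    return header.get(0);
  }
}
{code}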

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13552931#comment-13552931
 ] 

Hadoop QA commented on HADOOP-8924:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564737/HADOOP-8924.5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 2 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-maven-plugins 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common:

  org.apache.hadoop.ha.TestZKFailoverController

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2040//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2040//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2040//console

This message is automatically generated.

 Hadoop Common creating package-info.java must not depend on sh, at least for 
 Windows
 

 Key: HADOOP-8924
 URL: https://issues.apache.org/jira/browse/HADOOP-8924
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Alejandro Abdelnur
 Fix For: trunk-win

 Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
 HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
 HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
 HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
 HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch


 Currently, the build process relies on saveVersion.sh to generate 
 package-info.java with a version annotation.  The sh binary may not be 
 available on all developers' machines (e.g. Windows without Cygwin). This 
 issue tracks removal of that dependency in Hadoop Common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9206) Setting up a Single Node Cluster instructions need improvement in 0.23.5/2.0.2-alpha branches

2013-01-14 Thread Glen Mazza (JIRA)
Glen Mazza created HADOOP-9206:
--

 Summary: Setting up a Single Node Cluster instructions need 
improvement in 0.23.5/2.0.2-alpha branches
 Key: HADOOP-9206
 URL: https://issues.apache.org/jira/browse/HADOOP-9206
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.23.5, 2.0.2-alpha
Reporter: Glen Mazza


Hi, in contrast to the easy-to-follow 1.0.4 instructions 
(http://hadoop.apache.org/docs/r1.0.4/single_node_setup.html) the 0.23.5 and 
2.0.2-alpha instructions 
(http://hadoop.apache.org/docs/r2.0.2-alpha/hadoop-yarn/hadoop-yarn-site/SingleCluster.html)
 need more clarification -- it seems to be written for people who already know 
and understand hadoop.  In particular, these points need clarification:

1.) Text: You should be able to obtain the MapReduce tarball from the release.

Question: What is the MapReduce tarball?  What is its name?  I don't see such 
an object within the hadoop-0.23.5.tar.gz download.

2.) Quote: NOTE: You will need protoc installed of version 2.4.1 or greater.

Protoc doesn't have a website you can link to (it's just mentioned offhand when 
you Google it) -- is it really the case today that Hadoop has a dependency on 
such a minor project?  At any rate, if you can have a link of where one goes to 
get/install Protoc that would be good.

3.) Quote: Assuming you have installed hadoop-common/hadoop-hdfs and exported 
$HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME, untar hadoop mapreduce tarball and set 
environment variable $HADOOP_MAPRED_HOME to the untarred directory.

I'm not sure what you mean by the forward slashes: hadoop-common/hadoop-hdfs 
and $HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME -- do you mean *and* (install both) 
or *or* (just install one of the two)?  This needs clarification--please 
replace the forward slash with what you're trying to say.  The audience here 
is the complete newbie, and they've been brought to this page from here: 
http://hadoop.apache.org/docs/r0.23.5/ (same with r2.0.2-alpha/) (quote: 
Getting Started - The Hadoop documentation includes the information you need 
to get started using Hadoop. Begin with the Single Node Setup which shows you 
how to set up a single-node Hadoop installation.), they've downloaded 
hadoop-0.23.5.tar.gz and want to know what to do next.  Why are there 
potentially two applications -- hadoop-common and hadoop-hdfs -- and not just 
one?  (The download doesn't appear to have two separate apps.)  If there is 
indeed just one app, can we remove the other from the above text to avoid 
confusion?

Again, I just downloaded hadoop-0.23.5.tar.gz -- do I need to download more?  
If so, let us know in the docs here.

Also, the fragment: Assuming you have installed hadoop-common/hadoop-hdfs...  
No, I haven't, that's what *this* page is supposed to explain to me how to do 
-- how do I install these two (or just one of these two)?

Also, what do I set $HADOOP_COMMON_HOME and/or $HADOOP_HDFS_HOME to?

4.) Quote: NOTE: The following instructions assume you have hdfs running.  
No, I don't--how do I do this?  Again, this page is supposed to teach me that.

5.) Quote: To start the ResourceManager and NodeManager, you will have to 
update the configs. Assuming your $HADOOP_CONF_DIR is the configuration 
directory...

Could you clarify here what the configuration directory is? It doesn't exist 
in the 0.23.5 download.  (I just see bin, etc, include, lib, libexec, sbin, 
share folders but no conf one.)

6.) Quote: Assuming that the environment variables $HADOOP_COMMON_HOME, 
$HADOOP_HDFS_HOME, $HADOO_MAPRED_HOME, $YARN_HOME, $JAVA_HOME and 
$HADOOP_CONF_DIR have been set appropriately.

We'll need to know what to set YARN_HOME to here.

Thanks!
Glen

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9204) fix apacheds distribution download link URL

2013-01-14 Thread Ivan A. Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553031#comment-13553031
 ] 

Ivan A. Veselovsky commented on HADOOP-9204:


The failed tests in the patch verification are not in any way related to the 
patch; they likely failed due to flakiness.
The patch does not introduce any new tests because it should be tested with 
the existing tests that use the downloaded apacheds server.

 fix apacheds distribution download link URL
 ---

 Key: HADOOP-9204
 URL: https://issues.apache.org/jira/browse/HADOOP-9204
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9204-trunk.patch


 Apacheds server is used in some security tests in Hadoop-common, hadoop-hdfs 
 modules with startKdc profile.
 The build script downloads the server, unpacks it, configures, and runs.
 The problem is that the URL used 
 (http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz)
 no longer works (returns 404).
 The suggested patch parameterizes the URL so that it can be set in a single 
 place in the parent pom.xml, and sets it to a workable value.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-14 Thread Ivan A. Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553038#comment-13553038
 ] 

Ivan A. Veselovsky commented on HADOOP-9205:


I observed the problem with Oracle's JDK 1.7.0_10.
The tests are not failing; they just use the Java implementation instead of 
the native one, even if the -Pnative profile is enabled.
Links that seem to be relevant here are:
http://www.oracle.com/technetwork/java/javase/jdk7-relnotes-418459.html
https://blogs.oracle.com/darcy/entry/purging_ld_library_path
I will investigate whether the problem is reproducible with 1.7.0_05.
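
A small diagnostic sketch (mine, not part of the patch) that makes the silent 
fallback visible: it prints what the JVM will actually search, then tries to 
load the native library the way Hadoop's NativeCodeLoader does.

{code}
public class NativeLibCheck {
  public static void main(String[] args) {
    System.out.println("java.library.path = "
        + System.getProperty("java.library.path"));
    System.out.println("LD_LIBRARY_PATH   = "
        + System.getenv("LD_LIBRARY_PATH"));
    try {
      // Looks for libhadoop.so on java.library.path only.
      System.loadLibrary("hadoop");
      System.out.println("native hadoop library loaded");
    } catch (UnsatisfiedLinkError e) {
      System.out.println("falling back to the pure-Java implementation: "
          + e.getMessage());
    }
  }
}
{code}

Run it once with only LD_LIBRARY_PATH exported and once with 
-Djava.library.path=<dir> to see the Java6 vs. Java7 difference.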

 Java7: path to native libraries should be passed to tests via 
 -Djava.library.path rather than env.LD_LIBRARY_PATH
 -

 Key: HADOOP-9205
 URL: https://issues.apache.org/jira/browse/HADOOP-9205
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9205.patch


 Currently the path to native libraries is passed to unit tests via the 
 environment variable LD_LIBRARY_PATH. This is okay for Java6, but does not 
 work for Java7, since Java7 ignores this environment variable.
 So, to run the tests with the native implementation on Java7, one needs to 
 pass the paths to the native libs via the -Djava.library.path system property 
 rather than the LD_LIBRARY_PATH env variable.
 The suggested patch fixes the problem by setting the paths to the native libs 
 using both LD_LIBRARY_PATH and the -Djava.library.path property. This way the 
 tests work equally on both Java6 and Java7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9206) Setting up a Single Node Cluster instructions need improvement in 0.23.5/2.0.2-alpha branches

2013-01-14 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553092#comment-13553092
 ] 

Andy Isaacson commented on HADOOP-9206:
---

Note that the docs are being converted from XDOC to APT; see HADOOP-8427 and 
HADOOP-9190. So please convert {{single_node_setup.xml}} to APT before editing 
the content, if at all possible.

 Setting up a Single Node Cluster instructions need improvement in 
 0.23.5/2.0.2-alpha branches
 ---

 Key: HADOOP-9206
 URL: https://issues.apache.org/jira/browse/HADOOP-9206
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.0.2-alpha, 0.23.5
Reporter: Glen Mazza

 Hi, in contrast to the easy-to-follow 1.0.4 instructions 
 (http://hadoop.apache.org/docs/r1.0.4/single_node_setup.html) the 0.23.5 and 
 2.0.2-alpha instructions 
 (http://hadoop.apache.org/docs/r2.0.2-alpha/hadoop-yarn/hadoop-yarn-site/SingleCluster.html)
  need more clarification -- it seems to be written for people who already 
 know and understand hadoop.  In particular, these points need clarification:
 1.) Text: You should be able to obtain the MapReduce tarball from the 
 release.
 Question: What is the MapReduce tarball?  What is its name?  I don't see such 
 an object within the hadoop-0.23.5.tar.gz download.
 2.) Quote: NOTE: You will need protoc installed of version 2.4.1 or greater.
 Protoc doesn't have a website you can link to (it's just mentioned offhand 
 when you Google it) -- is it really the case today that Hadoop has a 
 dependency on such a minor project?  At any rate, if you can have a link of 
 where one goes to get/install Protoc that would be good.
 3.) Quote: Assuming you have installed hadoop-common/hadoop-hdfs and 
 exported $HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME, untar hadoop mapreduce 
 tarball and set environment variable $HADOOP_MAPRED_HOME to the untarred 
 directory.
 I'm not sure what you mean by the forward slashes: hadoop-common/hadoop-hdfs 
 and $HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME -- do you mean *and* (install both) 
 or *or* (just install one of the two)?  This needs clarification--please 
 replace the forward slash with what you're trying to say.  The audience here 
 is the complete newbie, and they've been brought to this page from 
 here: http://hadoop.apache.org/docs/r0.23.5/ (same with r2.0.2-alpha/) 
 (quote: Getting Started - The Hadoop documentation includes the information 
 you need to get started using Hadoop. Begin with the Single Node Setup which 
 shows you how to set up a single-node Hadoop installation.), they've 
 downloaded hadoop-0.23.5.tar.gz and want to know what to do next.  Why are 
 there potentially two applications -- hadoop-common and hadoop-hdfs -- and 
 not just one?  (The download doesn't appear to have two separate apps.)  If 
 there is indeed just one app, can we remove the other from the above text to 
 avoid confusion?
 Again, I just downloaded hadoop-0.23.5.tar.gz -- do I need to download more?  
 If so, let us know in the docs here.
 Also, the fragment: Assuming you have installed 
 hadoop-common/hadoop-hdfs...  No, I haven't, that's what *this* page is 
 supposed to explain to me how to do -- how do I install these two (or just 
 one of these two)?
 Also, what do I set $HADOOP_COMMON_HOME and/or $HADOOP_HDFS_HOME to?
 4.) Quote: NOTE: The following instructions assume you have hdfs running.  
 No, I don't--how do I do this?  Again, this page is supposed to teach me that.
 5.) Quote: To start the ResourceManager and NodeManager, you will have to 
 update the configs. Assuming your $HADOOP_CONF_DIR is the configuration 
 directory...
 Could you clarify here what the configuration directory is? It doesn't 
 exist in the 0.23.5 download.  (I just see bin, etc, include, lib, libexec, 
 sbin, share folders but no conf one.)
 6.) Quote: Assuming that the environment variables $HADOOP_COMMON_HOME, 
 $HADOOP_HDFS_HOME, $HADOO_MAPRED_HOME, $YARN_HOME, $JAVA_HOME and 
 $HADOOP_CONF_DIR have been set appropriately.
 We'll need to know what to set YARN_HOME to here.
 Thanks!
 Glen

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8712) Change default hadoop.security.group.mapping

2013-01-14 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553093#comment-13553093
 ] 

Todd Lipcon commented on HADOOP-8712:
-

Anything holding this up? Looks ready to go as of late August. I'll commit it 
based on the earlier +1s unless I hear any objections.

 Change default hadoop.security.group.mapping
 

 Key: HADOOP-8712
 URL: https://issues.apache.org/jira/browse/HADOOP-8712
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Robert Parker
Assignee: Robert Parker
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8712-v1.patch, HADOOP-8712-v2.patch


 Change the hadoop.security.group.mapping in core-site to 
 JniBasedUnixGroupsNetgroupMappingWithFallback

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9178) src/main/conf is missing hadoop-policy.xml

2013-01-14 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553099#comment-13553099
 ] 

Eli Collins commented on HADOOP-9178:
-

bq. I removed the deprecated ones - should I leave them in?

Removing them is correct. Two of them already have deprecations added in 
ConfigUtil.java; there's no equivalent deprecation for 
security.inter.tracker.protocol.acl (security.containermanager.protocol.acl is 
closest, but since it's YARN-only it probably doesn't need to be added as a 
deprecation here).

Patch looks good, thanks for all the cleanup!  One nit: three of the configs 
(security.ha.service.protocol.acl, security.zkfc.protocol.acl, and 
security.qjournal.service.protocol.acl) are in the yarn section of 
hadoop-policy.xml; please move them up next to the hdfs configs, and also fix 
the formatting of security.mrhs.client.protocol.acl.

 src/main/conf is missing hadoop-policy.xml
 --

 Key: HADOOP-9178
 URL: https://issues.apache.org/jira/browse/HADOOP-9178
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9178-1.patch, HADOOP-9178-1.patch, 
 HADOOP-9178.patch


 src/main/conf contains hadoop-env.sh and core-site.xml, but is missing 
 hadoop-policy.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8712) Change default hadoop.security.group.mapping

2013-01-14 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553102#comment-13553102
 ] 

Eli Collins commented on HADOOP-8712:
-

+1 from me, I thought this had gone in already.

 Change default hadoop.security.group.mapping
 

 Key: HADOOP-8712
 URL: https://issues.apache.org/jira/browse/HADOOP-8712
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Robert Parker
Assignee: Robert Parker
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8712-v1.patch, HADOOP-8712-v2.patch


 Change the hadoop.security.group.mapping in core-site to 
 JniBasedUnixGroupsNetgroupMappingWithFallback

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8427) Convert Forrest docs to APT, incremental

2013-01-14 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553110#comment-13553110
 ] 

Eli Collins commented on HADOOP-8427:
-

bq. In fact, I suppose the right thing is simply to leave xdocs/*.xml in place 
(unused), adding apt.vm versions as they're converted, then deleting the unused 
.xml after they are completely redundant. I'll post a new patch to that effect.

Is there a jira that tracks the remaining work? Noticed there's still an xdocs 
directory.


 Convert Forrest docs to APT, incremental
 

 Key: HADOOP-8427
 URL: https://issues.apache.org/jira/browse/HADOOP-8427
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Andy Isaacson
  Labels: newbie
 Fix For: 2.0.3-alpha, 0.23.6

 Attachments: hadoop8427-1.txt, hadoop8427-3.txt, hadoop8427-4.txt, 
 hadoop8427-5.txt, HADOOP-8427.sh, hadoop8427.txt


 Some of the forrest docs content in src/docs/src/documentation/content/xdocs 
 has not yet been converted to APT and moved to src/site/apt. Let's convert 
 the forrest docs that haven't been converted yet to new APT content in 
 hadoop-common/src/site/apt (and link the new content into 
 hadoop-project/src/site/apt/index.apt.vm) and remove all forrest dependencies.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553127#comment-13553127
 ] 

Chris Nauroth commented on HADOOP-8924:
---

The release audit failures are unrelated to this patch:

{noformat}
 !? 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsdatanodes.odg
 !? 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsarchitecture.odg
Lines that start with ? in the release audit report indicate files that do 
not have an Apache license header.
{noformat}

It seems like rat can't figure out that these are binary files.  I haven't been 
able to repro the problem on any of my own machines.  This seems to be causing 
a problem for some other patches too.

The failure in {{TestZKFailoverController}} is unrelated.  This test has been 
flaky lately, failing on a few other patches.


 Hadoop Common creating package-info.java must not depend on sh, at least for 
 Windows
 

 Key: HADOOP-8924
 URL: https://issues.apache.org/jira/browse/HADOOP-8924
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Alejandro Abdelnur
 Fix For: trunk-win

 Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
 HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
 HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
 HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
 HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch


 Currently, the build process relies on saveVersion.sh to generate 
 package-info.java with a version annotation.  The sh binary may not be 
 available on all developers' machines (e.g. Windows without Cygwin). This 
 issue tracks removal of that dependency in Hadoop Common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-14 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553131#comment-13553131
 ] 

Matt Foley commented on HADOOP-8924:


Hi Chris and Alejandro, it's fine with me to test on Mac, Win, and Ubuntu, for 
consistent MD5, and I agree it isn't strictly necessary to have the same MD5 as 
saveVersion.sh.

But I'm concerned about:
{quote}
One thing I've forgot to mention is that currently (and with this patch) MD5 
are done only for the sources in common and in yarn. And the VersionInfo from 
common is used in hdfs. IMO, we should either have a global MD5 & VersionInfo 
for the whole project or one per module. This is out of scope of this JIRA, 
just wanted to bring it up.
{quote}

I didn't notice because I was focusing on the Hadoop-1 version where I'm more 
familiar with the env.  In Hadoop-1 there is simply one checksum for the whole 
project.

Absent that, I think the checksum for hdfs in Hadoop-2 should be created by 
summing the MD5 for the hdfs sub-project sources, so each sub-project sums its 
own sources (as do common and yarn).  The most important use of the checksum is 
to enforce the constraint that all the servers talking to each other be running 
code compiled from the same source tree.  And that is clearly important to 
HDFS, and won't be enforced with the current scheme.

Agreed -- if this behavior was already in the Hadoop-2 code we don't _have_ to 
fix it here, but if it would be a simple change, I would support fixing it 
here.  If not, please open a bug for it.  Thanks.

 Hadoop Common creating package-info.java must not depend on sh, at least for 
 Windows
 

 Key: HADOOP-8924
 URL: https://issues.apache.org/jira/browse/HADOOP-8924
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Alejandro Abdelnur
 Fix For: trunk-win

 Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
 HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
 HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
 HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
 HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch


 Currently, the build process relies on saveVersion.sh to generate 
 package-info.java with a version annotation.  The sh binary may not be 
 available on all developers' machines (e.g. Windows without Cygwin). This 
 issue tracks removal of that dependency in Hadoop Common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9207) version info source checksum does not include all source files

2013-01-14 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-9207:
-

 Summary: version info source checksum does not include all source 
files
 Key: HADOOP-9207
 URL: https://issues.apache.org/jira/browse/HADOOP-9207
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth


The build process takes an MD5 checksum of the source files in Common and YARN. 
 The HDFS version info command prints the checksum from Common.  The YARN 
version info command prints the checksum from YARN.  This is incomplete in that 
the HDFS source code is never included in the checksum, and 2 different YARN 
builds with the same YARN code but different Common code would have the same 
checksum.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9207) version info source checksum does not include all source files

2013-01-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553154#comment-13553154
 ] 

Chris Nauroth commented on HADOOP-9207:
---

See HADOOP-8924 for some earlier discussion on this topic.  We could either 
calculate a single MD5 checksum across the whole project and use it for both 
HDFS and YARN, or calculate separate checksums per module, but with inclusion 
of the Common code in each one.
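
As an illustration of the single-global-checksum option (a sketch under an 
assumed file layout, not the build's actual implementation): walk every 
module's sources in a deterministic order and feed them into one digest, so 
HDFS, YARN, and Common all report the same value.

{code}
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.stream.Stream;

public class SourceChecksum {
  public static String md5OfTree(Path root) throws Exception {
    MessageDigest md = MessageDigest.getInstance("MD5");
    try (Stream<Path> files = Files.walk(root)) {
      files.filter(p -> p.toString().endsWith(".java"))
           .sorted() // fixed order => reproducible digest across platforms
           .forEach(p -> {
             try {
               md.update(Files.readAllBytes(p));
             } catch (IOException e) {
               throw new UncheckedIOException(e);
             }
           });
    }
    StringBuilder sb = new StringBuilder();
    for (byte b : md.digest()) {
      sb.append(String.format("%02x", b));
    }
    return sb.toString();
  }
}
{code}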


 version info source checksum does not include all source files
 --

 Key: HADOOP-9207
 URL: https://issues.apache.org/jira/browse/HADOOP-9207
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth

 The build process takes an MD5 checksum of the source files in Common and 
 YARN.  The HDFS version info command prints the checksum from Common.  The 
 YARN version info command prints the checksum from YARN.  This is incomplete 
 in that the HDFS source code is never included in the checksum, and 2 
 different YARN builds with the same YARN code but different Common code would 
 have the same checksum.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553160#comment-13553160
 ] 

Chris Nauroth commented on HADOOP-8924:
---

Thanks, Matt.  I just created HADOOP-9207 to address it separately.

{quote}
The most important use of the checksum is to enforce the constraint that all 
the servers talking to each other be running code compiled from the same source 
tree.
{quote}

Considering this, I think the best solution is to calculate a single global 
checksum for the whole project and use the same value across all modules.  
That's going to take some restructuring, and I'd prefer to address it in a 
separate jira instead of holding up this one.

Alejandro, I think this is all set as long as you are OK with the version 5 
patch and my justifications on the tests and release audit warnings.  Let me 
know what you think.


 Hadoop Common creating package-info.java must not depend on sh, at least for 
 Windows
 

 Key: HADOOP-8924
 URL: https://issues.apache.org/jira/browse/HADOOP-8924
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Alejandro Abdelnur
 Fix For: trunk-win

 Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
 HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
 HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
 HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
 HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch


 Currently, the build process relies on saveVersion.sh to generate 
 package-info.java with a version annotation.  The sh binary may not be 
 available on all developers' machines (e.g. Windows without Cygwin). This 
 issue tracks removal of that dependency in Hadoop Common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9207) version info source checksum does not include all source files

2013-01-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553161#comment-13553161
 ] 

Chris Nauroth commented on HADOOP-9207:
---

Quoting an earlier comment from HADOOP-8924:

{quote}
The most important use of the checksum is to enforce the constraint that all 
the servers talking to each other be running code compiled from the same source 
tree.
{quote}

Considering this, I think the best solution is to calculate a single global 
checksum for the whole project and use the same value across all modules.


 version info source checksum does not include all source files
 --

 Key: HADOOP-9207
 URL: https://issues.apache.org/jira/browse/HADOOP-9207
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth

 The build process takes an MD5 checksum of the source files in Common and 
 YARN.  The HDFS version info command prints the checksum from Common.  The 
 YARN version info command prints the checksum from YARN.  This is incomplete 
 in that the HDFS source code is never included in the checksum, and 2 
 different YARN builds with the same YARN code but different Common code would 
 have the same checksum.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8274) In pseudo or cluster model under Cygwin, tasktracker can not create a new job because of symlink problem.

2013-01-14 Thread FKorning (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553191#comment-13553191
 ] 

FKorning commented on HADOOP-8274:
--

Yes, you'll need to make LinkedFile recursively traverse through symlinks.
I just did a quick hack to get it to resolve the last basename as a link.
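
A hedged sketch of what "recursively traverse through symlinks" could look 
like: resolve every component of the path parent-first, not just the final 
basename. resolveOneLink() is a stand-in for whatever reads a single Cygwin 
symlink; it is assumed here, not taken from the patch.

{code}
import java.io.File;

public class SymlinkResolver {
  /** Resolve each path component, parents before children. */
  public static File resolveRecursively(File f) {
    File parent = f.getParentFile();
    if (parent == null) {
      return resolveOneLink(f);
    }
    File resolvedParent = resolveRecursively(parent);
    return resolveOneLink(new File(resolvedParent, f.getName()));
  }

  // Hypothetical: if f is a (Cygwin) symlink, return its target;
  // a real implementation would read the link file's contents.
  private static File resolveOneLink(File f) {
    return f; // placeholder
  }
}
{code}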


 In pseudo or cluster model under Cygwin, tasktracker can not create a new job 
 because of symlink problem.
 -

 Key: HADOOP-8274
 URL: https://issues.apache.org/jira/browse/HADOOP-8274
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.205.0, 1.0.0, 1.0.1, 0.22.0
 Environment: windows7+cygwin 1.7.11-1+jdk1.6.0_31+hadoop 1.0.0
Reporter: tim.wu

 The standalone model is ok. But, in pseudo or cluster model, it always throw 
 errors, even I just run wordcount example.
 The HDFS works fine, but tasktracker can not create threads(jvm) for new job. 
  It is empty under /logs/userlogs/job-/attempt-/.
 The reason looks like that in windows, Java can not recognize a symlink of 
 folder as a folder. 
 The detail description is as following,
 ==
 First, the error log of tasktracker is like:
 ==
 12/03/28 14:35:13 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: 
 jvm_201203280212_0005_m_-1386636958
 12/03/28 14:35:13 INFO mapred.JvmManager: JVM Runner 
 jvm_201203280212_0005_m_-1386636958 spawned.
 12/03/28 14:35:17 INFO mapred.JvmManager: JVM Not killed 
 jvm_201203280212_0005_m_-1386636958 but just removed
 12/03/28 14:35:17 INFO mapred.JvmManager: JVM : 
 jvm_201203280212_0005_m_-1386636958 exited with exit code -1. Number of tasks 
 it ran: 0
 12/03/28 14:35:17 WARN mapred.TaskRunner: 
 attempt_201203280212_0005_m_02_0 : Child Error
 java.io.IOException: Task process exit with nonzero status of -1.
 at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
 12/03/28 14:35:21 INFO mapred.TaskTracker: addFreeSlot : current free slots : 
 2
 12/03/28 14:35:24 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): 
 attempt_201203280212_0005_m_02_1 task's state:UNASSIGNED
 12/03/28 14:35:24 INFO mapred.TaskTracker: Trying to launch : 
 attempt_201203280212_0005_m_02_1 which needs 1 slots
 12/03/28 14:35:24 INFO mapred.TaskTracker: In TaskLauncher, current free 
 slots : 2 and trying to launch attempt_201203280212_0005_m_02_1 which 
 needs 1 slots
 12/03/28 14:35:24 WARN mapred.TaskLog: Failed to retrieve stdout log for 
 task: attempt_201203280212_0005_m_02_0
 java.io.FileNotFoundException: 
 D:\cygwin\home\timwu\hadoop-1.0.0\logs\userlogs\job_201203280212_0005\attempt_201203280212_0005_m_02_0\log.index
  (The system cannot find the path specified)
 at java.io.FileInputStream.open(Native Method)
 at java.io.FileInputStream.init(FileInputStream.java:120)
 at 
 org.apache.hadoop.io.SecureIOUtils.openForRead(SecureIOUtils.java:102)
 at 
 org.apache.hadoop.mapred.TaskLog.getAllLogsFileDetails(TaskLog.java:188)
 at org.apache.hadoop.mapred.TaskLog$Reader.init(TaskLog.java:423)
 at 
 org.apache.hadoop.mapred.TaskLogServlet.printTaskLog(TaskLogServlet.java:81)
 at 
 org.apache.hadoop.mapred.TaskLogServlet.doGet(TaskLogServlet.java:296)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
 at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
 at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:835)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
 at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
 at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
 at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
 at 
 org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
 at 
 org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
 at 
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
 at org.mortbay.jetty.Server.handle(Server.java:326)
 at 
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
 at 
 

[jira] [Commented] (HADOOP-9079) LocalDirAllocator throws ArithmeticException

2013-01-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553208#comment-13553208
 ] 

Ted Yu commented on HADOOP-9079:


{code}
  int numDirs = localDirs.length;
...
long[] availableOnDisk = new long[dirDF.length];
...
while (numDirsSearched < numDirs && returnPath == null) {
...
  if (returnPath == null) {
    totalAvailable -= availableOnDisk[dir];
    availableOnDisk[dir] = 0; // skip this disk
    numDirsSearched++;
  }
{code}
numDirs is derived from localDirs.length, but the size of availableOnDisk is 
governed by dirDF.length.
Should the loop condition be the following instead?
{code}
while (numDirsSearched < dirDF.length && returnPath == null) {
{code}

 LocalDirAllocator throws ArithmeticException
 

 Key: HADOOP-9079
 URL: https://issues.apache.org/jira/browse/HADOOP-9079
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Attachments: trunk-9079.patch


 2012-11-19 22:07:41,709 WARN  [IPC Server handler 0 on 38671] 
 nodemanager.NMAuditLogger(150): USER=UnknownUser  IP=  
 OPERATION=Stop Container Request  TARGET=ContainerManagerImpl  
 RESULT=FAILURE  DESCRIPTION=Trying to stop unknown container!  
 APPID=application_1353391620476_0001
 CONTAINERID=container_1353391620476_0001_01_10
 java.lang.ArithmeticException: / by zero
   at 
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:368)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
   at 
 org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:263)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:849)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9178) src/main/conf is missing hadoop-policy.xml

2013-01-14 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated HADOOP-9178:
---

Attachment: HADOOP-9178-2.patch

 src/main/conf is missing hadoop-policy.xml
 --

 Key: HADOOP-9178
 URL: https://issues.apache.org/jira/browse/HADOOP-9178
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9178-1.patch, HADOOP-9178-1.patch, 
 HADOOP-9178-2.patch, HADOOP-9178.patch


 src/main/conf contains hadoop-env.sh and core-site.xml, but is missing 
 hadoop-policy.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-14 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553247#comment-13553247
 ] 

Todd Lipcon commented on HADOOP-9097:
-

This seems to be flagging two files as not having licenses:

{quote}
 !? 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsdatanodes.odg
 !? 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsarchitecture.odg
Lines that start with ? in the release audit report indicate files that do 
not have an Apache license header.
{quote}

(seen in PreCommit-HDFS #3835)

 Maven RAT plugin is not checking all source files
 -

 Key: HADOOP-9097
 URL: https://issues.apache.org/jira/browse/HADOOP-9097
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha, 0.23.5
Reporter: Tom White
Assignee: Thomas Graves
Priority: Critical
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9097-branch-0.23-entire.patch, 
 HADOOP-9097-branch-0.23-entire.patch, HADOOP-9097-branch-0.23.patch, 
 HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, 
 HADOOP-9097-entire.patch, HADOOP-9097.patch, HADOOP-9097.patch, 
 HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
 HADOOP-9097-remove-entire.sh


 Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
 downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9178) src/main/conf is missing hadoop-policy.xml

2013-01-14 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553252#comment-13553252
 ] 

Sandy Ryza commented on HADOOP-9178:


I uploaded a new patch that addresses the nits.  I also tested on a 
pseudo-distributed cluster from a tarball and was able to run jobs and use 
HDFS successfully.

 src/main/conf is missing hadoop-policy.xml
 --

 Key: HADOOP-9178
 URL: https://issues.apache.org/jira/browse/HADOOP-9178
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9178-1.patch, HADOOP-9178-1.patch, 
 HADOOP-9178-2.patch, HADOOP-9178.patch


 src/main/conf contains hadoop-env.sh and core-site.xml, but is missing 
 hadoop-policy.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9150) Unnecessary DNS resolution attempts for logical URIs

2013-01-14 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553256#comment-13553256
 ] 

Todd Lipcon commented on HADOOP-9150:
-

Hey Daryn. How's this patch look to you? Hoping to get this in for 2.0.3 since 
it can cause a big (and unfortunately silent) perf regression for HA on 
clusters with borked DNS.

 Unnecessary DNS resolution attempts for logical URIs
 

 Key: HADOOP-9150
 URL: https://issues.apache.org/jira/browse/HADOOP-9150
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3, ha, performance, viewfs
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Critical
 Attachments: hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
 log.txt, tracing-resolver.tgz


 In the FileSystem code, we accidentally try to DNS-resolve the logical name 
 before it is converted to an actual domain name. In some DNS setups, this can 
 cause a big slowdown - eg in one misconfigured cluster we saw a 2-3x drop in 
 terasort throughput, since every task wasted a lot of time waiting for slow 
 not found responses from DNS.
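
To see why a stray lookup of a logical name hurts, here is a minimal, 
self-contained sketch (plain Java, not the Hadoop code; "mycluster" is an 
illustrative logical nameservice ID) that times a negative DNS lookup:

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class LookupCost {
  public static void main(String[] args) {
    long start = System.nanoTime();
    try {
      // "mycluster" stands in for a logical HA name; it is not a real host.
      InetAddress.getByName("mycluster");
    } catch (UnknownHostException e) {
      // On a misconfigured resolver this negative answer can take seconds,
      // and the buggy path pays that cost over and over.
    }
    System.out.println("lookup took "
        + (System.nanoTime() - start) / 1000000 + " ms");
  }
}
{code}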

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9079) LocalDirAllocator throws ArithmeticException

2013-01-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-9079:
---

Attachment: hadoop-9079-v2.txt

Patch v2 revises the loop condition to use dirDF.length.
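
For illustration, a hypothetical sketch of that kind of guard (names are 
illustrative, not the actual LocalDirAllocator code):

{code}
import java.util.Random;

public class DirPicker {
  // Guard of the kind the patch suggests: never index or take a modulus
  // over an empty directory array, which is what produced "/ by zero".
  static int pickDir(String[] dirDF, Random r) {
    if (dirDF.length == 0) {
      throw new IllegalStateException("No writable local directories available");
    }
    return r.nextInt(dirDF.length); // bound/divisor is now guaranteed non-zero
  }
}
{code}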

 LocalDirAllocator throws ArithmeticException
 

 Key: HADOOP-9079
 URL: https://issues.apache.org/jira/browse/HADOOP-9079
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Attachments: hadoop-9079-v2.txt, trunk-9079.patch


 2012-11-19 22:07:41,709 WARN  [IPC Server handler 0 on 38671] 
 nodemanager.NMAuditLogger(150): USER=UnknownUser  IP=  
 OPERATION=Stop Container Request  TARGET=ContainerManagerImpl  
 RESULT=FAILURE  DESCRIPTION=Trying to stop unknown container!   
 APPID=application_1353391620476_0001
 CONTAINERID=container_1353391620476_0001_01_10
 java.lang.ArithmeticException: / by zero
   at 
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:368)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
   at 
 org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:263)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:849)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9178) src/main/conf is missing hadoop-policy.xml

2013-01-14 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553281#comment-13553281
 ] 

Eli Collins commented on HADOOP-9178:
-

Thanks Sandy. The security.mrhs.client.protocol.acl config should actually go 
down with the YARN configs; also, security.client.datanode.protocol.acl should 
be indented. +1 otherwise.

 src/main/conf is missing hadoop-policy.xml
 --

 Key: HADOOP-9178
 URL: https://issues.apache.org/jira/browse/HADOOP-9178
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9178-1.patch, HADOOP-9178-1.patch, 
 HADOOP-9178-2.patch, HADOOP-9178.patch


 src/main/conf contains hadoop-env.sh and core-site.xml, but is missing 
 hadoop-policy.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9203) RPCCallBenchmark should find a random available port

2013-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553290#comment-13553290
 ] 

Hudson commented on HADOOP-9203:


Integrated in Hadoop-trunk-Commit #3229 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3229/])
HADOOP-9203. RPCCallBenchmark should find a random available port. 
Contributed by Andrew Purtell. (Revision 1433220)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433220
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/RPCCallBenchmark.java
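
For context, the usual technique here is to bind to port 0 and let the OS 
assign a free ephemeral port; a generic sketch (not the actual NetUtils 
change) looks like:

{code}
import java.io.IOException;
import java.net.ServerSocket;

public class FreePort {
  // Binding to port 0 asks the OS for any free ephemeral port; we read the
  // assigned port back and release the socket for the real server to use.
  public static int getFreePort() throws IOException {
    ServerSocket s = new ServerSocket(0);
    try {
      return s.getLocalPort();
    } finally {
      s.close();
    }
  }
}
{code}

There is a small race (the port can be taken between close() and the 
benchmark's own bind), which is generally acceptable for tests.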


 RPCCallBenchmark should find a random available port
 

 Key: HADOOP-9203
 URL: https://issues.apache.org/jira/browse/HADOOP-9203
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc, test
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Andrew Purtell
Priority: Trivial
 Attachments: HADOOP-9203.patch, HADOOP-9203.patch


 RPCCallBenchmark insists on port 12345 by default. It should find a random 
 ephemeral range port instead if one isn't specified.
 {noformat}
 testBenchmarkWithProto(org.apache.hadoop.ipc.TestRPCCallBenchmark)  Time 
 elapsed: 5092 sec   ERROR!
 java.net.BindException: Problem binding to [0.0.0.0:12345] 
 java.net.BindException: Address already in use; For more details see:  
 http://wiki.apache.org/hadoop/BindException
   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
   at org.apache.hadoop.ipc.Server.bind(Server.java:361)
   at org.apache.hadoop.ipc.Server$Listener.init(Server.java:459)
   at org.apache.hadoop.ipc.Server.init(Server.java:1877)
   at org.apache.hadoop.ipc.RPC$Server.init(RPC.java:982)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server.init(ProtobufRpcEngine.java:376)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:351)
   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:825)
   at 
 org.apache.hadoop.ipc.RPCCallBenchmark.startServer(RPCCallBenchmark.java:230)
   at org.apache.hadoop.ipc.RPCCallBenchmark.run(RPCCallBenchmark.java:264)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at 
 org.apache.hadoop.ipc.TestRPCCallBenchmark.testBenchmarkWithProto(TestRPCCallBenchmark.java:43)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9203) RPCCallBenchmark should find a random available port

2013-01-14 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9203:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch to both trunk and branch-2. 

Thank you Andrew!

 RPCCallBenchmark should find a random available port
 

 Key: HADOOP-9203
 URL: https://issues.apache.org/jira/browse/HADOOP-9203
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc, test
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Andrew Purtell
Priority: Trivial
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9203.patch, HADOOP-9203.patch


 RPCCallBenchmark insists on port 12345 by default. It should find a random 
 ephemeral range port instead if one isn't specified.
 {noformat}
 testBenchmarkWithProto(org.apache.hadoop.ipc.TestRPCCallBenchmark)  Time 
 elapsed: 5092 sec   ERROR!
 java.net.BindException: Problem binding to [0.0.0.0:12345] 
 java.net.BindException: Address already in use; For more details see:  
 http://wiki.apache.org/hadoop/BindException
   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
   at org.apache.hadoop.ipc.Server.bind(Server.java:361)
   at org.apache.hadoop.ipc.Server$Listener.init(Server.java:459)
   at org.apache.hadoop.ipc.Server.init(Server.java:1877)
   at org.apache.hadoop.ipc.RPC$Server.init(RPC.java:982)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server.init(ProtobufRpcEngine.java:376)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:351)
   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:825)
   at 
 org.apache.hadoop.ipc.RPCCallBenchmark.startServer(RPCCallBenchmark.java:230)
   at org.apache.hadoop.ipc.RPCCallBenchmark.run(RPCCallBenchmark.java:264)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at 
 org.apache.hadoop.ipc.TestRPCCallBenchmark.testBenchmarkWithProto(TestRPCCallBenchmark.java:43)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9178) src/main/conf is missing hadoop-policy.xml

2013-01-14 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553292#comment-13553292
 ] 

Sandy Ryza commented on HADOOP-9178:


My understanding was that security.mrhs.client.protocol.acl was an MR, not 
YARN, config?

 src/main/conf is missing hadoop-policy.xml
 --

 Key: HADOOP-9178
 URL: https://issues.apache.org/jira/browse/HADOOP-9178
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9178-1.patch, HADOOP-9178-1.patch, 
 HADOOP-9178-2.patch, HADOOP-9178.patch


 src/main/conf contains hadoop-env.sh and core-site.xml, but is missing 
 hadoop-policy.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9079) LocalDirAllocator throws ArithmeticException

2013-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553296#comment-13553296
 ] 

Hadoop QA commented on HADOOP-9079:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564802/hadoop-9079-v2.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 2 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2042//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2042//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2042//console

This message is automatically generated.

 LocalDirAllocator throws ArithmeticException
 

 Key: HADOOP-9079
 URL: https://issues.apache.org/jira/browse/HADOOP-9079
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Attachments: hadoop-9079-v2.txt, trunk-9079.patch


 2012-11-19 22:07:41,709 WARN  [IPC Server handler 0 on 38671] 
 nodemanager.NMAuditLogger(150): USER=UnknownUser  IP=  
 OPERATION=Stop Container Request  TARGET=ContainerManagerImpl  
 RESULT=FAILURE  DESCRIPTION=Trying to stop unknown container!   
 APPID=application_1353391620476_0001
 CONTAINERID=container_1353391620476_0001_01_10
 java.lang.ArithmeticException: / by zero
   at 
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:368)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
   at 
 org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:263)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:849)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9207) version info source checksum does not include all source files

2013-01-14 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553298#comment-13553298
 ] 

Alejandro Abdelnur commented on HADOOP-9207:


A possible way (a bit peculiar) of doing this would be to have the version 
plugin defined only in hadoop-common and have as fileset something like:

{code}
 <fileset>
   <include>../../**/src/main/java/**/*.java</include>
   <include>../../**/src/main/proto/**/*.proto</include>
 </fileset>
{code}

And we should get rid of the YarnVersionInfo.java class and use the 
VersionInfo.java instead.


 version info source checksum does not include all source files
 --

 Key: HADOOP-9207
 URL: https://issues.apache.org/jira/browse/HADOOP-9207
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth

 The build process takes an MD5 checksum of the source files in Common and 
 YARN.  The HDFS version info command prints the checksum from Common.  The 
 YARN version info command prints the checksum from YARN.  This is incomplete 
 in that the HDFS source code is never included in the checksum, and 2 
 different YARN builds with the same YARN code but different Common code would 
 have the same checksum.
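
For intuition, a self-contained sketch (not the actual build plugin) of 
computing one MD5 over a set of source files; feeding the Common, HDFS, and 
YARN sources through a single digest like this is what would make the 
checksum cover all modules:

{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.MessageDigest;
import java.util.List;

public class SourceChecksum {
  // One digest over the bytes of all files; pass the files in a stable
  // (e.g. sorted) order so the resulting checksum is reproducible.
  public static String md5Of(List<File> files) throws Exception {
    MessageDigest md = MessageDigest.getInstance("MD5");
    byte[] buf = new byte[8192];
    for (File f : files) {
      InputStream in = new FileInputStream(f);
      try {
        int n;
        while ((n = in.read(buf)) != -1) {
          md.update(buf, 0, n);
        }
      } finally {
        in.close();
      }
    }
    StringBuilder hex = new StringBuilder();
    for (byte b : md.digest()) {
      hex.append(String.format("%02x", b));
    }
    return hex.toString();
  }
}
{code}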

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HADOOP-9207) version info source checksum does not include all source files

2013-01-14 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553298#comment-13553298
 ] 

Alejandro Abdelnur edited comment on HADOOP-9207 at 1/15/13 12:05 AM:
--

A possible way (a bit peculiar) of doing this would be to have the version 
plugin defined only in hadoop-common and have as fileset something like:

{code}
 <includes>
   <include>../../**/src/main/java/**/*.java</include>
   <include>../../**/src/main/proto/**/*.proto</include>
 </includes>
{code}

And we should get rid of the YarnVersionInfo.java class and use the 
VersionInfo.java instead.


  was (Author: tucu00):
A possible way (a bit peculiar) of doing this would be to have the version 
plugin defined only in hadoop-common and have as fileset something like:

{code}
 <fileset>
   <include>../../**/src/main/java/**/*.java</include>
   <include>../../**/src/main/proto/**/*.proto</include>
 </fileset>
{code}

And we should get rid of the YarnVersionInfo.java class and use the 
VersionInfo.java instead.

  
 version info source checksum does not include all source files
 --

 Key: HADOOP-9207
 URL: https://issues.apache.org/jira/browse/HADOOP-9207
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth

 The build process takes an MD5 checksum of the source files in Common and 
 YARN.  The HDFS version info command prints the checksum from Common.  The 
 YARN version info command prints the checksum from YARN.  This is incomplete 
 in that the HDFS source code is never included in the checksum, and 2 
 different YARN builds with the same YARN code but different Common code would 
 have the same checksum.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9208) Fix release audit warnings

2013-01-14 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-9208:
--

 Summary: Fix release audit warnings
 Key: HADOOP-9208
 URL: https://issues.apache.org/jira/browse/HADOOP-9208
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu


The following files should be excluded from rat check:

./hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.odg
./hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.odg
./hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/FI-framework.odg
./hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsarchitecture.odg
./hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsdatanodes.odg

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HADOOP-9207) version info source checksum does not include all source files

2013-01-14 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553298#comment-13553298
 ] 

Alejandro Abdelnur edited comment on HADOOP-9207 at 1/15/13 12:06 AM:
--

A possible way (a bit peculiar) of doing this would be to have the version 
plugin defined only in hadoop-common and have as fileset something like:

{code}
<configuration>
  <source>
    <directory>${basedir}/../../</directory>
    <includes>
      <include>**/src/main/java/**/*.java</include>
      <include>**/src/main/proto/**/*.proto</include>
    </includes>
  </source>
</configuration>
{code}

And we should get rid of the YarnVersionInfo.java class and use the 
VersionInfo.java instead.

  was (Author: tucu00):
A possible way (a bit peculiar) of doing this would be to have the version 
plugin defined only in hadoop-common and have as fileset something like:

{code}
 <includes>
   <include>../../**/src/main/java/**/*.java</include>
   <include>../../**/src/main/proto/**/*.proto</include>
 </includes>
{code}

And we should get rid of the YarnVersionInfo.java class and use the 
VersionInfo.java instead.

  
 version info source checksum does not include all source files
 --

 Key: HADOOP-9207
 URL: https://issues.apache.org/jira/browse/HADOOP-9207
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth

 The build process takes an MD5 checksum of the source files in Common and 
 YARN.  The HDFS version info command prints the checksum from Common.  The 
 YARN version info command prints the checksum from YARN.  This is incomplete 
 in that the HDFS source code is never included in the checksum, and 2 
 different YARN builds with the same YARN code but different Common code would 
 have the same checksum.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-14 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553303#comment-13553303
 ] 

Alejandro Abdelnur commented on HADOOP-8924:


Chris, latest patch LGTM, but again, somebody else should review it as I've 
written part of it. Thx. Also, I've just commented in HADOOP-9207 how we could 
do the checksum for ALL source.

 Hadoop Common creating package-info.java must not depend on sh, at least for 
 Windows
 

 Key: HADOOP-8924
 URL: https://issues.apache.org/jira/browse/HADOOP-8924
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Alejandro Abdelnur
 Fix For: trunk-win

 Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
 HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
 HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
 HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
 HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch


 Currently, the build process relies on saveVersion.sh to generate 
 package-info.java with a version annotation.  The sh binary may not be 
 available on all developers' machines (e.g. Windows without Cygwin). This 
 issue tracks removal of that dependency in Hadoop Common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-14 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553304#comment-13553304
 ] 

Alejandro Abdelnur commented on HADOOP-8924:


Finally, I think we should have an additional goal in print-version-info that 
prints the computed MD5; this would help somebody to easily obtain the checksum 
of a source tree and verify a build.

 Hadoop Common creating package-info.java must not depend on sh, at least for 
 Windows
 

 Key: HADOOP-8924
 URL: https://issues.apache.org/jira/browse/HADOOP-8924
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Alejandro Abdelnur
 Fix For: trunk-win

 Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
 HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
 HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
 HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
 HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch


 Currently, the build process relies on saveVersion.sh to generate 
 package-info.java with a version annotation.  The sh binary may not be 
 available on all developers' machines (e.g. Windows without Cygwin). This 
 issue tracks removal of that dependency in Hadoop Common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9178) src/main/conf is missing hadoop-policy.xml

2013-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553365#comment-13553365
 ] 

Hadoop QA commented on HADOOP-9178:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564800/HADOOP-9178-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 2 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2041//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2041//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2041//console

This message is automatically generated.

 src/main/conf is missing hadoop-policy.xml
 --

 Key: HADOOP-9178
 URL: https://issues.apache.org/jira/browse/HADOOP-9178
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9178-1.patch, HADOOP-9178-1.patch, 
 HADOOP-9178-2.patch, HADOOP-9178.patch


 src/main/conf contains hadoop-env.sh and core-site.xml, but is missing 
 hadoop-policy.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9070) Kerberos SASL server cannot find kerberos key

2013-01-14 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553394#comment-13553394
 ] 

Todd Lipcon commented on HADOOP-9070:
-

I missed this when it went in, since the original description didn't mention 
that this would change the wire format. Per my comments elsewhere, I don't 
think we can afford to break wire compatibility in 2.0.3. I'd like to revert 
this from branch-2, but also don't want to regress the bug. Daryn, did you have 
an idea on how to do this compatibly?

 Kerberos SASL server cannot find kerberos key
 -

 Key: HADOOP-9070
 URL: https://issues.apache.org/jira/browse/HADOOP-9070
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9070.patch, HADOOP-9070.patch, HADOOP-9070.patch


 HADOOP-9015 inadvertently removed a {{doAs}} block around instantiation of 
 the sasl server which renders a server incapable of accepting kerberized 
 connections.
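
A hedged sketch of what such a doAs wrapper looks like (names and structure 
are illustrative, not the actual ipc.Server code):

{code}
import java.security.PrivilegedExceptionAction;
import java.util.Map;
import javax.security.auth.callback.CallbackHandler;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslServer;
import org.apache.hadoop.security.UserGroupInformation;

public class SaslServerFactory {
  // Sasl.createSaslServer("GSSAPI", ...) must run inside the JAAS Subject
  // that holds the server's Kerberos credentials; without the doAs, JGSS
  // cannot find the kerberos key and kerberized connections fail.
  static SaslServer create(final String protocol, final String serverId,
      final Map<String, ?> props, final CallbackHandler cb) throws Exception {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    return ugi.doAs(new PrivilegedExceptionAction<SaslServer>() {
      @Override
      public SaslServer run() throws Exception {
        return Sasl.createSaslServer("GSSAPI", protocol, serverId, props, cb);
      }
    });
  }
}
{code}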

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9209) Add shell command to dump file checksums

2013-01-14 Thread Todd Lipcon (JIRA)
Todd Lipcon created HADOOP-9209:
---

 Summary: Add shell command to dump file checksums
 Key: HADOOP-9209
 URL: https://issues.apache.org/jira/browse/HADOOP-9209
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, tools
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon


Occasionally while working with tools like distcp, or debugging certain issues, 
it's useful to be able to quickly see the checksum of a file. We currently have 
the APIs to efficiently calculate a checksum, but we don't expose it to users. 
This JIRA is to add a fs -checksum command which dumps the checksum 
information for the specified file(s).
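
The existing API such a command would wrap is FileSystem.getFileChecksum; a 
minimal sketch of a client using it (this is illustrative, not the proposed 
patch itself):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChecksumDump {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    for (String arg : args) {
      // May return null for filesystems that do not support checksums.
      FileChecksum sum = fs.getFileChecksum(new Path(arg));
      System.out.println(arg + "\t" + (sum == null ? "NONE" : sum));
    }
  }
}
{code}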

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9209) Add shell command to dump file checksums

2013-01-14 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9209:


Issue Type: New Feature  (was: Bug)

 Add shell command to dump file checksums
 

 Key: HADOOP-9209
 URL: https://issues.apache.org/jira/browse/HADOOP-9209
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs, tools
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon

 Occasionally while working with tools like distcp, or debugging certain 
 issues, it's useful to be able to quickly see the checksum of a file. We 
 currently have the APIs to efficiently calculate a checksum, but we don't 
 expose it to users. This JIRA is to add a fs -checksum command which dumps 
 the checksum information for the specified file(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9070) Kerberos SASL server cannot find kerberos key

2013-01-14 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9070:


Hadoop Flags: Incompatible change, Reviewed  (was: Reviewed)

 Kerberos SASL server cannot find kerberos key
 -

 Key: HADOOP-9070
 URL: https://issues.apache.org/jira/browse/HADOOP-9070
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9070.patch, HADOOP-9070.patch, HADOOP-9070.patch


 HADOOP-9015 inadvertently removed a {{doAs}} block around instantiation of 
 the sasl server which renders a server incapable of accepting kerberized 
 connections.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9209) Add shell command to dump file checksums

2013-01-14 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9209:


Attachment: hadoop-9209.txt

Attached patch implements the new shell command.

In addition to the unit test, I tested manually:

{code}
$ ./bin/hadoop fs -checksum '/*'
/file1             MD5-of-0MD5-of-512CRC32C  0200b234aa05a75fed38536bda657b20bfcf
/file1-crc32       MD5-of-0MD5-of-512CRC32   0200593b23e67a7477aab90e42e41478b321
/file1-crc32-copy  MD5-of-0MD5-of-512CRC32   0200593b23e67a7477aab90e42e41478b321

$ ./bin/hadoop fs -help checksum
-checksum <src> ...:  Dump checksum information for files that match the file
                      pattern <src> to stdout. Note that this requires a round-trip
                      to the datanode storing each block of the file, and thus is not
                      efficient to run on a large number of files.
{code}

 Add shell command to dump file checksums
 

 Key: HADOOP-9209
 URL: https://issues.apache.org/jira/browse/HADOOP-9209
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs, tools
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-9209.txt


 Occasionally while working with tools like distcp, or debugging certain 
 issues, it's useful to be able to quickly see the checksum of a file. We 
 currently have the APIs to efficiently calculate a checksum, but we don't 
 expose it to users. This JIRA is to add a fs -checksum command which dumps 
 the checksum information for the specified file(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9209) Add shell command to dump file checksums

2013-01-14 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9209:


Status: Patch Available  (was: Open)

 Add shell command to dump file checksums
 

 Key: HADOOP-9209
 URL: https://issues.apache.org/jira/browse/HADOOP-9209
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs, tools
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-9209.txt


 Occasionally while working with tools like distcp, or debugging certain 
 issues, it's useful to be able to quickly see the checksum of a file. We 
 currently have the APIs to efficiently calculate a checksum, but we don't 
 expose it to users. This JIRA is to add a fs -checksum command which dumps 
 the checksum information for the specified file(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9210) bad mirror in download list

2013-01-14 Thread Andy Isaacson (JIRA)
Andy Isaacson created HADOOP-9210:
-

 Summary: bad mirror in download list
 Key: HADOOP-9210
 URL: https://issues.apache.org/jira/browse/HADOOP-9210
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Andy Isaacson
Priority: Minor


The http://hadoop.apache.org/releases.html page links to 
http://www.apache.org/dyn/closer.cgi/hadoop/common/ which provides a list of 
mirrors.  The first one on the list (for me) is 
http://www.alliedquotes.com/mirrors/apache/hadoop/common/ which is 404.

I checked the rest of the mirrors in the list and only alliedquotes is 404.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9178) src/main/conf is missing hadoop-policy.xml

2013-01-14 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553451#comment-13553451
 ] 

Eli Collins commented on HADOOP-9178:
-

Sandy, you're right, my bad. +1  I'll this but with that config back where you 
had it

 src/main/conf is missing hadoop-policy.xml
 --

 Key: HADOOP-9178
 URL: https://issues.apache.org/jira/browse/HADOOP-9178
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9178-1.patch, HADOOP-9178-1.patch, 
 HADOOP-9178-2.patch, HADOOP-9178.patch


 src/main/conf contains hadoop-env.sh and core-site.xml, but is missing 
 hadoop-policy.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HADOOP-9178) src/main/conf is missing hadoop-policy.xml

2013-01-14 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553451#comment-13553451
 ] 

Eli Collins edited comment on HADOOP-9178 at 1/15/13 3:15 AM:
--

Sandy, you're right, my bad. +1  I'll commit this but with that config back 
where you had it.

  was (Author: eli):
Sandy, you're right, my bad. +1  I'll this but with that config back where 
you had it
  
 src/main/conf is missing hadoop-policy.xml
 --

 Key: HADOOP-9178
 URL: https://issues.apache.org/jira/browse/HADOOP-9178
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9178-1.patch, HADOOP-9178-1.patch, 
 HADOOP-9178-2.patch, HADOOP-9178.patch


 src/main/conf contains hadoop-env.sh and core-site.xml, but is missing 
 hadoop-policy.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9178) src/main/conf is missing hadoop-policy.xml

2013-01-14 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-9178:


  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've committed this and merge to branch-2. Thanks Sandy!

 src/main/conf is missing hadoop-policy.xml
 --

 Key: HADOOP-9178
 URL: https://issues.apache.org/jira/browse/HADOOP-9178
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9178-1.patch, HADOOP-9178-1.patch, 
 HADOOP-9178-2.patch, HADOOP-9178.patch


 src/main/conf contains hadoop-env.sh and core-site.xml, but is missing 
 hadoop-policy.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9178) src/main/conf is missing hadoop-policy.xml

2013-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553461#comment-13553461
 ] 

Hudson commented on HADOOP-9178:


Integrated in Hadoop-trunk-Commit #3233 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3233/])
HADOOP-9178. src/main/conf is missing hadoop-policy.xml. Contributed by 
Sandy Ryza (Revision 1433275)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433275
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HDFSPolicyProvider.java


 src/main/conf is missing hadoop-policy.xml
 --

 Key: HADOOP-9178
 URL: https://issues.apache.org/jira/browse/HADOOP-9178
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9178-1.patch, HADOOP-9178-1.patch, 
 HADOOP-9178-2.patch, HADOOP-9178.patch


 src/main/conf contains hadoop-env.sh and core-site.xml, but is missing 
 hadoop-policy.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9211) HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards HADOOP_HEAPSIZE

2013-01-14 Thread Sarah Weissman (JIRA)
Sarah Weissman created HADOOP-9211:
--

 Summary: HADOOP_CLIENT_OPTS default setting fixes max heap size at 
128m, disregards HADOOP_HEAPSIZE
 Key: HADOOP-9211
 URL: https://issues.apache.org/jira/browse/HADOOP-9211
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.2-alpha
Reporter: Sarah Weissman


hadoop-env.sh as included in the 2.0.2-alpha release tarball contains:
export HADOOP_CLIENT_OPTS="-Xmx128m $HADOOP_CLIENT_OPTS"

This overrides any heap settings in HADOOP_HEAPSIZE.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9209) Add shell command to dump file checksums

2013-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553470#comment-13553470
 ] 

Hadoop QA commented on HADOOP-9209:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564852/hadoop-9209.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 2 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.cli.TestCLI

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2043//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2043//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2043//console

This message is automatically generated.

 Add shell command to dump file checksums
 

 Key: HADOOP-9209
 URL: https://issues.apache.org/jira/browse/HADOOP-9209
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs, tools
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-9209.txt


 Occasionally while working with tools like distcp, or debugging certain 
 issues, it's useful to be able to quickly see the checksum of a file. We 
 currently have the APIs to efficiently calculate a checksum, but we don't 
 expose it to users. This JIRA is to add a fs -checksum command which dumps 
 the checksum information for the specified file(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9210) bad mirror in download list

2013-01-14 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553480#comment-13553480
 ] 

Harsh J commented on HADOOP-9210:
-

This is more of an INFRA ticket; even 
http://www.alliedquotes.com/mirrors/apache/ does not list anything, so that 
mirror is currently dead. We do not control the mirrors list ourselves 
downstream (i.e. here at Apache Hadoop).

 bad mirror in download list
 ---

 Key: HADOOP-9210
 URL: https://issues.apache.org/jira/browse/HADOOP-9210
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Andy Isaacson
Priority: Minor

 The http://hadoop.apache.org/releases.html page links to 
 http://www.apache.org/dyn/closer.cgi/hadoop/common/ which provides a list of 
 mirrors.  The first one on the list (for me) is 
 http://www.alliedquotes.com/mirrors/apache/hadoop/common/ which is 404.
 I checked the rest of the mirrors in the list and only alliedquotes is 404.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9211) HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards HADOOP_HEAPSIZE

2013-01-14 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553496#comment-13553496
 ] 

Harsh J commented on HADOOP-9211:
-

Going forward, and unlike in the past, HADOOP_HEAPSIZE is only for services. 
HADOOP_CLIENT_OPTS therefore applies to clients alone and is the configuration 
point for the client-side heap.

If an easier, numeric setting is needed for clients, we can perhaps add 
HADOOP_CLIENT_HEAPSIZE, but HADOOP_HEAPSIZE should not generally apply to 
clients given the newer HADOOP_CLIENT_OPTS.

 HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards 
 HADOOP_HEAPSIZE
 --

 Key: HADOOP-9211
 URL: https://issues.apache.org/jira/browse/HADOOP-9211
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.2-alpha
Reporter: Sarah Weissman
   Original Estimate: 1m
  Remaining Estimate: 1m

 hadoop-env.sh as included in the 2.0.2-alpha release tarball contains:
 export HADOOP_CLIENT_OPTS="-Xmx128m $HADOOP_CLIENT_OPTS"
 This overrides any heap settings in HADOOP_HEAPSIZE.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9209) Add shell command to dump file checksums

2013-01-14 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9209:


Attachment: hadoop-9209.txt

Oops, had a bad comparator in the TestCLI config. New patch just fixes the test.

 Add shell command to dump file checksums
 

 Key: HADOOP-9209
 URL: https://issues.apache.org/jira/browse/HADOOP-9209
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs, tools
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-9209.txt, hadoop-9209.txt


 Occasionally while working with tools like distcp, or debugging certain 
 issues, it's useful to be able to quickly see the checksum of a file. We 
 currently have the APIs to efficiently calculate a checksum, but we don't 
 expose it to users. This JIRA is to add a fs -checksum command which dumps 
 the checksum information for the specified file(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9209) Add shell command to dump file checksums

2013-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553569#comment-13553569
 ] 

Hadoop QA commented on HADOOP-9209:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564876/hadoop-9209.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 2 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2044//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2044//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2044//console

This message is automatically generated.

 Add shell command to dump file checksums
 

 Key: HADOOP-9209
 URL: https://issues.apache.org/jira/browse/HADOOP-9209
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs, tools
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-9209.txt, hadoop-9209.txt


 Occasionally while working with tools like distcp, or debugging certain 
 issues, it's useful to be able to quickly see the checksum of a file. We 
 currently have the APIs to efficiently calculate a checksum, but we don't 
 expose it to users. This JIRA is to add a fs -checksum command which dumps 
 the checksum information for the specified file(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira