Build failed in Jenkins: Hadoop-Common-trunk #918

2013-10-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/918/changes

Changes:

[llu] Move HDFS-5276 to 2.3.0 in CHANGES.txt

[llu] HDFS-5276. Remove volatile from LightWeightHashSet. (Junping Du via llu)

[llu] YARN-7. Support CPU resource for DistributedShell. (Junping Du via llu)

[jing9] HADOOP-10039. Add Hive to the list of projects using AbstractDelegationTokenSecretManager. Contributed by Haohui Mai.

[suresh] HDFS-5335. Hive query failed with possible race in dfs output stream. Contributed by Haohui Mai.

[sandy] YARN-1265. Fair Scheduler chokes on unhealthy node reconnect (Sandy Ryza)

[suresh] HADOOP-10029. Specifying har file to MR job fails in secure cluster. Contributed by Suresh Srinivas.

[devaraj] YARN-879. Fixed tests w.r.t o.a.h.y.server.resourcemanager.Application. Contributed by Junping Du.

--
[...truncated 57619 lines...]
Adding reference: maven.local.repository
[DEBUG] Initialize Maven Ant Tasks
parsing buildfile jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.6/maven-antrun-plugin-1.6.jar!/org/apache/maven/ant/tasks/antlib.xml with URI = jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.6/maven-antrun-plugin-1.6.jar!/org/apache/maven/ant/tasks/antlib.xml from a zip file
parsing buildfile jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.1/ant-1.8.1.jar!/org/apache/tools/ant/antlib.xml with URI = jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.1/ant-1.8.1.jar!/org/apache/tools/ant/antlib.xml from a zip file
Class org.apache.maven.ant.tasks.AttachArtifactTask loaded from parent loader (parentFirst)
 +Datatype attachartifact org.apache.maven.ant.tasks.AttachArtifactTask
Class org.apache.maven.ant.tasks.DependencyFilesetsTask loaded from parent loader (parentFirst)
 +Datatype dependencyfilesets org.apache.maven.ant.tasks.DependencyFilesetsTask
Setting project property: test.build.dir - https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/test-dir
Setting project property: test.exclude.pattern - _
Setting project property: hadoop.assemblies.version - 3.0.0-SNAPSHOT
Setting project property: test.exclude - _
Setting project property: distMgmtSnapshotsId - apache.snapshots.https
Setting project property: project.build.sourceEncoding - UTF-8
Setting project property: java.security.egd - file:///dev/urandom
Setting project property: distMgmtSnapshotsUrl - https://repository.apache.org/content/repositories/snapshots
Setting project property: distMgmtStagingUrl - https://repository.apache.org/service/local/staging/deploy/maven2
Setting project property: avro.version - 1.7.4
Setting project property: test.build.data - https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/test-dir
Setting project property: commons-daemon.version - 1.0.13
Setting project property: hadoop.common.build.dir - https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/../../hadoop-common-project/hadoop-common/target
Setting project property: testsThreadCount - 4
Setting project property: maven.test.redirectTestOutputToFile - true
Setting project property: jdiff.version - 1.0.9
Setting project property: distMgmtStagingName - Apache Release Distribution Repository
Setting project property: project.reporting.outputEncoding - UTF-8
Setting project property: build.platform - Linux-i386-32
Setting project property: protobuf.version - 2.5.0
Setting project property: failIfNoTests - false
Setting project property: protoc.path - ${env.HADOOP_PROTOC_PATH}
Setting project property: jersey.version - 1.9
Setting project property: distMgmtStagingId - apache.staging.https
Setting project property: distMgmtSnapshotsName - Apache Development Snapshot Repository
Setting project property: ant.file - https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/pom.xml
[DEBUG] Setting properties with prefix: 
Setting project property: project.groupId - org.apache.hadoop
Setting project property: project.artifactId - hadoop-common-project
Setting project property: project.name - Apache Hadoop Common Project
Setting project property: project.description - Apache Hadoop Common Project
Setting project property: project.version - 3.0.0-SNAPSHOT
Setting project property: project.packaging - pom
Setting project property: project.build.directory - https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target
Setting project property: project.build.outputDirectory - https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/classes
Setting project property: project.build.testOutputDirectory - https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/test-classes
Setting project property: project.build.sourceDirectory - 

[jira] [Created] (HADOOP-10042) Heap space error during copy from maptask to reduce task

2013-10-11 Thread Dieter De Witte (JIRA)
Dieter De Witte created HADOOP-10042:


 Summary: Heap space error during copy from maptask to reduce task
 Key: HADOOP-10042
 URL: https://issues.apache.org/jira/browse/HADOOP-10042
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 1.2.1
 Environment: Ubuntu cluster
Reporter: Dieter De Witte
 Fix For: 1.2.1


http://stackoverflow.com/questions/19298357/out-of-memory-error-in-mapreduce-shuffle-phase

I've described the problem on Stack Overflow as well. It contains a link to 
another JIRA: 
http://hadoop-common.472056.n3.nabble.com/Shuffle-In-Memory-OutOfMemoryError-td433197.html

My errors are exactly the same: an out-of-memory error when 
mapred.job.shuffle.input.buffer.percent = 0.7. The program does work when I set 
it to 0.2. Does this mean the original JIRA was not resolved?

Does anybody have an idea whether this is a MapReduce issue or a 
misconfiguration on my part?
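
For reference, a minimal mapred-site.xml fragment with the value that worked 
here (the property name is the Hadoop 1.x one from the report; 0.70 is the 
default that triggered the error):

{code:xml}
<!-- Fraction of the reduce task's heap used to buffer map outputs in memory
     during shuffle. Lowering it from the 0.70 default to 0.2 avoided the
     OutOfMemoryError in this report. -->
<property>
  <name>mapred.job.shuffle.input.buffer.percent</name>
  <value>0.2</value>
</property>
{code}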



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10043) Convert org.apache.hadoop.security.token.SecretManager to be an AbstractService

2013-10-11 Thread Tsuyoshi OZAWA (JIRA)
Tsuyoshi OZAWA created HADOOP-10043:
---

 Summary: Convert org.apache.hadoop.security.token.SecretManager to 
be an AbstractService
 Key: HADOOP-10043
 URL: https://issues.apache.org/jira/browse/HADOOP-10043
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA


I'm working on YARN-1172, a subtask of YARN-1139 (a ResourceManager HA related 
task). The following is quoted from my comment on YARN-1172:
{quote}
I've found that it requires org.apache.hadoop.security.token.SecretManager to 
be an AbstractService, because both AbstractService and 
org.apache.hadoop.security.token.SecretManager are abstract classes and we 
cannot extend both of them at the same time.
{quote}
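
For context, a minimal sketch of the constraint, using stand-in classes rather 
than the real Hadoop ones:

{code:java}
// Stand-ins for the two real abstract classes (hypothetical, for illustration):
abstract class AbstractService {
    abstract void serviceStart();
}
abstract class SecretManager {
    abstract byte[] createPassword(byte[] identifier);
}

// Java allows only single class inheritance, so this is illegal:
//   class TokenSecretManager extends AbstractService, SecretManager { ... }

// Hence the proposal: make SecretManager itself extend AbstractService, so a
// subclass gets the service lifecycle through a single inheritance chain.
abstract class ServiceSecretManager extends AbstractService {
    abstract byte[] createPassword(byte[] identifier);
}
{code}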



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HADOOP-10042) Heap space error during copy from maptask to reduce task

2013-10-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-10042.
--

Resolution: Invalid

 Heap space error during copy from maptask to reduce task
 

 Key: HADOOP-10042
 URL: https://issues.apache.org/jira/browse/HADOOP-10042
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 1.2.1
 Environment: Ubuntu cluster
Reporter: Dieter De Witte
 Fix For: 1.2.1

 Attachments: mapred-site.OLDxml


 http://stackoverflow.com/questions/19298357/out-of-memory-error-in-mapreduce-shuffle-phase
 I've described the problem on Stack Overflow as well. It contains a link to 
 another JIRA: 
 http://hadoop-common.472056.n3.nabble.com/Shuffle-In-Memory-OutOfMemoryError-td433197.html
 My errors are exactly the same: an out-of-memory error when 
 mapred.job.shuffle.input.buffer.percent = 0.7. The program does work when I 
 set it to 0.2. Does this mean the original JIRA was not resolved?
 Does anybody have an idea whether this is a MapReduce issue or a 
 misconfiguration on my part?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10044) Improve the javadoc of rpc code

2013-10-11 Thread Sanjay Radia (JIRA)
Sanjay Radia created HADOOP-10044:
-

 Summary: Improve the javadoc of rpc code
 Key: HADOOP-10044
 URL: https://issues.apache.org/jira/browse/HADOOP-10044
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sanjay Radia
Assignee: Sanjay Radia
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HADOOP-10040) hadoop.cmd in UNIX format and would not run by default on Windows

2013-10-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-10040.


      Resolution: Fixed
   Fix Version/s: 2.2.1, 3.0.0
Target Version/s: 3.0.0, 2.2.1

I have applied the line ending changes in trunk, branch-2, and branch-2.2.  
[~yingdachen], thank you for the bug report.

 hadoop.cmd in UNIX format and would not run by default on Windows
 -

 Key: HADOOP-10040
 URL: https://issues.apache.org/jira/browse/HADOOP-10040
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yingda Chen
Assignee: Chris Nauroth
 Fix For: 3.0.0, 2.2.1


 The hadoop.cmd currently checked into hadoop-common is in UNIX format, like 
 most other source files. However, hadoop.cmd is meant to be used on Windows 
 only, and the fact that it is in UNIX format makes it unrunnable as is on the 
 Windows platform.
 An exception should be made for hadoop.cmd (and other .cmd files, for that 
 matter) to make sure they are in DOS format, so that they are runnable as is 
 when checked out from the source repository.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Reopened] (HADOOP-10040) hadoop.cmd in UNIX format and would not run by default on Windows

2013-10-11 Thread Luke Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Lu reopened HADOOP-10040:
--


Whoa, this completely messes up git.

Short answer: you should svn propset Windows files with eol-style *native*.

Long answer: for .gitattributes to work correctly with eol attributes, all 
text files with eol attributes are stored with LF in the repository and 
converted to the value of eol upon checkout. This is not compatible with svn 
eol-style CRLF, which changes the content in the repository as well. With svn 
eol-style native, an svn checkout will convert the normalized text files 
(stored with LF) to CRLF.

I committed a workaround (to trunk and branch-2, so people can work with git) 
that marks the Windows files as binary in .gitattributes, so git won't touch them.
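
For reference, a sketch of the two settings described above (the exact 
patterns in the committed files may differ):

{code}
# .gitattributes -- mark Windows scripts as binary so git leaves their
# line endings alone:
*.cmd binary
*.bat binary

# svn -- store the file with native eol-style instead of CRLF:
svn propset svn:eol-style native hadoop.cmd
{code}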

 hadoop.cmd in UNIX format and would not run by default on Windows
 -

 Key: HADOOP-10040
 URL: https://issues.apache.org/jira/browse/HADOOP-10040
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yingda Chen
Assignee: Chris Nauroth
 Fix For: 3.0.0, 2.2.1


 The hadoop.cmd currently checked into hadoop-common is in UNIX format, like 
 most other source files. However, hadoop.cmd is meant to be used on Windows 
 only, and the fact that it is in UNIX format makes it unrunnable as is on the 
 Windows platform.
 An exception should be made for hadoop.cmd (and other .cmd files, for that 
 matter) to make sure they are in DOS format, so that they are runnable as is 
 when checked out from the source repository.



--
This message was sent by Atlassian JIRA
(v6.1#6144)