See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2967/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 31818 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.277 sec - in org.apache.hadoop.mapreduce.TestNewCombinerGrouping
Running org.apache.hadoop.mapreduce.TestMRJobClient
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 185.394 sec - in org.apache.hadoop.mapreduce.TestMRJobClient
Running org.apache.hadoop.mapreduce.TestMapCollection
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.789 sec - in org.apache.hadoop.mapreduce.TestMapCollection
Running org.apache.hadoop.conf.TestNoDefaultsJobConf
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.023 sec - in org.apache.hadoop.conf.TestNoDefaultsJobConf
Running org.apache.hadoop.util.TestMRCJCReflectionUtils
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.681 sec - in org.apache.hadoop.util.TestMRCJCReflectionUtils
Running org.apache.hadoop.util.TestMRCJCRunJar
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.253 sec - in org.apache.hadoop.util.TestMRCJCRunJar

Results :

Failed tests: 
  TestNetworkedJob.testNetworkedJob:174 expected:<[[Wed Feb 17 04:23:03 +0000 2016] Application is Activated, waiting for resources to be assigned for AM.  Details : AM Partition = <DEFAULT_PARTITION> ; Partition Resource = <memory:8192, vCores:16> ; Queue's Absolute capacity = 100.0 % ; Queue's Absolute used capacity = 0.0 % ; Queue's Absolute max capacity = 100.0 % ; ]> but was:<[]>

Tests in error: 
  TestMRCredentials.setUp:62 » NoClassDefFound org/apache/hadoop/util/Daemon$Dae...

Tests run: 525, Failures: 1, Errors: 1, Skipped: 11

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop MapReduce Client .................... SUCCESS [  3.011 s]
[INFO] Apache Hadoop MapReduce Core ...................... SUCCESS [01:47 min]
[INFO] Apache Hadoop MapReduce Common .................... SUCCESS [ 27.633 s]
[INFO] Apache Hadoop MapReduce Shuffle ................... SUCCESS [  6.033 s]
[INFO] Apache Hadoop MapReduce App ....................... SUCCESS [09:15 min]
[INFO] Apache Hadoop MapReduce HistoryServer ............. SUCCESS [05:37 min]
[INFO] Apache Hadoop MapReduce JobClient ................. FAILURE [  02:11 h]
[INFO] Apache Hadoop MapReduce HistoryServer Plugins ..... SKIPPED
[INFO] Apache Hadoop MapReduce NativeTask ................ SKIPPED
[INFO] Apache Hadoop MapReduce Examples .................. SKIPPED
[INFO] Apache Hadoop MapReduce ........................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:28 h
[INFO] Finished at: 2016-02-17T06:26:18+00:00
[INFO] Final Memory: 37M/600M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-mapreduce-client-jobclient: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-mapreduce-client-jobclient
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.mapreduce.security.TestMRCredentials.org.apache.hadoop.mapreduce.security.TestMRCredentials

Error Message:
org/apache/hadoop/util/Daemon$DaemonFactory

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/util/Daemon$DaemonFactory
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker.initializeStripedReadThreadPool(ErasureCodingWorker.java:129)
        at org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker.<init>(ErasureCodingWorker.java:110)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1278)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:479)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2551)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2439)
        at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
        at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:844)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
        at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
        at org.apache.hadoop.mapreduce.security.TestMRCredentials.setUp(TestMRCredentials.java:62)
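
The NoClassDefFoundError above means ErasureCodingWorker was compiled against org.apache.hadoop.util.Daemon$DaemonFactory but the class could not be resolved at runtime, which usually points at a stale or mismatched hadoop-common jar on the test classpath. A minimal diagnostic sketch (not part of this build; only the class name is taken from the error message) to check whether the class is visible and which jar it would load from:

    // Standalone check: is the class named in the NoClassDefFoundError resolvable,
    // and from which location?
    public class ClasspathCheck {
        public static void main(String[] args) {
            String name = "org.apache.hadoop.util.Daemon$DaemonFactory";
            try {
                Class<?> c = Class.forName(name);
                // getCodeSource() reports the jar the class was loaded from
                // (it can be null for bootstrap classes).
                System.out.println("Found " + name + " in "
                        + c.getProtectionDomain().getCodeSource());
            } catch (ClassNotFoundException e) {
                System.out.println("Not on classpath: " + name);
            }
        }
    }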


FAILED:  org.apache.hadoop.mapred.TestNetworkedJob.testNetworkedJob

Error Message:
expected:<[[Wed Feb 17 04:23:03 +0000 2016] Application is Activated, waiting for resources to be assigned for AM.  Details : AM Partition = <DEFAULT_PARTITION> ; Partition Resource = <memory:8192, vCores:16> ; Queue's Absolute capacity = 100.0 % ; Queue's Absolute used capacity = 0.0 % ; Queue's Absolute max capacity = 100.0 % ; ]> but was:<[]>

Stack Trace:
org.junit.ComparisonFailure: expected:<[[Wed Feb 17 04:23:03 +0000 2016] Application is Activated, waiting for resources to be assigned for AM.  Details : AM Partition = <DEFAULT_PARTITION> ; Partition Resource = <memory:8192, vCores:16> ; Queue's Absolute capacity = 100.0 % ; Queue's Absolute used capacity = 0.0 % ; Queue's Absolute max capacity = 100.0 % ; ]> but was:<[]>
        at org.junit.Assert.assertEquals(Assert.java:115)
        at org.junit.Assert.assertEquals(Assert.java:144)
        at org.apache.hadoop.mapred.TestNetworkedJob.testNetworkedJob(TestNetworkedJob.java:174)
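
The ComparisonFailure is a plain JUnit assertEquals mismatch: the test expects the scheduler's diagnostics string for the application but the report comes back empty. A minimal sketch of that failure shape (assumed, not the actual TestNetworkedJob code; fetchDiagnostics is a hypothetical stand-in for the application-report lookup):

    import static org.junit.Assert.assertEquals;

    public class DiagnosticsAssertionSketch {
        // Hypothetical stand-in for reading the application's diagnostics from the
        // ResourceManager report; returns "" when no diagnostics have been recorded yet.
        static String fetchDiagnostics() {
            return "";
        }

        public static void main(String[] args) {
            String expected = "[Wed Feb 17 04:23:03 +0000 2016] Application is Activated, "
                    + "waiting for resources to be assigned for AM. ...";
            // Throws org.junit.ComparisonFailure: expected:<[...]> but was:<[]>
            // whenever the report is read before the scheduler fills in diagnostics.
            assertEquals(expected, fetchDiagnostics());
        }
    }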

