Saw the same error on https://issues.apache.org/jira/browse/HADOOP-4302

junit.framework.AssertionFailedError
        at org.apache.hadoop.mapred.TestReduceFetch.testReduceFromPartialMem(TestReduceFetch.java:119)
        at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
        at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
        at junit.extensions.TestSetup.run(TestSetup.java:23)


Chris, can you shed some light on what this test does, and whether it matters 
if this test fails?


============================ 
[zshao branch-0.20] svn annotate ./src/test/org/apache/hadoop/mapred/TestReduceFetch.java
694459   cdouglas   public void testReduceFromPartialMem() throws Exception {
694459   cdouglas     JobConf job = mrCluster.createJobConf();
700918   cdouglas     job.setNumMapTasks(5);
700918   cdouglas     job.setInt("mapred.inmem.merge.threshold", 0);
694459   cdouglas     job.set("mapred.job.reduce.input.buffer.percent", "1.0");
700918   cdouglas     job.setInt("mapred.reduce.parallel.copies", 1);
700918   cdouglas     job.setInt("io.sort.mb", 10);
700918   cdouglas     job.set("mapred.child.java.opts", "-Xmx128m");
700918   cdouglas     job.set("mapred.job.shuffle.input.buffer.percent", 
"0.14");
700918   cdouglas     job.setNumTasksToExecutePerJvm(1);
700918   cdouglas     job.set("mapred.job.shuffle.merge.percent", "1.0");
694459   cdouglas     Counters c = runJob(job);
718229       ddas     final long hdfsWritten = c.findCounter(Task.FILESYSTEM_COUNTER_GROUP,
718229       ddas         Task.getFileSystemCounterNames("hdfs")[1]).getCounter();
718229       ddas     final long localRead = c.findCounter(Task.FILESYSTEM_COUNTER_GROUP,
718229       ddas         Task.getFileSystemCounterNames("file")[0]).getCounter();
700918   cdouglas     assertTrue("Expected at least 1MB fewer bytes read from local (" +
700918   cdouglas         localRead + ") than written to HDFS (" + hdfsWritten + ")",
700918   cdouglas         hdfsWritten >= localRead + 1024 * 1024);
694459   cdouglas   }
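
For what it's worth, my reading of the assertion above: it compares two per-job
filesystem counters, bytes written to HDFS against bytes read from the local
filesystem, and expects the local reads to be at least 1 MB smaller, the idea
being that map outputs kept in memory during the shuffle never have to be
re-read from local disk. A minimal sketch of just that check, assuming the
0.20-era counter API the test uses (the class and method names below are mine,
not in the tree):

package org.apache.hadoop.mapred; // same package as the test; Task's counter fields are not public API

public class PartialMemCheck {
  // True when the reduce side read at least 1 MB less from local disk than
  // the job wrote to HDFS, i.e. part of the map output never hit local disk.
  static boolean shuffledPartlyFromMemory(Counters c) {
    final long hdfsWritten = c.findCounter(Task.FILESYSTEM_COUNTER_GROUP,
        Task.getFileSystemCounterNames("hdfs")[1]).getCounter();  // HDFS bytes written
    final long localRead = c.findCounter(Task.FILESYSTEM_COUNTER_GROUP,
        Task.getFileSystemCounterNames("file")[0]).getCounter();  // local FS bytes read
    return hdfsWritten >= localRead + 1024 * 1024;
  }
}

In the run Rama reports below, local reads (21135390) actually exceed HDFS
writes (21012480), so the check fails by a wide margin.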


[zshao branch-0.20] svn log ./src/test/org/apache/hadoop/mapred/TestReduceFetch.java
...
------------------------------------------------------------------------
r718229 | ddas | 2008-11-17 04:23:15 -0800 (Mon, 17 Nov 2008) | 1 line

HADOOP-4188. Removes task's dependency on concrete filesystems. Contributed by Sharad Agarwal.
------------------------------------------------------------------------
r700918 | cdouglas | 2008-10-01 13:57:36 -0700 (Wed, 01 Oct 2008) | 3 lines

HADOOP-4302. Fix a race condition in TestReduceFetch that can yield false
negatives.

------------------------------------------------------------------------
r696640 | ddas | 2008-09-18 04:47:59 -0700 (Thu, 18 Sep 2008) | 1 line

HADOOP-3829. Narrow down skipped records based on user acceptable value. Contributed by Sharad Agarwal.
------------------------------------------------------------------------
r694459 | cdouglas | 2008-09-11 13:26:11 -0700 (Thu, 11 Sep 2008) | 5 lines

HADOOP-3446. Keep map outputs in memory during the reduce. Remove
fs.inmemory.size.mb and replace with properties defining in memory map
output retention during the shuffle and reduce relative to maximum heap
usage.

------------------------------------------------------------------------

============================
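
For context on the knobs involved: per the HADOOP-3446 log entry above,
fs.inmemory.size.mb was replaced by properties that size in-memory map output
retention relative to the heap. Here is my reading of the ones the test sets
(values copied from the annotate output; the comments are my understanding of
0.20, not authoritative, and the helper name is hypothetical):

import org.apache.hadoop.mapred.JobConf;

public class PartialMemShuffleConf {
  // Mirrors the settings testReduceFromPartialMem uses to force a partly
  // in-memory shuffle.
  static void configure(JobConf job) {
    // Fraction of the reduce task's heap used to buffer map outputs during the shuffle.
    job.set("mapred.job.shuffle.input.buffer.percent", "0.14");
    // Start an in-memory merge only when that buffer is completely full.
    job.set("mapred.job.shuffle.merge.percent", "1.0");
    // 0 disables the "merge after N in-memory map outputs" trigger.
    job.setInt("mapred.inmem.merge.threshold", 0);
    // Fraction of the heap allowed to keep retaining map outputs once the
    // reduce starts; 1.0 lets the reduce consume them straight from memory.
    job.set("mapred.job.reduce.input.buffer.percent", "1.0");
  }
}

With -Xmx128m and only 0.14 of the heap for the shuffle buffer, the test seems
to run close to the line, so I could imagine 64-bit object sizes changing how
much actually stays in memory, but that is just a guess.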


Zheng
-----Original Message-----
From: Rama Ramasamy [mailto:[email protected]] 
Sent: Tuesday, August 04, 2009 5:19 PM
To: [email protected]
Subject: TestReduceFetch fails on 64 bit java on branch 0.20 and y! hadoop 0.20.1


With JAVA_HOME set to a 64-bit JVM, "ant test -Dtestcase=TestReduceFetch" fails
with the following error message:

--
junit.framework.AssertionFailedError: Expected at least 1MB fewer bytes read from local (21135390) than written to HDFS (21012480)
        at org.apache.hadoop.mapred.TestReduceFetch.testReduceFromPartialMem(TestReduceFetch.java:133)
...



The same test succeeds with a 32-bit JVM. Java version "1.6.0_07" is used in
both cases.

This error is reproducible in both Hadoop 0.20 and Hadoop 0.20.1 (Yahoo's release).

Anybody seen this issue?

Thanks
Rama
