[ https://issues.apache.org/jira/browse/HADOOP-6029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12720484#action_12720484 ]

Jothi Padmanabhan commented on HADOOP-6029:
-------------------------------------------

Here are a couple of links that explain why the -Xmx value and the value
returned by maxMemory() can be different.

http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4391499
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4686462

A comment from the second link -- 

" ... freeMemory() and totalMemory()
report the amount of memory _inside_ the jvm while
maxMemory() reports on the amount of memory _outside_ the
jvm, i.e. the amount the whole jvm uses as seen from the OS."
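For reference, here is a minimal standalone sketch (the class name HeapReport is just for illustration, not part of the test) that prints the three Runtime values. Running it with an explicit heap limit shows that maxMemory() need not match the -Xmx setting exactly:

{code:java}
public class HeapReport {
  public static void main(String[] args) {
    Runtime rt = Runtime.getRuntime();
    // maxMemory() is the ceiling the heap is allowed to grow to; with some
    // collectors it is reported as slightly less than the -Xmx value passed
    // on the command line (see the first bug link above).
    System.out.println("maxMemory():   " + rt.maxMemory());
    // totalMemory()/freeMemory() describe the heap the JVM has committed and
    // the unused portion of it at this moment, not the -Xmx ceiling.
    System.out.println("totalMemory(): " + rt.totalMemory());
    System.out.println("freeMemory():  " + rt.freeMemory());
  }
}
{code}

For example, running it as {{java -Xmx64m HeapReport}} typically prints a maxMemory() value a little below 64 MB, so any test that assumes maxMemory() == -Xmx will be off by that margin.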

> TestReduceFetch failed.
> -----------------------
>
>                 Key: HADOOP-6029
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6029
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Tsz Wo (Nicholas), SZE
>         Attachments: 
> FAILING-PARTIALMEM-TEST-org.apache.hadoop.mapred.TestReduceFetch.txt, 
> TEST-org.apache.hadoop.mapred.TestReduceFetch.txt, 
> TEST-org.apache.hadoop.mapred.TestReduceFetch.txt
>
>
> {noformat}
> Testcase: testReduceFromMem took 23.625 sec
>       FAILED
> Non-zero read from local: 83
> junit.framework.AssertionFailedError: Non-zero read from local: 83
>       at 
> org.apache.hadoop.mapred.TestReduceFetch.testReduceFromMem(TestReduceFetch.java:289)
>       at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
>       at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
>       at junit.extensions.TestSetup.run(TestSetup.java:27)
> {noformat}
> Ran TestReduceFetch a few times on a clean trunk.  It failed consistently.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
