[
https://issues.apache.org/jira/browse/MAPREDUCE-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13867535#comment-13867535
]
Tsz Wo (Nicholas), SZE commented on MAPREDUCE-5715:
---------------------------------------------------
It looks like the exception came from parsing the utime value below.
{code}
// Set (name) (ppid) (pgrpId) (session) (utime) (stime) (vsize) (rss)
pinfo.updateProcessInfo(m.group(2), m.group(3),
    Integer.parseInt(m.group(4)), Integer.parseInt(m.group(5)),
    Long.parseLong(m.group(7)), new BigInteger(m.group(8)),
    Long.parseLong(m.group(10)), Long.parseLong(m.group(11)));
{code}
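A minimal sketch (not the actual Hadoop fix) showing why {{Long.parseLong}} throws on the utime value quoted in this issue while {{BigInteger}}, as used for stime since MAPREDUCE-3583, accepts it:
{code}
import java.math.BigInteger;

public class UtimeParseDemo {
    public static void main(String[] args) {
        // utime as reported in this issue; larger than
        // Long.MAX_VALUE (9223372036854775807).
        String utime = "184467440737095551615";

        try {
            Long.parseLong(utime); // throws NumberFormatException
        } catch (NumberFormatException e) {
            System.out.println("Long.parseLong failed: " + e.getMessage());
        }

        // BigInteger handles arbitrarily large values, as was done
        // for stime (m.group(8)) in MAPREDUCE-3583.
        BigInteger big = new BigInteger(utime);
        System.out.println("BigInteger value: " + big);
    }
}
{code}
A similar change for the utime argument would avoid the exception, at the cost of doing the subsequent accounting arithmetic on BigInteger.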
> ProcfsBasedProcessTree#constructProcessInfo() can still throw
> NumberFormatException
> -----------------------------------------------------------------------------------
>
> Key: MAPREDUCE-5715
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5715
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Affects Versions: trunk, 2.2.0
> Environment: Ubuntu 13.04 (OS Kernel 3.9.0), Armv7l Exynos5440
> Reporter: German Florez-Larrahondo
> Priority: Minor
> Attachments: constructprocessfailing.jpg
>
>
> For long running jobs I have hit an issue that seems to be similar to
> the bug reported in https://issues.apache.org/jira/browse/MAPREDUCE-3583
> Unfortunately I do not have the OS logs for this issue, but the utime for the
> application was read by Hadoop as "184467440737095551615", which does not fit
> into a Long. In MAPREDUCE-3583 a change was made to
> ProcfsBasedProcessTree.java
> in order to support larger values for stime. Perhaps we need to support
> larger values for utime as well (although this could increase the complexity
> of the math that is being performed on those numbers).
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)