[ https://issues.apache.org/jira/browse/MAPREDUCE-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544343#comment-14544343 ]

Ray Chiang commented on MAPREDUCE-6222:
---------------------------------------

That's fair enough.  I'm not a UI guy (to put it generously), so the best I 
could think of was to put up a line like:

    Table contains tasks 1 through 6000 out of 404000

before the table.

The new tasks URL determines the page size and which page to display 
dynamically, so more advanced users can certainly grab larger chunks for 
their searches if needed.
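
For example, a request for a later slice might look like the following. The 
pagenum/pagesize parameter names here are illustrative assumptions, not 
necessarily the ones used in the patch; the /jobhistory/tasks path and port 
19888 are the standard HistoryServer web UI:

    http://<jhs-host>:19888/jobhistory/tasks/job_1420000000000_0001/m?pagenum=2&pagesize=6000

Under those assumptions, that request would fetch map tasks 6001 through 
12000 for that job.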

I'm open to any suggestions for making this cleaner, better, or clearer.

> HistoryServer Hangs Processing Large Jobs
> -----------------------------------------
>
>                 Key: MAPREDUCE-6222
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6222
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: Andrew Johnson
>            Assignee: Ray Chiang
>              Labels: BB2015-05-TBR
>         Attachments: JHS New Display Top.png, JHS Original Display Top.png, 
> MAPREDUCE-6222.001.patch, MAPREDUCE-6222.002.patch, MAPREDUCE-6222.003.patch, 
> MAPREDUCE-6222.005.patch, MAPREDUCE-6222.006.patch, MAPREDUCE-6222.007.patch, 
> MAPREDUCE-6222.008.patch, head.jhist, historyserver_jstack.txt
>
>
> I'm encountering an issue with the MapReduce HistoryServer processing the 
> history files for large jobs.  This has come up several times for jobs with 
> around 60000 total tasks.  When the HistoryServer loaded the .jhist file 
> from HDFS for a job of that size (usually around 500 MB), the 
> HistoryServer's CPU usage spiked and the UI became unresponsive.  After 
> about 10 minutes I restarted the HistoryServer and it was behaving normally 
> again.
> The cluster is running CDH 5.3 (2.5.0-cdh5.3.0).  I've attached the output 
> of jstack from a time when this was occurring.
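
For reference, a thread dump like the attached historyserver_jstack.txt can 
be captured with the standard JDK tools; the jps lookup and the PID shown 
here are illustrative:

    $ jps | grep JobHistoryServer
    21437 JobHistoryServer
    $ jstack 21437 > historyserver_jstack.txt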



