[ https://issues.apache.org/jira/browse/STORM-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15008018#comment-15008018 ]

Zhuo Liu commented on STORM-1206:
---------------------------------

With the default "gc.log" setting, this issue will not happen, since at most 
10 GC files are generated per worker (named gc.log.0.current, gc.log.1, 
gc.log.2, ..., gc.log.9). 
It happens at Yahoo because we configure GC logging as 
"-Xloggc:artifacts/gc.%p.log"; the %p expands to the process ID, so every 
worker restart produces a new GC log file.

> Reduce logviewer memory usage
> -----------------------------
>
>                 Key: STORM-1206
>                 URL: https://issues.apache.org/jira/browse/STORM-1206
>             Project: Apache Storm
>          Issue Type: Improvement
>          Components: storm-core
>            Reporter: Zhuo Liu
>            Assignee: Zhuo Liu
>
> In production, we ran into an issue with logviewers bouncing with 
> out-of-memory errors. Note that this happens very rarely; we hit it in an 
> extreme case where very frequent worker restarts generated a huge number 
> of GC log files (~1M files).
> What was happening is that with that many log files for a particular 
> headless user, so many strings were resident in memory that the logviewer 
> would run out of heap space.
> We were able to work around this by increasing the heap space, but we 
> should consider putting some sort of upper bound on the number of files so 
> that we don't run into this issue even with the bigger heap.
> Using the Java DirectoryStream API can avoid holding all file names in 
> memory during file listing. Also, a multi-round directory cleaner can be 
> introduced that deletes files while the disk quota is exceeded.
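The DirectoryStream idea mentioned in the description can be sketched as
follows. This is a minimal illustration, not the actual logviewer code; the
class and method names here are made up for the example. The point is that
`Files.newDirectoryStream` iterates entries lazily, so only one `Path` is
held at a time instead of a full listing of ~1M file names:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class LogDirScan {
    // Counts entries in a directory by streaming them one at a time.
    // Unlike File.listFiles(), which materializes the entire listing as an
    // array, DirectoryStream keeps memory bounded regardless of how many
    // files the directory contains.
    public static long countEntries(Path dir) throws IOException {
        long count = 0;
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path entry : stream) {
                // Each entry can be examined (or deleted) here; only the
                // current Path is resident in memory.
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Paths.get(args.length > 0 ? args[0] : ".");
        System.out.println(countEntries(dir));
    }
}
```

A multi-round cleaner would repeat such a pass, deleting the oldest entries
seen in each round, until the directory is back under quota.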



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)