[ https://issues.apache.org/jira/browse/FLUME-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13402819#comment-13402819 ]
Juhani Connolly commented on FLUME-1326:
----------------------------------------
I set up GC logging and took heap dumps of a freshly started instance and of
one at OOM. I just realized I can't really post them because they contain
internal server logs, sorry -_-
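
For anyone wanting to reproduce this, both the GC log and an automatic heap
dump at OOM can be enabled with standard HotSpot options. A minimal sketch,
assuming the options are passed through JAVA_OPTS in flume-env.sh (the file
locations are placeholders, not the ones from this cluster):

    # flume-env.sh -- hypothetical example; adjust paths for your install
    JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
      -Xloggc:/var/log/flume/gc.log \
      -XX:+HeapDumpOnOutOfMemoryError \
      -XX:HeapDumpPath=/var/log/flume/heap.hprof"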
As far as the contents are concerned, it looks like about 5MB is retained by
MemoryChannel, virtually all of it in the message queue. The rest is spread
out, but most of the remaining buffers seem to be attached to various Buffers
and their owning streams.
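
That kind of retained-size breakdown can be pulled from a running agent or
from the .hprof file without sharing the dump itself. A minimal sketch,
assuming a JDK 6-era toolchain and a placeholder pid and filename:

    # class histogram of live objects on the running agent
    jmap -histo:live 12345 | head -30

    # or browse the OOM dump with jhat (Eclipse MAT also works)
    jhat -J-Xmx1g heap.hprof    # then open http://localhost:7000/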
It's possible that we're just legitimately running out of memory as the
various buffers happen to fill up, and that with a default-sized memory
channel this wouldn't happen. I'm going to try running for a while on 40MB
without changing anything else, keep an eye on the GC logs, and see what
happens.
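
For context, MemoryChannel capacity is set in the agent properties and is
counted in events rather than bytes. A minimal sketch of the relevant keys,
assuming a hypothetical agent named agent1 with a single memory channel ch1
(the values are illustrative, not the ones from this cluster):

    # agent properties -- hypothetical names and values
    agent1.channels = ch1
    agent1.channels.ch1.type = memory
    # maximum number of events the channel will hold; raising this raises the
    # worst-case heap retained by the channel
    agent1.channels.ch1.capacity = 100000
    # maximum events per transaction between a source/sink and the channel
    agent1.channels.ch1.transactionCapacity = 1000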
> OutOfMemoryError in HDFSSink
> ----------------------------
>
> Key: FLUME-1326
> URL: https://issues.apache.org/jira/browse/FLUME-1326
> Project: Flume
> Issue Type: Bug
> Affects Versions: v1.2.0
> Reporter: Juhani Connolly
> Priority: Critical
>
> We run a 3-node/1-collector test cluster pushing about 350 events/sec per
> node... not really high stress, just something to evaluate Flume with.
> Our collector has consistently been dying because of an OOMError killing the
> SinkRunner after running for about 30-40 hours (this seems pretty
> consistent, as we've hit it 3 times now).
> The suspected cause is a memory leak somewhere in HDFSSink. The feeder
> nodes, which run AvroSink instead of HDFSSink, have been up and running for
> about a week without restarts.
> flume-load/act-wap02/2012-06-26-17.1340697637324.tmp, packetSize=65557, chunksPerPacket=127, bytesCurBlock=29731328
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> 2012-06-26 17:12:56,080 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:411)] process failed
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>         at java.util.Arrays.copyOfRange(Arrays.java:3209)
>         at java.lang.String.<init>(String.java:215)
>         at java.lang.StringBuilder.toString(StringBuilder.java:430)
>         at org.apache.flume.formatter.output.BucketPath.escapeString(BucketPath.java:306)
>         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:367)
>         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:619)
> Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.OutOfMemoryError: GC overhead limit exceeded
>         at java.util.Arrays.copyOfRange(Arrays.java:3209)
>         at java.lang.String.<init>(String.java:215)
>         at java.lang.StringBuilder.toString(StringBuilder.java:430)
>         at org.apache.flume.formatter.output.BucketPath.escapeString(BucketPath.java:306)
>         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:367)
>         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:619)