[ https://issues.apache.org/jira/browse/FLUME-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13407986#comment-13407986 ]

thinker0 commented on FLUME-1326:
---------------------------------

{code}

2012-07-06T22:27:00.413+0900: [GC [1 CMS-initial-mark: 136575K(136576K)] 198009K(198016K), 0.0350390 secs] [Times: user=0.03 sys=0.00, real=0.04 secs]
2012-07-06T22:27:00.449+0900: [Full GC [CMS2012-07-06T22:27:00.608+0900: [CMS-concurrent-mark: 0.159/0.160 secs] [Times: user=0.16 sys=0.00, real=0.16 secs]
 (concurrent mode failure): 136575K->136575K(136576K), 0.5808260 secs] 198014K->197895K(198016K), [CMS Perm : 20375K->20375K(34192K)] icms_dc=100 , 0.5809580 secs] [Times: user=0.58 sys=0.00, real=0.58 secs]
2012-07-06T22:27:01.030+0900: [GC [1 CMS-initial-mark: 136575K(136576K)] 197897K(198016K), 0.0354160 secs] [Times: user=0.04 sys=0.00, real=0.03 secs]
Heap
 par new generation   total 61440K, used 61435K [0x00000000ee600000, 0x00000000f28a0000, 0x00000000f28a0000)
  eden space 54656K, 100% used [0x00000000ee600000, 0x00000000f1b60000, 0x00000000f1b60000)
  from space 6784K,  99% used [0x00000000f2200000, 0x00000000f289efa8, 0x00000000f28a0000)
  to   space 6784K,   0% used [0x00000000f1b60000, 0x00000000f1b60000, 0x00000000f2200000)
 concurrent mark-sweep generation total 136576K, used 136575K [0x00000000f28a0000, 0x00000000fae00000, 0x00000000fae00000)
 concurrent-mark-sweep perm gen total 34192K, used 20375K [0x00000000fae00000, 0x00000000fcf64000, 0x0000000100000000)

2012-07-06T22:27:01.069+0900: [Full GC [CMS2012-07-06T22:27:01.226+0900: [CMS-concurrent-mark: 0.159/0.160 secs] [Times: user=0.16 sys=0.00, real=0.17 secs]
 (concurrent mode failure): 136575K->136575K(136576K), 0.5796240 secs] 198013K->197886K(198016K), [CMS Perm : 20375K->20375K(34192K)] icms_dc=100 , 0.5797590 secs] [Times: user=0.58 sys=0.00, real=0.58 secs]
2012-07-06T22:27:01.650+0900: [GC [1 CMS-initial-mark: 136575K(136576K)] 197967K(198016K), 0.0351150 secs] [Times: user=0.04 sys=0.00, real=0.03 secs]
2012-07-06T22:27:01.685+0900: [Full GC [CMS2012-07-06T22:27:01.845+0900: [CMS-concurrent-mark: 0.159/0.160 secs] [Times: user=0.16 sys=0.00, real=0.16 secs]
 (concurrent mode failure): 136575K->136575K(136576K), 0.6167080 secs] 198008K->197894K(198016K), [CMS Perm : 20375K->20375K(34192K)] icms_dc=100 , 0.6170540 secs] [Times: user=0.62 sys=0.00, real=0.62 secs]
2012-07-06T22:27:02.303+0900: [GC [1 CMS-initial-mark: 136575K(136576K)] 197918K(198016K), 0.0352080 secs] [Times: user=0.03 sys=0.00, real=0.04 secs]
2012-07-06T22:27:02.339+0900: [Full GC [CMS2012-07-06T22:27:02.498+0900: [CMS-concurrent-mark: 0.159/0.160 secs] [Times: user=0.16 sys=0.00, real=0.16 secs]
 (concurrent mode failure): 136575K->136575K(136576K), 0.5811630 secs] 198014K->197913K(198016K), [CMS Perm : 20375K->20375K(34192K)] icms_dc=100 , 0.5812980 secs] [Times: user=0.58 sys=0.00, real=0.58 secs]
{code}
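
The log above shows the CMS old generation pinned at 136575K of its 136576K capacity and back-to-back "concurrent mode failure" full GCs, i.e. the roughly 193 MB heap is completely exhausted. As a minimal sketch (assuming the agent is started through bin/flume-ng and picks up JAVA_OPTS from conf/flume-env.sh; the sizes and paths are illustrative, not from this report), the heap can be enlarged and equivalent GC logging enabled like this:

{code}
# conf/flume-env.sh -- illustrative values, adjust to the host's memory budget.
# A larger -Xmx gives the HDFS sink headroom; the GC flags reproduce the kind
# of log shown above so before/after behaviour can be compared.
JAVA_OPTS="-Xms512m -Xmx512m \
  -XX:+UseConcMarkSweepGC \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -Xloggc:/var/log/flume/gc.log"
{code}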
                
> OutOfMemoryError in HDFSSink
> ----------------------------
>
>                 Key: FLUME-1326
>                 URL: https://issues.apache.org/jira/browse/FLUME-1326
>             Project: Flume
>          Issue Type: Bug
>    Affects Versions: v1.2.0
>            Reporter: Juhani Connolly
>            Priority: Critical
>
> We run a 3-node/1-collector test cluster pushing about 350 events/sec per 
> node... not really high stress, just something to evaluate Flume with.
> Our collector has consistently been dying because an OOMError kills the 
> SinkRunner after running for about 30-40 hours (this seems pretty consistent, 
> as we've hit it 3 times now).
> The suspected cause is a memory leak somewhere in HdfsSink. The feeder 
> nodes, which run AvroSink instead of HdfsSink, have been up and running for 
> about a week without restarts.
> flume-load/act-wap02/2012-06-26-17.1340697637324.tmp, packetSize=65557, chunksPerPacket=127, bytesCurBlock=29731328
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> 2012-06-26 17:12:56,080 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:411)] process failed
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>         at java.util.Arrays.copyOfRange(Arrays.java:3209)
>         at java.lang.String.<init>(String.java:215)
>         at java.lang.StringBuilder.toString(StringBuilder.java:430)
>         at org.apache.flume.formatter.output.BucketPath.escapeString(BucketPath.java:306)
>         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:367)
>         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:619)
> Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.OutOfMemoryError: GC overhead limit exceeded
>         at java.util.Arrays.copyOfRange(Arrays.java:3209)
>         at java.lang.String.<init>(String.java:215)
>         at java.lang.StringBuilder.toString(StringBuilder.java:430)
>         at org.apache.flume.formatter.output.BucketPath.escapeString(BucketPath.java:306)
>         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:367)
>         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:619)
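
To move from a suspected memory leak to evidence, a heap snapshot at or near the failure is the most direct check. A minimal sketch, assuming a HotSpot JVM and that the collector agent's PID is known (the path and the PID placeholder below are illustrative, not from this report):

{code}
# Added to the agent's JAVA_OPTS: dump the heap automatically when the OOM hits.
#   -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/flume

# Alternatively, sample the live heap while the agent is still running;
# classes whose instance counts only ever grow between samples are leak candidates.
jmap -histo:live <flume-agent-pid> | head -n 30
{code}

Comparing histograms taken a few hours apart on the collector (HDFSSink) and on a feeder (AvroSink) would show which classes accumulate only on the HDFS path.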

