[
https://issues.apache.org/jira/browse/FLUME-2403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14032186#comment-14032186
]
liyunzhang commented on FLUME-2403:
-----------------------------------
I cannot reproduce the bug with your configuration.
My config:
a1.channels = c1
a1.sources = r1
a1.sinks = k1
a1.sources.r1.type=flume.custom.source.BigMessageSource
a1.sinks.k1.type = logger
a1.channels.c1.type = SPILLABLEMEMORY
a1.channels.c1.memoryCapacity = 10000
a1.channels.c1.overflowCapacity = 1000000
a1.channels.c1.byteCapacity = 800000
a1.channels.c1.checkpointDir = /mnt/flume/checkpoint
a1.channels.c1.dataDirs = /mnt/flume/data
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
I set JAVA_OPTS in flume-env.sh
JAVA_OPTS="-Xms256m -Xmx512m"
The size of the message is about 500K:
#du -h /tmp/books.xml
456K /tmp/books.xml
I use books.xml as the message. The source sends the message every 10 seconds.
I ran the Flume service for several hours but the OOME did not appear.
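A minimal sketch of such a source, assuming it reads /tmp/books.xml once and
emits one event every 10 seconds (the actual BigMessageSource implementation
may differ in details):
{code}
package flume.custom.source;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.FlumeException;
import org.apache.flume.PollableSource;
import org.apache.flume.conf.Configurable;
import org.apache.flume.event.EventBuilder;
import org.apache.flume.source.AbstractSource;

public class BigMessageSource extends AbstractSource
    implements Configurable, PollableSource {

  private byte[] body;

  @Override
  public void configure(Context context) {
    try {
      // ~456K payload used as the event body
      body = Files.readAllBytes(Paths.get("/tmp/books.xml"));
    } catch (IOException e) {
      throw new FlumeException("Unable to read /tmp/books.xml", e);
    }
  }

  @Override
  public Status process() throws EventDeliveryException {
    // hand one large event to the configured channel (c1)
    Event event = EventBuilder.withBody(body);
    getChannelProcessor().processEvent(event);
    try {
      Thread.sleep(10000L); // one event every 10 seconds
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return Status.READY;
  }
}
{code}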
> Spillable Memory Channel causes OOME for large messages.
> --------------------------------------------------------
>
> Key: FLUME-2403
> URL: https://issues.apache.org/jira/browse/FLUME-2403
> Project: Flume
> Issue Type: Bug
> Components: Channel
> Affects Versions: v1.5.0
> Reporter: Adam Gent
> Priority: Critical
>
> The spillable memory channel will fail rather badly on large messages.
> {code}
> Error while writing to required channel: FileChannel es1 { dataDirs: [/var/lib/flume/data] }
> java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2271)
> at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
> at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
> at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
> at com.google.protobuf.CodedOutputStream.writeRawBytes(CodedOutputStream.java:984)
> at com.google.protobuf.CodedOutputStream.writeRawBytes(CodedOutputStream.java:905)
> at com.google.protobuf.CodedOutputStream.writeBytesNoTag(CodedOutputStream.java:386)
> at com.google.protobuf.CodedOutputStream.writeBytes(CodedOutputStream.java:229)
> at org.apache.flume.channel.file.proto.ProtosFactory$FlumeEvent.writeTo(ProtosFactory.java:6259)
> at com.google.protobuf.CodedOutputStream.writeMessageNoTag(CodedOutputStream.java:380)
> at com.google.protobuf.CodedOutputStream.writeMessage(CodedOutputStream.java:222)
> at org.apache.flume.channel.file.proto.ProtosFactory$Put.writeTo(ProtosFactory.java:4112)
> at com.google.protobuf.AbstractMessageLite.writeDelimitedTo(AbstractMessageLite.java:90)
> at org.apache.flume.channel.file.Put.writeProtos(Put.java:93)
> at org.apache.flume.channel.file.TransactionEventRecord.toByteBuffer(TransactionEventRecord.java:174)
> at org.apache.flume.channel.file.Log.put(Log.java:611)
> at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doPut(FileChannel.java:458)
> at org.apache.flume.channel.BasicTransactionSemantics.put(BasicTransactionSemantics.java:93)
> at org.apache.flume.channel.SpillableMemoryChannel$SpillableMemoryTransaction.commitPutsToOverflow(SpillableMemoryChannel.java:490)
> at org.apache.flume.channel.SpillableMemoryChannel$SpillableMemoryTransaction.putCommit(SpillableMemoryChannel.java:480)
> at org.apache.flume.channel.SpillableMemoryChannel$SpillableMemoryTransaction.doCommit(SpillableMemoryChannel.java:401)
> at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)
> at org.apache.flume.channel.ChannelProcessor.processEvent(ChannelProcessor.java:267)
> at org.apache.flume.source.rabbitmq.RabbitMQSource.process(RabbitMQSource.java:162)
> at org.apache.flume.source.PollableSourceRunner$PollingRunner.run(PollableSourceRunner.java:139)
> at java.lang.Thread.run(Thread.java:744)
> {code}
> My config:
> {code}
> agent.channels.es1.type = SPILLABLEMEMORY
> agent.channels.es1.memoryCapacity = 10000
> agent.channels.es1.overflowCapacity = 1000000
> agent.channels.es1.byteCapacity = 800000
> agent.channels.es1.checkpointDir = /var/lib/flume/checkpoint
> agent.channels.es1.dataDirs = /var/lib/flume/data
> {code}
> I haven't looked at the code, but I have some concerns, such as why a
> ByteArrayOutputStream is being used instead of some other buffered stream
> writing directly to the file system. Perhaps it's because of the transactional
> nature, but I'm pretty sure you can write to the filesystem and roll back;
> Kafka and modern databases do this with fsync.
> One could argue that I should just raise the max heap, but this message is
> coming from RabbitMQ, which had no issue holding on to the message (I believe
> the message is around 500K).
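For reference, the allocation pattern in the stack trace comes from serializing
the whole event into an in-memory buffer before it reaches the data file. Below
is a minimal sketch contrasting that pattern with the streaming alternative the
report alludes to; the class, file argument, and chunk size are illustrative
assumptions, not Flume code:
{code}
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class BufferVsStream {

  // Pattern visible in the stack trace: the serialized event grows inside a
  // ByteArrayOutputStream, so a ~500K body needs a contiguous ~500K byte[]
  // on the heap, plus temporary copies while the buffer repeatedly doubles.
  static ByteBuffer bufferInHeap(byte[] body) {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    bos.write(body, 0, body.length);
    return ByteBuffer.wrap(bos.toByteArray()); // yet another full copy
  }

  // Alternative the report suggests: write the body to the data file in
  // fixed-size chunks and fsync on commit, keeping heap usage bounded
  // regardless of event size.
  static void streamToDisk(byte[] body, File dataFile) throws IOException {
    FileOutputStream out = new FileOutputStream(dataFile, true);
    try {
      final int chunk = 64 * 1024;
      for (int off = 0; off < body.length; off += chunk) {
        out.write(body, off, Math.min(chunk, body.length - off));
      }
      out.getFD().sync(); // fsync, as Kafka and databases do on commit
    } finally {
      out.close();
    }
  }
}
{code}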
--
This message was sent by Atlassian JIRA
(v6.2#6252)