Hello Brock,
My Flume configuration is:
agent_foo.sources = tailsource-1
agent_foo.channels = fileChannel
agent_foo.sinks = hdfsSink-1
agent_foo.sources.tailsource-1.type = com.chinacache.cpis.sources.CpisAvroSource
agent_foo.sources.tailsource-1.channels=fileChannel
agent_foo.sources.tailsource-1.bind=221.130.18.90
agent_foo.sources.tailsource-1.port=4545
#agent_foo.sources.tailsource-1.threads=10
agent_foo.channels.fileChannel.type = file
agent_foo.channels.fileChannel.checkpointDir=/home/hadoop/checkpoint
agent_foo.channels.fileChannel.dataDirs=/data/cache2/data
agent_foo.channels.fileChannel.maxFileSize=524288000
#agent_foo.channels.fileChannel.capacity = 10000
agent_foo.sinks.hdfsSink-1.type = hdfs
agent_foo.sinks.hdfsSink-1.channel = fileChannel
agent_foo.sinks.hdfsSink-1.hdfs.path = hdfs://CMN-NJ-2-579:9000/user/hadoop/fc_logs/%{month}/%{day}/%{deviceId}/%{hour}
#agent_foo.sinks.hdfsSink-1.hdfs.path = hdfs://CMN-NJ-2-579:9000/user/hadoop/fc_logs/%{month}/%{day}/%{hour}/%{deviceId}
agent_foo.sinks.hdfsSink-1.hdfs.filePrefix = cpisfc-assess.log
agent_foo.sinks.hdfsSink-1.hdfs.rollInterval=600
agent_foo.sinks.hdfsSink-1.hdfs.rollCount=0
agent_foo.sinks.hdfsSink-1.hdfs.rollSize = 506870912
agent_foo.sinks.hdfsSink-1.hdfs.fileType=CompressedStream
agent_foo.sinks.hdfsSink-1.hdfs.codeC=gzip
agent_foo.sinks.hdfsSink-1.hdfs.writeFormat=Text
agent_foo.sinks.hdfsSink-1.hdfs.batchSize=1
agent_foo.sinks.hdfsSink-1.serializer=avro_event
And the Java options were JAVA_OPTS="-Xms100m -Xmx2048m -Dcom.sun.management.jmxremote".
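For reference, a minimal sketch of where these options are usually set so the agent JVM picks them up: conf/flume-env.sh in the Flume install directory (the file path and startup mechanism here are assumptions based on a standard Flume 1.x install, not something stated in this thread):

```shell
# conf/flume-env.sh -- sketch; assumes a standard Flume 1.x layout
# bin/flume-ng sources this file and passes JAVA_OPTS to the agent JVM,
# so the heap limits below apply to the running agent process.
JAVA_OPTS="-Xms100m -Xmx2048m -Dcom.sun.management.jmxremote"
```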
Thank you very much!
Yanzhi Liu
------------------ Original Message ------------------
From: "Brock Noland" <[email protected]>
Sent: Tuesday, October 16, 2012, 11:01 AM
To: "user" <[email protected]>
Subject: Re: About file channel yum error
What is your channel capacity and configured agent heap size?
On Monday, October 15, 2012 at 9:19 PM, Yanzhi.liu wrote:
Hello everyone,
I am using Flume 1.3.0. The Flume cluster stopped after running for two days, and the only error in flume.log was:
14 Oct 2012 15:29:17,018 ERROR [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.HDFSEventSink.process:448) - process failed
java.lang.OutOfMemoryError
    at java.io.RandomAccessFile.readBytes(Native Method)
    at java.io.RandomAccessFile.read(RandomAccessFile.java:338)
    at java.io.RandomAccessFile.readFully(RandomAccessFile.java:397)
    at java.io.RandomAccessFile.readFully(RandomAccessFile.java:377)
    at org.apache.flume.channel.file.LogFile.readDelimitedBuffer(LogFile.java:486)
    at org.apache.flume.channel.file.LogFileV3$RandomReader.doGet(LogFileV3.java:258)
    at org.apache.flume.channel.file.LogFile$RandomReader.get(LogFile.java:298)
    at org.apache.flume.channel.file.Log.get(Log.java:409)
    at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:447)
    at org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
    at org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:91)
    at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:387)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
    at java.lang.Thread.run(Thread.java:662)
So I want to know how to avoid this error.
Thanks very much!
Yanzhi Liu