Hi TJ,
What's in the config and the log file on your "collector" machine? This error 
only tells you that the collector was unable to process the batch; the 
collector-tier Flume log file will likely have more information. 

Also, can you tell us what versions of Flume you are running on both tiers?
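For reference, the collector tier needs an Avro source listening on the port 
your agent's Avro sink points at (45678 in your config). A minimal sketch of 
such a collector config might look like the following -- the component names, 
bind address, channel sizes, and the logger sink are placeholders, not taken 
from your setup:

collector.sources = avroIn
collector.channels = memCh
collector.sinks = loggerOut

# Avro source listening for the agent tier's avro sink
collector.sources.avroIn.type = avro
collector.sources.avroIn.bind = 0.0.0.0
collector.sources.avroIn.port = 45678
collector.sources.avroIn.channels = memCh

# Memory channel; transactionCapacity must not exceed capacity
collector.channels.memCh.type = memory
collector.channels.memCh.capacity = 10000
collector.channels.memCh.transactionCapacity = 1000

# Logger sink, just to verify events arrive
collector.sinks.loggerOut.type = logger
collector.sinks.loggerOut.channel = memCh

If the collector's channel or sink can't keep up (or its channel's 
transactionCapacity is smaller than the batch the agent sends), the collector 
will return FAILED and you'd see exactly the "Status (FAILED) is not OK" 
error on the agent side.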

Thanks,
Mike


On Monday, May 14, 2012 at 6:54 PM, Tejinder Aulakh wrote:

> I'm getting the following error and not sure how to resolve it. The 
> configuration is provided below. The log file gets about 200 events/second. I 
> tried changing the memory channel config options but no luck. Any ideas how 
> to fix it?
> 
> I have an agent node which is tailing a log file and sending the events to 
> the collector server via avro sink. 
> 
> Error
> ====
> 2012-05-15 01:39:45,649 INFO lifecycle.LifecycleSupervisor: Starting 
> lifecycle supervisor 1
> 2012-05-15 01:39:45,650 INFO node.FlumeNode: Flume node starting - agent
> 2012-05-15 01:39:45,654 INFO nodemanager.DefaultLogicalNodeManager: Node 
> manager starting
> 2012-05-15 01:39:45,655 INFO lifecycle.LifecycleSupervisor: Starting 
> lifecycle supervisor 10
> 2012-05-15 01:39:45,655 INFO properties.PropertiesFileConfigurationProvider: 
> Configuration provider starting
> 2012-05-15 01:39:45,657 INFO properties.PropertiesFileConfigurationProvider: 
> Reloading configuration file:/etc/flume-ng/conf/flume.conf
> 2012-05-15 01:39:45,662 INFO properties.FlumeConfiguration: Post-validation 
> flume configuration contains configuation  for agents: [agent]
> 2012-05-15 01:39:45,681 INFO sink.DefaultSinkFactory: Creating instance of 
> sink myCustomAvroSink typeavro
> 2012-05-15 01:39:45,688 INFO nodemanager.DefaultLogicalNodeManager: Node 
> configuration change:{ sourceRunners:{myExecSource=EventDrivenSourceRunner: { 
> source:org.apache.flume.source.ExecSource@31f26605 }} 
> sinkRunners:{myCustomAvroSink=SinkRunner: { 
> policy:org.apache.flume.sink.DefaultSinkProcessor@2107ebe1 counterGroup:{ 
> name:null counters:{} } }} 
> channels:{myMemoryChannel=org.apache.flume.channel.MemoryChannel@f0f11b8} }
> 2012-05-15 01:39:45,689 INFO sink.AvroSink: Avro sink starting
> 2012-05-15 01:39:45,689 INFO source.ExecSource: Exec source starting with 
> command:tail -F /mnt/nginx/r.log
> 2012-05-15 01:39:55,077 ERROR api.NettyAvroRpcClient: Status (FAILED) is not 
> OK
> 2012-05-15 01:39:55,081 ERROR flume.SinkRunner: Unable to deliver event. 
> Exception follows.
> org.apache.flume.EventDeliveryException: Status (FAILED) is not OK
> at 
> org.apache.flume.api.NettyAvroRpcClient.waitForStatusOK(NettyAvroRpcClient.java:239)
> at 
> org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:221)
> at 
> org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:182)
> at org.apache.flume.sink.AvroSink.process(AvroSink.java:246)
> at 
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:65)
> at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> at java.lang.Thread.run(Thread.java:662)
> 
> 
> 
> flume.conf
> =======
> agent.channels = myMemoryChannel
> agent.sources = myExecSource
> agent.sinks = myCustomAvroSink
> 
> # Define a memory channel called myMemoryChannel
> agent.channels.myMemoryChannel.type = memory
> agent.channels.myMemoryChannel.capacity = 1000000
> agent.channels.myMemoryChannel.transactionCapacity = 1000000
> agent.channels.myMemoryChannel.keep-alive = 30
> 
> # Define an exec source called myExecChannel to tail log file
> agent.sources.myExecSource.channels = myMemoryChannel
> agent.sources.myExecSource.type = exec
> agent.sources.myExecSource.command = tail -F /mnt/nginx/r.log
> 
> # Define a custom avro sink called myCustomAvroSink
> agent.sinks.myCustomAvroSink.channel = myMemoryChannel
> agent.sinks.myCustomAvroSink.type = avro
> agent.sinks.myCustomAvroSink.hostname = {CollectorIP}.amazonaws.com
> agent.sinks.myCustomAvroSink.port = 45678
> 
> 
> 
> flume-env.sh 
> ==========
> JAVA_OPTS="-Xms7200m -Xmx8200m"
> 
> 
> Thanks,
> TJ 
