Hi Alex,

I don't see any errors or warnings with DEBUG.

Here is the problem: if I try to read a line larger than about 2.5 MB and send it 
via the memory channel, the sink loses it and doesn't receive anything afterwards.

Here is the configuration I tried:

Source: exec. I tried both the cat/tail commands and a Java program that reads 
lines from a file and prints them to stdout.
Channel: memory channel and file channel. I believe the memory channel's capacity 
is the size of the event queue, which shouldn't affect the maximum size of a 
single event.
Sink: both HDFS and File-Roll. They are able to receive short events until the 
large events come.
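
For reference, here is a minimal sketch of the kind of agent configuration I'm 
describing (the agent/component names, paths, and capacity values below are 
placeholders, not my exact settings):

```properties
# Placeholder agent "a1" with one exec source, memory channel, file-roll sink
a1.sources = src1
a1.channels = ch1
a1.sinks = snk1

# Exec source tailing the file whose lines can be several MB
a1.sources.src1.type = exec
a1.sources.src1.command = tail -F /path/to/large-lines.log
a1.sources.src1.channels = ch1

# Memory channel; capacity counts events, not bytes
a1.channels.ch1.type = memory
a1.channels.ch1.capacity = 10000
a1.channels.ch1.transactionCapacity = 100

# File-roll sink writing events to local disk
a1.sinks.snk1.type = file_roll
a1.sinks.snk1.sink.directory = /var/log/flume-out
a1.sinks.snk1.channel = ch1
```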

Can you suggest what to configure? I'm using Flume 1.2.

Thanks,

Kevin

On Nov 23, 2012, at 6:34 PM, Alexander Alten-Lorenz <[email protected]> wrote:

Kevin,

This depends on your memory configuration. When you start Flume with DEBUG, do 
you get any errors or warnings?

Cheers,
 Alex

P.S. http://flume.apache.org/FlumeUserGuide.html#exec-source


On Nov 23, 2012, at 10:29 AM, Lichen <[email protected]> wrote:

> Hi all,
> 
> I tried to use an exec source to tail a file in which each line is a few MB. 
> The File-Roll/HDFS sinks received nothing in this case, while they work fine 
> with shorter lines. Is this because of a maximum size for a single event? 
> Which parameter should I set, if it's configurable?
> 
> Thanks
> 
> Kevin

--
Alexander Alten-Lorenz
http://mapredit.blogspot.com
German Hadoop LinkedIn Group: http://goo.gl/N8pCF
