You forgot to send the conf file.
Anyway, HDFS sink files are rolled similarly to Log4j logs: by time, by
size, or by event count, whichever limit is hit first.

By default it rolls every 30 seconds (hdfs.rollInterval), at 1 KB
(hdfs.rollSize), or after 10 events (hdfs.rollCount), so in your case I'd
guess the 10-event count limit is producing the 766-byte files.
Change the properties: hdfs.rollInterval, hdfs.rollSize and hdfs.rollCount
(setting a property to 0 disables that roll trigger).
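
For example, a sketch of the relevant sink settings; the agent name
(agent1) and sink name (hdfsSink) are placeholders, so adjust them to
match your own conf file:

```
# Roll at most every 5 minutes or every ~128 MB, whichever comes first.
agent1.sinks.hdfsSink.hdfs.rollInterval = 300       # seconds; 0 disables time-based rolling
agent1.sinks.hdfsSink.hdfs.rollSize = 134217728     # bytes; 0 disables size-based rolling
agent1.sinks.hdfsSink.hdfs.rollCount = 0            # 0 disables event-count rolling
```

With event-count rolling disabled, small inputs like your 11.4 KB test
file should end up in a single output file once the interval expires.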

See https://flume.apache.org/FlumeUserGuide.html#hdfs-sink for the full
list of sink properties.

Regards,
Gonzalo


On 3 October 2015 at 13:11, Shiva Ram <[email protected]> wrote:

> My flume agent conf. file.
>
> *How to increase the output file size? Thanks.*
>
> *Thanks & Regards,*
>
> *Shiva Ram*
> *Website: http://datamaking.com <http://datamaking.com>Facebook Page:
> www.facebook.com/datamaking <http://www.facebook.com/datamaking>*
>
> On Sat, Oct 3, 2015 at 4:58 PM, Shiva Ram <[email protected]>
> wrote:
>
>> Hi
>>
>> I am using spooldir source, memory channel, hdfs sink to collect log
>> files and store into HDFS.
>>
>> When I run the flume agent, it is creating very very small files with
>> size 766 bytes.
>>
>> Input file: test.log [11.4 KB]
>> Output files: sales_web_log.1443871052640.log, etc.[all are very very
>> small files with size 766 bytes]
>>
>> *How to increase the output file size?*
>>
>> *Thanks & Regards,*
>>
>> *Shiva Ram*
>> *Website: http://datamaking.com <http://datamaking.com>Facebook Page:
>> www.facebook.com/datamaking <http://www.facebook.com/datamaking>*
>>
>
>
