[ https://issues.apache.org/jira/browse/FLUME-717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17614606#comment-17614606 ]

Ralph Goers commented on FLUME-717:
-----------------------------------

This is reported against Flume OG and appears to have been cloned.

> WAL data grows forever even though data is delivered in E2E
> -----------------------------------------------------------
>
>                 Key: FLUME-717
>                 URL: https://issues.apache.org/jira/browse/FLUME-717
>             Project: Flume
>          Issue Type: Bug
>          Components: Master, Node, Sinks+Sources
>    Affects Versions: 0.9.5
>            Reporter: Eric Sammer
>            Priority: Blocker
>
> With a heavy enough write load, it appears that the E2E agent WAL will get 
> into a state where data just gets constantly shuffled between the various 
> directories / states (e.g. writing, logged, sending, sent). When this 
> happens, the WAL directories grow indefinitely until the disk is exhausted, 
> regardless of how much data originally triggered the problem.
> To reproduce:
> * Use the supplied config (or something similar).
> * Write to the agent source at a rate of > 1MB/s for a short burst (using 
> something like the provided generator below).
> Note that data is delivered to the collectorSink, but the agent's WAL 
> manager keeps accumulating data on disk (a quick disk-usage check is 
> sketched after the generator below).
> The config:
> {code}
> n1 : execStream("tail -F datafile") | agentE2ESink("host", 12345);
> n2 : collectorSource(12345) | collectorSink("file://...", "n2-");
> {code}
> Generator:
> {code}
> perl -e 'while (1) { print $i++, "\n"; }' >> datafile
> {code}
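> To watch the growth, something like this can be used; the path is a 
> placeholder, so point it at wherever the agent's WAL log directory 
> actually lives:
> {code}
> # Print the size of each per-state WAL directory every 5 seconds.
> # /path/to/agent-logdir is a placeholder for the node's configured WAL directory.
> while true; do
>   find /path/to/agent-logdir -type d \
>     \( -name writing -o -name logged -o -name sending -o -name sent \) \
>     -exec du -sh {} + 2>/dev/null
>   echo '---'
>   sleep 5
> done
> {code}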
> This looks and smells just like FLUME-430. I haven't yet examined the WAL or 
> destination data for duplicates / missing events.
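> A rough way to do that check, assuming the collector output is plain text 
> with one counter value per line (run from the collector's output directory; 
> the file glob is a guess based on the "n2-" prefix above):
> {code}
> # Duplicated counter values (any output means duplicate events were delivered).
> cat n2-* | grep -E '^[0-9]+$' | sort -n | uniq -d | head
> # Gaps in the counter sequence (any output means events went missing).
> cat n2-* | grep -E '^[0-9]+$' | sort -n | \
>   awk 'NR > 1 && $1 != prev + 1 { print "gap after " prev } { prev = $1 }' | head
> {code}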


