I discovered the issue is likely not ActiveMQ, but my code.  Essentially, 
the messages are fine and the journaling cleans up the old files --UNLESS-- I 
get a timeout along our Camel routes, so I think I'm not flagging messages 
that time out as consumed (and there is nothing I need to do to recover 
them).

I essentially tried to make our routes "fast fail" if an error occurs along the 
route, so all of my routes (all are InOut) look like:

<camelContext>
    <onException>
        <exception>java.lang.Throwable</exception>
        <handled>
            <constant>true</constant>
        </handled>
        <to uri="bean:fastFailHandler"/>
    </onException>
    <route>
        <from uri="..."/>                      <!-- entry point -->
        <to uri="bean:..."/>
        <to uri="bean:..."/>
        <choice>
            <when>
                <simple>...</simple>           <!-- condition 1 -->
                <to uri="bean:..."/>
            </when>
            <when>
                <simple>...</simple>           <!-- condition 2 -->
                <to uri="activemq:queue:..."/>
            </when>
        </choice>
    </route>
</camelContext>

<camelContext>                                 <!-- in another bundle -->
    <route>
        <from uri="activemq:queue:..."/>
        ...                                    <!-- timeout occurs HERE -->
    </route>
</camelContext>

So when a timeout occurs along the queue (in the second context's route), the 
messages pile up in the ActiveMQ KahaDB files.  Is there an implicit dead 
letter queue created for messages that branch out like this?  I'm assuming 
what is happening is: a message comes in, it goes to the next queue, and the 
reply times out on its way back through the original endpoint (creating a 
branch in the flow); meanwhile, the message in the second context's queue 
keeps working, completes, and has no output vector.

All of the routes I have checked online look like mine, with various tweaks of 
course, but it isn't obvious which tweak says "messages that time out should 
not pile up in KahaDB".

Is there a flag or something I can set that essentially says, "messages that 
have no output vector get flagged as consumed automatically"?
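
For example, I'm imagining something like this on the queue endpoint (a 
sketch only - I'm assuming the Camel ActiveMQ component's requestTimeout 
and timeToLive options are the right knobs; the queue name and the values 
are placeholders):

    <!-- hypothetical values; the queue name is a placeholder -->
    <to uri="activemq:queue:...?requestTimeout=20000&amp;timeToLive=20000"/>

That way a reply that nobody ever consumes would at least expire instead of 
sitting in the journal forever.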



Thanks again!
Zach Calvert




-----Original Message-----
From: Johan Edstrom [mailto:[email protected]] 
Sent: Monday, February 20, 2012 5:54 PM
To: [email protected]
Subject: Re: KahaDB Log Files Growing Unbounded

Okay, do this....
Install the activemq-web-console, look at where you have messages that are not 
consumed.
Unconsumed messages with no expiry cannot be "acked", right?

So they will keep a journal entry.
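
If the producers can't set an expiry themselves, you can force one at the 
broker side - a sketch only, assuming the timeStampingBrokerPlugin (the 
millisecond values are placeholders):

<plugins>
    <!-- stamps an expiration onto messages that arrive without one;
         both values (ms) are placeholders -->
    <timeStampingBrokerPlugin ttlCeiling="300000"
                              zeroExpirationOverride="300000"/>
</plugins>

Once a message can expire, its journal entry can be reclaimed.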
Also, you really should upgrade to 5.5.1.


/je

On Feb 20, 2012, at 4:47 PM, Calvert, Zach (Zach)** CTR ** wrote:

> I simply have
> 
> <persistenceAdapter>
>     <kahaDB cleanupInterval="30000" journalMaxFileLength="32mb"
>             directory="${karaf.data}/activemq/default/kahadb"/>
> </persistenceAdapter>
> 
> as my kahaDB configuration inside the activemq-broker.xml file.  This 
> configuration still allows for indefinite growth.
> 
> The sad thing is that I can fix this simply by shutting down ServiceMix and 
> deleting the data directory, but I'm trying to prevent an interruption of 
> service.
> 
> 
> 
> Zach Calvert
> 
> 
> -----Original Message-----
> From: Calvert, Zach (Zach)** CTR ** [mailto:[email protected]]
> Sent: Monday, February 20, 2012 5:34 PM
> To: [email protected]
> Subject: RE: KahaDB Log Files Growing Unbounded
> 
> Thank you for the reply, Jon.  I added trace logging and see KahaDB logs to 
> the tune of:
> 2012-02-20 17:26:45,707 [eckpoint Worker] DEBUG MessageDatabase - Checkpoint started.
> 2012-02-20 17:26:45,711 [eckpoint Worker] TRACE MessageDatabase - gc candidates after first tx:2, [1]
> 2012-02-20 17:26:45,711 [eckpoint Worker] TRACE MessageDatabase - gc candidates after dest:0:work:inout, [1]
> 2012-02-20 17:26:45,711 [eckpoint Worker] TRACE MessageDatabase - gc candidates after dest:0:processqueue:inout, [1]
> 2012-02-20 17:26:45,711 [eckpoint Worker] TRACE MessageDatabase - gc candidates after dest:0:org.apache.servicemix.jbi.cluster, [1]
> 2012-02-20 17:26:45,711 [eckpoint Worker] TRACE MessageDatabase - gc candidates after dest:0:ActiveMQ.DLQ, []
> 2012-02-20 17:26:45,711 [eckpoint Worker] TRACE MessageDatabase - gc candidates: []
> 2012-02-20 17:26:45,711 [eckpoint Worker] DEBUG MessageDatabase - Checkpoint done.
> 
> Which to me looks like there is not a lot of work hanging around, but there 
> are TONS of files still piling up, at 32 MB of usage each.  I'm using the 
> ActiveMQ 5.4.2 bundles.
> 
> Are there additional settings I should try?  I'm looking for docs/bug 
> reports/anything that can help me figure out how to keep this from growing 
> indefinitely.  What really stinks is that even after a restart, these files 
> persist.
> 
> 
> 
> 
> Thanks,
> Zach Calvert
> 
> 
> 
> -----Original Message-----
> From: Jon Anstey [mailto:[email protected]]
> Sent: Monday, February 20, 2012 1:42 PM
> To: [email protected]
> Subject: Re: KahaDB Log Files Growing Unbounded
> 
> If you just send messages to a queue and do not consume those 
> messages, then they would be kept around. Is this the case? You may 
> want to read this too:
> http://activemq.apache.org/why-do-kahadb-log-files-remain-after-cleanup.html
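> 
> That page also shows enabling trace logging on the KahaDB cleanup worker 
> to see which destination is keeping the journal files alive - a sketch, 
> assuming a log4j-style logging setup:
> 
>     log4j.logger.org.apache.activemq.store.kahadb.MessageDatabase=TRACE
> 
> Each checkpoint run will then log the gc candidates per destination.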
> 
> Cheers,
> Jon
> 
> On Mon, Feb 20, 2012 at 12:41 PM, Calvert, Zach (Zach)** CTR ** < 
> [email protected]> wrote:
> 
>> I am running some testing and discovered that our KahaDB (whose 
>> default configuration was left unchanged from the ServiceMix install) 
>> has log files growing unbounded:
>> ...
>> db-1099.log  db-507.log   db-627.log  db-747.log  db-867.log  db-987.log
>> db-1100.log  db-508.log   db-628.log  db-748.log  db-868.log  db-988.log
>> db-1101.log  db-509.log   db-629.log  db-749.log  db-869.log  db-989.log
>> ...
>> The configuration defaults, according to 
>> http://activemq.apache.org/kahadb.html, allow the files to grow up to 
>> 32 MB each and run a cleanup every 30000 ms.  However, the log files 
>> are in the thousands and continue to grow, and each of these files is 
>> 33 MB.
>> 
>> What is the configuration change needed to force KahaDB to clean up 
>> the log files?  According to the defaults, it looks like this should 
>> already be happening.  What am I doing wrong?
>> 
>> 
>> 
>> 
>> 
>> Thanks,
>> Zach Calvert
>> 
> 
> 
> 
> --
> Cheers,
> Jon
> ---------------
> FuseSource
> Email: [email protected]
> Web: fusesource.com
> Twitter: jon_anstey
> Blog: http://janstey.blogspot.com
> Author of Camel in Action: http://manning.com/ibsen
