Huge tlogs seem to be a common problem. Should we make Solr flush the tlog 
automatically once it exceeds some file size? That could be made configurable 
on the <updateLog> tag.
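
Note that a size-based flush like this does not exist yet; today the usual way to keep the tlog small is frequent hard commits with openSearcher=false, which is already configurable in solrconfig.xml. A sketch (times/values are illustrative, not recommendations):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
  <!-- Hard commit every 60s (or 10k docs): truncates the tlog without
       opening a new searcher, so caches are not invalidated. -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <maxDocs>10000</maxDocs>
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>
```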

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

On 23 May 2013, at 14:03, Erick Erickson <erickerick...@gmail.com> wrote:

> Tangential to the issue you raise: this is a huge tlog. It indicates
> that you aren't doing a hard commit (openSearcher=false) very often. That
> operation truncates your tlog, which should speed up recovery/startup.
> You're also chewing up some memory with a tlog that size, since a pointer
> into the tlog is kept for each document.
> 
> This comment doesn't address your question about the change to
> ZkController; I'll leave that to someone who knows the code.
> 
> Best
> Erick
> 
> On Thu, May 23, 2013 at 3:14 AM, AlexeyK <lex.kudi...@gmail.com> wrote:
>> A small correction: it's not an endless loop, but painfully slow
>> processing, which involves running a delete query and then an insertion. Each
>> document from the tlog takes tens of seconds to process (more than 100 times
>> slower than during normal indexing).
>> 
>> 
>> 
>> --
>> View this message in context: 
>> http://lucene.472066.n3.nabble.com/Solr-4-3-node-is-seen-as-active-in-Zk-while-in-recovery-mode-endless-recovery-tp4065549p4065551.html
>> Sent from the Solr - User mailing list archive at Nabble.com.
