Well, it's quite hard to debug because the values listed on the stats page in 
the fieldCache section don't make much sense. Reducing precision with 
NOW/HOUR, however, does seem to make a difference.
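For reference, the boost function has the same shape as the wiki's date-boosting 
example; the field name below is just a placeholder, not our actual schema:

  recip(ms(NOW,mydatefield),3.16e-11,1,1)        <- millisecond precision, NOW resolves
                                                    to a new value on every request
  recip(ms(NOW/HOUR,mydatefield),3.16e-11,1,1)   <- NOW rounded down to the hour, so the
                                                    same value is reused for a full hour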

It is hard (or impossible) to reproduce this in a test setup with the same 
index but without continuous updates and without stress tests. Firing manual 
queries with different values for the bf parameter doesn't show any difference 
in the values listed on the stats page.

Would anyone care to provide an explanation?

Thanks

On Wednesday 09 March 2011 22:21:19 Markus Jelsma wrote:
> Hi,
> 
> In one of the environments I'm working on (4 Solr 1.4.1 nodes with
> replication, 3+ million docs, ~5.5GB index size, high commit rate
> (~1-2 min), high query rate (~50 q/s), high number of updates
> (~1000 docs/commit)), the nodes continuously run out of memory.
> 
> During development we frequently ran extensive stress tests, and after
> tuning the JVM and Solr settings everything ran fine. A while ago I added
> the DisMax bq parameter for boosting recent documents: documents older than
> a day receive 50% less boost, similar to the wiki example but with a much
> steeper slope. For clarity, I'm not using the ordinal function but the
> reciprocal version in the bq parameter, which the wiki warns against when
> using Solr 1.4.1.
> 
> This week we started the stress tests again and the nodes are going down
> again. I've reconfigured the nodes to have different settings for the bq
> parameter (or no bq parameter at all).
> 
> It seems the bq parameter is the cause of the misery.
> 
> Issue SOLR-1111 keeps popping up but it has not been resolved. Is there
> anyone who can confirm that one of its patches fixes this issue, before I
> waste hours of work finding out it doesn't? ;)
> 
> Am I correct in assuming that Lucene FieldCache entries are added for
> each unique function query? In that case, every query is a unique cache
> entry because it operates on milliseconds. If all else fails, I might be
> able to reduce precision by operating on minutes or something even coarser
> instead of milliseconds. However, I cannot use the other nice math
> functions inside ms(), so that might make things difficult.
> 
> However, date math seems to be available (NOW/HOUR), so I assume it would
> also work for <SOME_DATE_FIELD>/HOUR. That way I just might prevent
> useless cache entries.
> 
> My apologies for this long mail, but it may prove useful for other users;
> hopefully we find the solution and can update the wiki to add this warning.
> 
> Cheers,

-- 
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350
