> > ((valid_from:[* TO 2010-04-29T10:34:12Z]) AND
> > (valid_till:[2010-04-29T10:34:12Z TO *])) OR ((*:*
> > -valid_from:[* TO *]) AND (*:* -valid_till:[* TO *]))
> >
> > I use the empty checks for datasets which do not have a
> > valid from/till range.
> >
> >
> > Is there any way to get this any faster?
> 
> I can suggest two things.
> 
> 1-) valid_till:[* TO *] and valid_from:[* TO *] type queries can be
> performance killers. You can create a new boolean field (populated via
> conditional copy or populated client side) that holds the information
> whether valid_from exists or not, so that valid_till:[* TO *] can be
> rewritten as valid_till_bool:true.

That may be an idea; however, I checked what happens when I simply leave them 
out. It does affect performance, but the query still takes somewhere around 
one second.
 
> 2-) If you are embedding these queries into the q parameter, you can move
> your clauses into filter query (fq) parameters so that they are cached.

The problem here is that the timestamp itself changes quite frequently and 
hence cannot be cached properly. It could help for a few seconds at a time, but 
occasional response times of more than a second are still unacceptable for us. 
We need a solution that responds quickly ALL the time, not just most of the time.
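
To make that concrete, moving the clause into fq would look roughly like this 
(the q value is only a placeholder for our actual search terms):

  q=<actual search terms>
  fq=((valid_from:[* TO 2010-04-29T10:34:12Z] AND
       valid_till:[2010-04-29T10:34:12Z TO *])
      OR ((*:* -valid_from:[* TO *]) AND (*:* -valid_till:[* TO *])))

Since the fq string contains the current timestamp, practically every request 
produces a new filter entry, so the filter cache hardly ever gets a hit.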

Thanks for your ideas though :)

regards,
Jan-Simon
