Do you really need the *:* stuff in the date range subqueries? That may add to the execution time.
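As a sketch of what dropping the pure-negative clauses could look like: the two `(*:* -field:[* TO *])` checks can be replaced by a single boolean field, as suggested below in the thread. The field name `has_validity` and the helper `validity_filter` are hypothetical, not from the original queries:

```python
# Hypothetical sketch: build the validity filter without the pure-negative
# (*:* -field:[* TO *]) clauses, assuming a client-populated boolean field
# `has_validity` that is true whenever valid_from/valid_till are set.
def validity_filter(now: str) -> str:
    """Return a Solr filter clause for documents valid at timestamp `now`."""
    in_range = f"(valid_from:[* TO {now}] AND valid_till:[{now} TO *])"
    # One cheap term query replaces the two *:* -field:[* TO *] exclusions:
    no_range = "has_validity:false"
    return f"({in_range} OR {no_range})"

print(validity_filter("2010-04-29T10:34:12Z"))
```

This keeps the same semantics (documents either inside the range, or with no range at all) while avoiding the negated open-ended range scans.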
On Thu, Apr 29, 2010 at 9:52 AM, Erick Erickson <erickerick...@gmail.com> wrote:
> Hmmm, what does the rest of your query look like? And does adding
> &debugQuery=on show anything interesting?
>
> Best
> Erick
>
> On Thu, Apr 29, 2010 at 6:54 AM, Jan Simon Winkelmann <
> winkelm...@newsfactory.de> wrote:
>
>> > > ((valid_from:[* TO 2010-04-29T10:34:12Z]) AND
>> > > (valid_till:[2010-04-29T10:34:12Z TO *])) OR ((*:*
>> > > -valid_from:[* TO *]) AND (*:* -valid_till:[* TO *])))
>> > >
>> > > I use the empty checks for datasets which do not have a
>> > > valid from/till range.
>> > >
>> > > Is there any way to get this any faster?
>> >
>> > I can suggest two things.
>> >
>> > 1-) valid_till:[* TO *] and valid_from:[* TO *] type queries can be
>> > performance killers. You can create a new boolean field (populated via
>> > conditional copy or populated client-side) that holds whether
>> > valid_from exists or not, so that valid_till:[* TO *] can be
>> > rewritten as valid_till_bool:true.
>>
>> That may be an idea; however, I checked what happens when I simply leave
>> them out. It does affect the performance, but the query still takes somewhere
>> around 1 second.
>>
>> > 2-) If you are embedding these queries into the q parameter, you can move
>> > your clauses into (filter query) fq parameters so that they are cached.
>>
>> The problem here is that the timestamp itself changes quite a bit and
>> hence cannot be properly cached. It could be cached for a few seconds, but
>> occasional response times of more than a second are still unacceptable for
>> us. We need a solution that responds quickly ALL the time, not just most of
>> the time.
>>
>> Thanks for your ideas though :)
>>
>> regards,
>> Jan-Simon
>>

--
Lance Norskog
goks...@gmail.com
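A note on the fq-caching objection above: the filter cache only misses because every request carries a fresh timestamp. If the application can tolerate minute-level granularity, rounding the timestamp down client-side makes the fq string identical for all requests within the same minute, so the cached filter is reused. This is a sketch under that assumption; the helper name `rounded_now` is made up here (Solr's date math, e.g. `NOW/MINUTE`, can achieve a similar rounding server-side):

```python
# Sketch: round the current UTC time down to the minute so that the
# resulting fq parameter is stable within each minute and Solr's
# filter cache can actually serve repeat requests.
from datetime import datetime, timezone

def rounded_now() -> str:
    """Current UTC time, truncated to the minute, in Solr date format."""
    now = datetime.now(timezone.utc)
    return now.replace(second=0, microsecond=0).strftime("%Y-%m-%dT%H:%M:%SZ")

ts = rounded_now()
fq = f"valid_from:[* TO {ts}] AND valid_till:[{ts} TO *]"
print(fq)
```

The trade-off is that documents whose validity starts or ends mid-minute appear or disappear up to a minute late, which may or may not be acceptable for the "fast ALL the time" requirement stated above.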