how to exactly query in the multitype

2009-02-06 Thread fei dong
I am using the text field type in schema.xml, which provides basic text
search for English text. But it has a surprise: the actual text given to
this field is not indexed as-is, and therefore searching for the raw text
may not work. If you search for To Be Or Not To Be or s.h.e. in a text
field, none of these words will find this document. If I query K B (an
artist name), the result only matches K, which is not what I expect.

It would be better if retrieval could sometimes omit the stop words and
sometimes keep them. That means indexing not only the text after removing
stop words, but also the raw text. How can I support that requirement?
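A common way to support this (a sketch; the field and type names below are illustrative, not from the original post) is to index the same content through two analyzers, one with a StopFilter and one without, wired together with a copyField:

```xml
<!-- "text" strips stop words; "text_nostop" would be an identical
     fieldType with the StopFilterFactory line removed -->
<field name="title"     type="text"        indexed="true" stored="true"/>
<field name="title_raw" type="text_nostop" indexed="true" stored="false"/>

<copyField source="title" dest="title_raw"/>
```

A query can then hit title for stop-word-free matching and title_raw when the exact words ("To Be Or Not To Be") matter.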


Need help with DictionaryCompoundWordTokenFilterFactory

2009-02-06 Thread Kraus, Ralf | pixelhouse GmbH

Hi,

Now I ran into another problem using
solr.DictionaryCompoundWordTokenFilterFactory :-(
If I search for the German word Spargelcremesuppe, which contains
Spargel, Creme and Suppe, Solr will find way too many results.
That's because Solr finds EVERY entry containing any one of the three
words :-(


Here is my schema.xml

   <fieldType name="text_text" class="solr.TextField" positionIncrementGap="100">
     <analyzer>
       <tokenizer class="solr.WhitespaceTokenizerFactory"/>
       <filter class="solr.DictionaryCompoundWordTokenFilterFactory"
               dictionary="dictionary.txt"
               minWordSize="5"
               minSubwordSize="2"
               maxSubwordSize="15"
               onlyLongestMatch="true"/>
       <filter class="solr.SynonymFilterFactory"
               synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
       <filter class="solr.StopFilterFactory" ignoreCase="true"
               words="stopwords.txt"/>
       <filter class="solr.LowerCaseFilterFactory"/>
       <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
       <filter class="solr.SnowballPorterFilterFactory"
               language="German"/>
     </analyzer>
   </fieldType>

Any help ?

Greets,

Ralf Kraus
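A possible fix, as an untested sketch (the synonym/stop/stemming filters are dropped here for brevity): decompound only at index time, and leave the query analyzer without the compound filter. A query for Spargelcremesuppe is then left as the whole word and matches only documents that actually contain the compound, while a query for Suppe still finds them via the index-time subwords:

```xml
<fieldType name="text_text" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.DictionaryCompoundWordTokenFilterFactory"
            dictionary="dictionary.txt" minWordSize="5"
            minSubwordSize="2" maxSubwordSize="15" onlyLongestMatch="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Whether this ranking trade-off fits depends on your data; it trades recall on the compound query for precision.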


Fwd: Separate error logs

2009-02-06 Thread James Brady
OK, so java.util.logging has no way of sending error messages to a separate
log without writing your own Handler/Filter code.
If we just skip over the absurdity of that, and the rage it makes me feel,
what are my options here? What I'm looking for is for all records to go to
one file, and records of ERROR level and above to go to a separate log.

Can I write my own Handlers/Filters, drop them on Jetty's classpath and
refer to them in my logging.properties? I.e. without rebuilding the whole
WAR, with my files added?

Is Solr 1.4 (and its nice SLF4J logging) in a state ready for intensive
production usage?

Thanks!
James

-- Forwarded message --
From: James Brady james.colin.br...@gmail.com
Date: 2009/1/30
Subject: Re: Separate error logs
To: solr-user@lucene.apache.org


Oh... I should really have found that myself :/
Thank you!

2009/1/30 Ryan McKinley ryan...@gmail.com

check:
 http://wiki.apache.org/solr/SolrLogging

 You configure whatever flavor of logger to write errors to a separate log



 On Jan 30, 2009, at 4:36 PM, James Brady wrote:

  Hi all, what's the best way for me to split Solr/Lucene error messages off
 to a separate log?

 Thanks
 James
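For reference, java.util.logging can in fact route errors to a separate file with a small Filter. A minimal, self-contained sketch (class and file names here are made up; this is not Solr's actual logging setup) — j.u.l. has no ERROR level, so SEVERE stands in for it:

```java
import java.io.IOException;
import java.util.logging.*;

public class SplitLogs {
    public static Logger configure(String allFile, String errFile) throws IOException {
        Logger log = Logger.getLogger("solr.demo");
        log.setUseParentHandlers(false);          // don't also write to the console
        FileHandler all = new FileHandler(allFile);
        all.setFormatter(new SimpleFormatter());
        all.setLevel(Level.ALL);                  // everything goes to the main log
        FileHandler errors = new FileHandler(errFile);
        errors.setFormatter(new SimpleFormatter());
        // j.u.l. has no ERROR level; SEVERE is its closest equivalent
        errors.setFilter(r -> r.getLevel().intValue() >= Level.SEVERE.intValue());
        log.addHandler(all);
        log.addHandler(errors);
        return log;
    }

    public static void main(String[] args) throws IOException {
        Logger log = configure("all.log", "errors.log");
        log.info("routine message");    // all.log only
        log.severe("something broke");  // both files
        for (Handler h : log.getHandlers()) h.close();
    }
}
```

Compiled into a jar on Jetty's classpath, a named Filter class could also be referenced from logging.properties via the handler's .filter property, avoiding a WAR rebuild.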





Re: [ANN] Lucid Imagination

2009-02-06 Thread Renaud Delbru

Hi,

I don't find any documentation about Solr Gaze. How can I use it ?

Thanks,
Regards
--
Renaud Delbru

Grant Ingersoll wrote:

Hi Lucene and Solr users,

As some of you may know, Yonik, Erik, Sami, Mark and I teamed up with
Marc Krellenstein to create a company to provide commercial
support (with SLAs), training, value-add components and services to
users of Lucene and Solr.  We have been relatively quiet up until now 
as we prepare our

offerings, but I am now pleased to announce the official launch of
Lucid Imagination.  You can find us at http://www.lucidimagination.com/
and learn more about us at http://www.lucidimagination.com/About/.

We have also launched a beta search site dedicated to searching all
things in the Lucene ecosystem: Lucene, Solr, Tika, Mahout, Nutch,
Droids, etc.  It's powered, of course, by Lucene via Solr (we'll
provide details in a separate message later about our setup.)  You can
search the Lucene family of websites, wikis, mail archives and JIRA 
issues all in one place.

To try it out, browse to http://www.lucidimagination.com/search/.

Any and all feedback is welcome at f...@lucidimagination.com.

Thanks,
Grant

--
Grant Ingersoll
http://www.lucidimagination.com/


Re: Fwd: Separate error logs

2009-02-06 Thread Marc Sturlese

Hey James,
Your log use case reminds me of mine... I wanted to use different log files
for different cores... for the moment there's no way to separate logs into
different files (as far as I know). I sorted it out using log4j. What I do is
send the log data to the Linux syslog (using the syslog appender). Once the data
is there, I just coded some scripts to send it wherever I want. You could
send your data to syslog and parse that file... and depending on whether you find
the level TRACE, ERROR, DEBUG... in the message, just send those lines to the
files you choose.
This worked for me (not 100%, because there are log messages without the name of
the core) to separate logs by core name without doing any
hack. In your case it would work even better, because you always have the log
level in the message.
If someone knows a better way to do this, please let me know...
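The syslog route Marc describes can be configured in log4j 1.x roughly like this (a sketch; the host and facility values are placeholders):

```properties
log4j.rootLogger=INFO, SYSLOG
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.syslogHost=localhost
log4j.appender.SYSLOG.facility=LOCAL1
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=[%p] %c: %m%n
```

The [%p] level marker in each line is what the downstream scripts would match on when splitting the stream into files.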



James Brady-3 wrote:
 
 OK, so java.util.logging has no way of sending error messages to a
 separate
 log without writing your own Handler/Filter code.
 If we just skip over the absurdity of that, and the rage it makes me feel,
 what are my options here? What I'm looking for is for all records to go to
 one file, and records of a ERROR level and above to go to a separate log.
 
 Can I write my own Handlers/Filters, drop them on Jetty's classpath and
 refer to them in my logging.properties? I.e. without rebuilding the whole
 WAR, with my files added?
 
 Is Solr 1.4 (and its nice SLF4J logging) in a state ready for intensive
 production usage?
 
 Thanks!
 James
 
 -- Forwarded message --
 From: James Brady james.colin.br...@gmail.com
 Date: 2009/1/30
 Subject: Re: Separate error logs
 To: solr-user@lucene.apache.org
 
 
 Oh... I should really have found that myself :/
 Thank you!
 
 2009/1/30 Ryan McKinley ryan...@gmail.com
 
 check:
 http://wiki.apache.org/solr/SolrLogging

 You configure whatever flavor logger to write error to a separate log



 On Jan 30, 2009, at 4:36 PM, James Brady wrote:

  Hi all,What's the best way for me to split Solr/Lucene error message off
 to
 a separate log?

 Thanks
 James



 
 

-- 
View this message in context: 
http://www.nabble.com/Separate-error-logs-tp21756080p21876778.html
Sent from the Solr - User mailing list archive at Nabble.com.



Realtime Searching..

2009-02-06 Thread Michael Austin
I need to find a solution for our current social application. It's low
traffic now because we are early on.. However, I'm expecting and want to be
prepared to grow.  We have messages of different types that are
aggregated into one stream. Each of these message types has very different
data, so our main queries have a few unions and many joins.  I know that
Solr would work great for searching, but we need a realtime system
(twitter-like) to view user updates.  I'm not interested in a few minutes'
delay; I need something that is fast to update and searchable, with n
columns per record/document. Can Solr do this? What is Ocean?

Thanks


Issuing just a spell check query

2009-02-06 Thread Rupert Fiasco
The docs for the SpellCheckComponent say:

"The SpellCheckComponent is designed to provide inline spell checking
of queries without having to issue separate requests."

I would like to issue just a spell check query; I don't care about it
being inline and piggy-backing on a normal search query.

How would I achieve this?

I tried monkeying with making a new requestHandler, but using
class="solr.SearchHandler" always tries to do a normal search.

I succeeded in adding inline spell checking to the default request
handler by *adding*

  <arr name="last-components">
    <str>spellcheck</str>
  </arr>

to its requestHandler config - I would like to *remove* the default
search component - maybe by making a new request handler which just
does spell checking?

Is something like this possible?



  <requestHandler name="/spellcheck" class="solr.SearchHandler">
    <!-- default values for query parameters -->
    <lst name="defaults">
      <str name="spellcheck.count">5</str>
    </lst>

    <arr name="first-components">
      <str>spellcheck</str>
    </arr>

    <!-- remove default search component -->
    <arr name="remove-components">
      <str>default</str>
    </arr>

  </requestHandler>





Now, I can sort of achieve what I want with a normal search by
using a dummy value for my q parameter (for me 00
works): I get no search docs back, but I do get the spell
suggestions I want, driven by the spellcheck.q parameter.

But this seems very hacky, and Solr is still having to run a search
against my dummy value.

A roundabout way of asking: how can I fire off *just* a spell check query?

Thanks in advance
-Rupert
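A sketch of a dedicated handler that comes close to spell-check-only (untested; rows=0 suppresses the returned documents, though the underlying query still executes):

```xml
<requestHandler name="/spellcheck" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="rows">0</str>
    <str name="spellcheck">true</str>
    <str name="spellcheck.count">5</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```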


Re: Realtime Searching..

2009-02-06 Thread Otis Gospodnetic
Michael,

The short answer is that Solr is not there yet, but will be.  Expect to see 
real-time search in Lucene first, then in Solr.
We have a case study about real-time search with Lucene in the upcoming Lucene 
in Action 2, but a more tightly integrated real-time search will be added to 
Lucene down the road (and then Solr).

In the meantime you can use the trick of one large and less frequently updated 
core and one small and more frequently updated core + distributed search across 
them.

Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
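The two-core trick Otis describes can be combined with Solr's distributed search by listing both cores in the shards parameter, roughly like this (host and core names are made up):

```
http://localhost:8983/solr/big/select?q=foo&shards=localhost:8983/solr/big,localhost:8983/solr/fresh
```

A single request then merges results from the large, rarely updated core and the small, frequently updated one.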





From: Michael Austin mausti...@gmail.com
To: solr-user@lucene.apache.org
Sent: Friday, February 6, 2009 1:02:43 PM
Subject: Realtime Searching..

I need to find a solution for our current social application. It's low
traffic now because we are early on.. However I'm expecting and want to be
prepared to grow.  We have messages of different types that are
aggregated into one stream. Each of these message types have much different
data so that our main queries have a few unions and many joins.  I know that
Solr would work great for searching but we need a realtime system
(twitter-like) to view user updates.  I'm not interested in a few minutes
delay; I need something that will be fast updating and searchable and have n
columns per record/document. Can Solr do this? What is Ocean?

Thanks


Re: Issuing just a spell check query

2009-02-06 Thread Otis Gospodnetic
Rupert,

You could use the SpellCheck*Handler* to achieve this.


Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch





From: Rupert Fiasco rufia...@gmail.com
To: solr-user@lucene.apache.org
Sent: Friday, February 6, 2009 2:47:19 PM
Subject: Issuing just a spell check query

The docs for the SpellCheckComponent say

The SpellCheckComponent is designed to provide inline spell checking
of queries without having to issue separate requests.

I would like to issue just a spell check query, I dont care about it
being inline and piggy-backing off a normal search query.

How would I achieve this?

I tried monkeying with making a new requestHandler but using class =
solr.SearchHandler always tries to do a normal search.

I succeeded in adding inline spell checking to the default request
handler by *adding*

  <arr name="last-components">
    <str>spellcheck</str>
  </arr>

to its requestHandler config - I would like to *remove* the default
search component - maybe by making a new request handler which just
does spell checking?

Is something like this possible?



  <requestHandler name="/spellcheck" class="solr.SearchHandler">
    <!-- default values for query parameters -->
    <lst name="defaults">
      <str name="spellcheck.count">5</str>
    </lst>

    <arr name="first-components">
      <str>spellcheck</str>
    </arr>

    <!-- remove default search component -->
    <arr name="remove-components">
      <str>default</str>
    </arr>

  </requestHandler>





Now, I can sort of achieve what I want by in fact a normal search but
then using a dummy value for my q parameter (for me 00
works) and then I get no search docs back, but I do get the spell
suggestions I want, driven by the spellcheck.q parameter.

But this seems very hacky and Solr is still having to run a search
against my dummy value.

A roundabout way of asking: how can I fire off *just* a spell check query?

Thanks in advance
-Rupert


Re: exceeded limit of maxWarmingSearchers

2009-02-06 Thread Otis Gospodnetic
I'd say: Make sure you don't commit more frequently than the time it takes for 
your searcher to warm up, or else you risk searcher overlap and pile-up.


Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch





From: Jon Drukman jdruk...@gmail.com
To: solr-user@lucene.apache.org
Sent: Thursday, February 5, 2009 11:36:13 AM
Subject: Re: exceeded limit of maxWarmingSearchers

Otis Gospodnetic wrote:
 Jon,
 
 If you can, don't commit on every update and that should help or fully solve 
 your problem.

Is there any sort of heuristic or formula I can apply that can tell me when to 
commit?  Put it in a cron job and fire it once per hour?

There are certain updates that are critical - we store privacy settings on 
certain data in the doc.  If the user says that document 10 is private, we need 
to have the update reflected immediately.  Is there any way to have Solr block 
everything until an update is committed?

-jsd-
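For the heuristic question, the autoCommit block in solrconfig.xml (if your Solr version supports it) bounds commit frequency declaratively; the values here are illustrative:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>10000</maxDocs>  <!-- commit after this many pending docs -->
    <maxTime>60000</maxTime>  <!-- ...or after this many milliseconds -->
  </autoCommit>
</updateHandler>
```

For the privacy-critical updates, an explicit commit with waitSearcher=true (the default) does not return until the new searcher is registered, which gives block-until-visible behavior for that one caller.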

Re: Issuing just a spell check query

2009-02-06 Thread Rupert Fiasco
But it's deprecated (??)

-Rupert

On Fri, Feb 6, 2009 at 11:51 AM, Otis Gospodnetic
otis_gospodne...@yahoo.com wrote:
 Rupert,

 You could use the SpellCheck*Handler* to achieve this.


 Otis
 --
 Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch




 
 From: Rupert Fiasco rufia...@gmail.com
 To: solr-user@lucene.apache.org
 Sent: Friday, February 6, 2009 2:47:19 PM
 Subject: Issuing just a spell check query

 The docs for the SpellCheckComponent say

 The SpellCheckComponent is designed to provide inline spell checking
 of queries without having to issue separate requests.

 I would like to issue just a spell check query, I dont care about it
 being inline and piggy-backing off a normal search query.

 How would I achieve this?

 I tried monkeying with making a new requestHandler but using class =
 solr.SearchHandler always tries to do a normal search.

 I succeeded in adding inline spell checking to the default request
 handler by *adding*

  <arr name="last-components">
    <str>spellcheck</str>
  </arr>

 to its requestHandler config - I would like to *remove* the default
 search component - maybe by making a new request handler which just
 does spell checking?

 Is something like this possible?



  <requestHandler name="/spellcheck" class="solr.SearchHandler">
    <!-- default values for query parameters -->
    <lst name="defaults">
      <str name="spellcheck.count">5</str>
    </lst>

    <arr name="first-components">
      <str>spellcheck</str>
    </arr>

    <!-- remove default search component -->
    <arr name="remove-components">
      <str>default</str>
    </arr>

  </requestHandler>





 Now, I can sort of achieve what I want by in fact a normal search but
 then using a dummy value for my q parameter (for me 00
 works) and then I get no search docs back, but I do get the spell
 suggestions I want, driven by the spellcheck.q parameter.

 But this seems very hacky and Solr is still having to run a search
 against my dummy value.

 A roundabout way of asking: how can I fire off *just* a spell check query?

 Thanks in advance
 -Rupert



Re: Realtime Searching..

2009-02-06 Thread Michael Austin
Thanks Otis,

Is it possible to get my hands on this ability in Lucene by applying patches
before it is released to the public? (sorry to ask) - How close is it in the
source code if I didn't care about the documentation/packaging/etc.?  So
from what it sounds like, this would be a realtime store (with great
search) that could be used instead of a database, or in conjunction with one? Is
it wrong to say it's similar to Google's Bigtable, keeping realtime
data in a non-relational way but with better search?

Thanks

On Fri, Feb 6, 2009 at 1:50 PM, Otis Gospodnetic otis_gospodne...@yahoo.com
 wrote:

 Michael,

 The short answer is that Solr is not there yet, but will be.  Expect to see
 real-time search in Lucene first, then in Solr.
 We have a case study about real-time search with Lucene in the upcoming
 Lucene in Action 2, but a more tightly integrated real-time search will be
 added to Lucene down the road (and then Solr).

 In the mean time you can use the trick of one large and less frequently
 updated core and one small and more frequently updated core + distributed
 search across them.

 Otis
 --
 Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch




 
 From: Michael Austin mausti...@gmail.com
 To: solr-user@lucene.apache.org
 Sent: Friday, February 6, 2009 1:02:43 PM
 Subject: Realtime Searching..

 I need to find a solution for our current social application. It's low
 traffic now because we are early on.. However I'm expecting and want to be
 prepared to grow.  We have messages of different types that are
 aggregated into one stream. Each of these message types have much different
 data so that our main queries have a few unions and many joins.  I know
 that
 Solr would work great for searching but we need a realtime system
 (twitter-like) to view user updates.  I'm not interested in a few minutes
 delay; I need something that will be fast updating and searchable and have
 n
 columns per record/document. Can Solr do this? What is Ocean?

 Thanks



Re: Issuing just a spell check query

2009-02-06 Thread Shalin Shekhar Mangar
On Sat, Feb 7, 2009 at 1:17 AM, Rupert Fiasco rufia...@gmail.com wrote:


 I would like to issue just a spell check query, I dont care about it
 being inline and piggy-backing off a normal search query.

 How would I achieve this?

 Now, I can sort of achieve what I want by in fact a normal search but
 then using a dummy value for my q parameter (for me 00
 works) and then I get no search docs back, but I do get the spell
 suggestions I want, driven by the spellcheck.q parameter.

 But this seems very hacky and Solr is still having to run a search
 against my dummy value.

 A roundabout way of asking: how can I fire off *just* a spell check query?


I don't think it is possible with SpellCheckComponent. But note that only
the first search pays the price; if you don't change (q, sort, rows, count),
subsequent queries hit the cache.

So I'd suggest you search for a dummy value (or *:*), set
fl=your_unique_key (for a minimal payload), and ignore the documents
returned.
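Concretely, the suggestion might look like this (parameter values are illustrative; id stands in for your unique key field):

```
http://localhost:8983/solr/select?q=*:*&fl=id&spellcheck=true&spellcheck.q=misspeled&spellcheck.count=5
```

After the first request warms the cache for q=*:*, subsequent calls only pay for the spell check itself.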

-- 
Regards,
Shalin Shekhar Mangar.


Re: Realtime Searching..

2009-02-06 Thread Michael Austin
Just to back up and think about whether solr/lucene realtime updating is what I
want to begin with..

Would this be something that a twitter-type system might use to be more
scalable and fast? Let's just say that I have a site with as much message
traffic as Twitter, and I want to be able to update and search
fast/in realtime.  Would this be the path you would initially send me down?

For example, do you know of a system out there that does memcached-type fast
caching and lookup, but has the ability to look items up with sorting and
filtering?

Thanks


Searching on field A gives spurious highlights in field B

2009-02-06 Thread Jeffrey Baker
Hello all.  First post to the list.

I noticed that if I search for q=foo:blah&hl.fl=bar, I get highlight
output for instances of "blah" in field "bar".  Is there any way to
avoid that?  I'm using Solr 1.3.


-jwb


Re: Searching on field A gives spurious highlights in field B

2009-02-06 Thread Mike Klaas


On 6-Feb-09, at 12:34 PM, Jeffrey Baker wrote:


Hello all.  First post to the list.


Welcome aboard.


I noticed that if I search for q=foo:blah&hl.fl=bar, I get highlight
output for instances of "blah" in field "bar".  Is there any way to
avoid that?  I'm using Solr 1.3.


Try hl.requireFieldMatch=true

http://wiki.apache.org/solr/HighlightingParameters

-Mike


Re: Searching on field A gives spurious highlights in field B

2009-02-06 Thread Jeffrey Baker
On Fri, Feb 6, 2009 at 3:36 PM, Mike Klaas mike.kl...@gmail.com wrote:

 On 6-Feb-09, at 12:34 PM, Jeffrey Baker wrote:

 Hello all.  First post to the list.

 Welcome aboard.

 I noticed that if I search for q=foo:blah&hl.fl=bar, I get highlight
 output for instances of "blah" in field "bar".  Is there any way to
 avoid that?  I'm using Solr 1.3.

 Try hl.requireFieldMatch=true

Thanks a lot.  I must have mentally skipped that one a dozen times.

-jwb


Re: Realtime Searching..

2009-02-06 Thread Otis Gospodnetic
Michael,

There is no single system that will provide Twitter-like functionality.  You'd 
have to look into Lucene/Solr for searching, memcached (for example) for 
caching, maybe a caching layer in front of Solr (e.g. Varnish, Squid, Apache), 
something to store the data in (e.g. an RDBMS, HBase, HDFS, depending on your 
precise needs), etc.


Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch





From: Michael Austin mausti...@gmail.com
To: solr-user@lucene.apache.org
Sent: Friday, February 6, 2009 3:18:44 PM
Subject: Re: Realtime Searching..

Just to back up and think about if solr/lucene realtime updating is what I
want to begin with..

Would this be something that a twitter type system might use to be more
scalable and fast? Let's just say that I have a site with as much message
traffic as twitter and I want to be able to update and search
fast/realtime.  Would this be the path you would initially send me?

For example, do you know of a system out there that does memcached type fast
caching and lookup but has the ability to look them up with sorting and
filtering?

Thanks


Searchers in single/multi-core environments

2009-02-06 Thread Mark Ferguson
Hello,

My apologies if this topic has already been discussed but I haven't been
able to find a lot of information in the wiki or mailing lists.

I am looking for more information about how searchers work in different
environments. Correct me if I'm mistaken, but my understanding is that in a
single core environment, there is one searcher for the one index which
handles all queries. When a commit occurs, a new searcher is opened up on
the index during the commit. The old searcher is still available until the
commit finishes, at which point the active searcher becomes the new one and
the old searcher is destroyed. This is the purpose of the
maxWarmingSearchers argument -- it is the total number of searchers that can
be open in memory at any given point. What I'm not sure about is how this
number could ever be greater than 2 in a single core environment -- unless
another commit is sent before the new searcher finishes warming?

What I'm also curious about is how searchers are handled in a multi-core
environment. Does the maxWarmSearchers argument apply to the entire set of
cores, or to each individual core? If the latter, how is this handled if
each core uses a different solrconfig.xml and has a different value for
maxWarmSearchers?

Thanks for any information that you can provide.

Mark


Re: Searchers in single/multi-core environments

2009-02-06 Thread Shalin Shekhar Mangar
On Sat, Feb 7, 2009 at 2:51 AM, Mark Ferguson mark.a.fergu...@gmail.comwrote:


 I am looking for more information about how searchers work in different
 environments. Correct me if I'm mistaken, but my understanding is that in a
 single core environment, there is one searcher for the one index which
 handles all queries. When a commit occurs, a new searcher is opened up on
 the index during the commit. The old searcher is still available until the
 commit finishes, at which point the active searcher becomes the new one and
 the old searcher is destroyed.


After commit is called, the postCommit/postOptimize hooks are executed (in
the same thread which called the commit). Then, in a new thread, a new
searcher is opened, auto-warming is performed, newSearcher event listeners
are executed, and the new searcher is registered (i.e. it replaces the old
searcher). If useColdSearcher is true then the auto-warming is skipped. If
waitSearcher=true then the commit thread blocks until these operations
finish.

The old searcher remains available until it finishes all the requests it had
already received before the new searcher was registered.


 This is the purpose of the maxWarmingSearchers argument -- it is the total
 number of searchers that can
 be open in memory at any given point.


Not the total number of searchers, but the total number of on-deck (warming)
searchers. So, the total number of searchers will be maxWarmingSearchers + 1
(for the currently registered searcher).


 What I'm not sure about is how this
 number could ever be greater than 2 in a single core environment -- unless
 another commit is sent before the new searcher finishes warming?


Correct. If warming or the newSearcher event listener takes a lot of time
and you call commit again, another searcher will be created.


 What I'm also curious about is how searchers are handled in a multi-core
 environment. Does the maxWarmSearchers argument apply to the entire set of
 cores, or to each individual core?


It applies to one core, unless of course you are sharing the solrconfig.xml
across multiple cores. Also, if you call core reload, a new core is created
(with its own searcher) which replaces the old core.


 If the latter, how is this handled if
 each core uses a different solrconfig.xml and has a different value for
 maxWarmSearchers?


Each core maintains its configuration separately in memory. Configuration is
not shared between cores (except for the configuration in solr.xml)

Hope that helps.

-- 
Regards,
Shalin Shekhar Mangar.
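For reference, the knobs discussed in this thread live in each core's solrconfig.xml (the values here are illustrative, not from the thread):

```xml
<!-- maximum number of searchers that may be warming in the background at once -->
<maxWarmingSearchers>2</maxWarmingSearchers>
<!-- if true, requests are served by a new searcher before auto-warming completes -->
<useColdSearcher>false</useColdSearcher>
```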


Decrease warmupTime

2009-02-06 Thread Cheng Zhang
First, I'm new to Solr.

I have set up a Solr server and added some documents to it. I noticed that as 
I added more and more docs, the warmupTime became longer and longer. After 
adding 400K docs, I can see the warmupTime is now about 1 minute. Here is one 
log entry:

queryResultCache{lookups=0,hits=0,hitratio=0.00,inserts=6,evictions=0,size=6,warmupTime=56687,cumulative_lookups=2,
cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=2,cumulative_evictions=0}

If I try to insert more docs before the warm-up ends, I get an exception.

Is there any way to decrease this warmupTime? 

Thanks a lot,
Kevin



Re: Searchers in single/multi-core environments

2009-02-06 Thread Mark Ferguson

  What I'm also curious about is how searchers are handled in a multi-core
  environment. Does the maxWarmSearchers argument apply to the entire set
 of
  cores, or to each individual core?


 It applied to one core unless ofcourse, you are sharing the solrconfig.xml
 with multiple cores. Also, if you call core reload, a new core is created
 (with its own searcher) which replaces the old core.


Thanks very much for your time and explanation, it is a huge help. Just to
clarify that I am understanding correctly...

For example, if I have 10 cores and maxWarmSearchers is 2 for each
core, and I send a commit to all of them at once, this will not cause any
exceptions, because each core handles its searchers separately?

Mark


Re: Searchers in single/multi-core environments

2009-02-06 Thread Shalin Shekhar Mangar
On Sat, Feb 7, 2009 at 3:44 AM, Mark Ferguson mark.a.fergu...@gmail.comwrote:


 For example then, if I have 10 cores, and maxWarmSearchers is 2 for each
 core, if I send a commit to all of them at once this will not cause any
 exceptions, because each core handles its searchers separately?


Correct. Though it is a good idea to have a small gap between commits on
each core, so that you don't run into resource issues (depending on how
intensive your postCommit/auto-warming/newSearcher work is).

-- 
Regards,
Shalin Shekhar Mangar.


Re: Decrease warmupTime

2009-02-06 Thread Yonik Seeley
On Fri, Feb 6, 2009 at 5:12 PM, Cheng Zhang zhangyongji...@yahoo.com wrote:
 Is there any way to decrease this warmupTime?

Go into solrconfig.xml and reduce (or eliminate) the autowarm counts
for the caches.

-Yonik


Re: Decrease warmupTime

2009-02-06 Thread Cheng Zhang
Hi Yonik,

I just changed the autowarmCount for queryResultCache, but it did not work: the 
log still shows a warmupTime of about 45 seconds.

  
queryResultCache{lookups=0,hits=0,hitratio=0.00,inserts=6,evictions=0,size=6,warmupTime=44055,cumulative_lookups=1,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=1,cumulative_evictions=0}



Any other suggestion? 

Thanks a lot,
Kevin



- Original Message 
From: Yonik Seeley ysee...@gmail.com
To: solr-user@lucene.apache.org
Sent: Friday, February 6, 2009 5:18:47 PM
Subject: Re: Decrease warmupTime

On Fri, Feb 6, 2009 at 5:12 PM, Cheng Zhang zhangyongji...@yahoo.com wrote:
 Is there any way to decrease this warmupTime?

Go into solrconfig.xml and reduce (or eliminate) the autowarm counts
for the caches.

-Yonik



Re: Decrease warmupTime

2009-02-06 Thread Otis Gospodnetic
Have you restarted Solr after you made the change?
Can you paste your query result cache config?

Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch





From: Cheng Zhang zhangyongji...@yahoo.com
To: solr-user@lucene.apache.org
Sent: Friday, February 6, 2009 11:04:07 PM
Subject: Re: Decrease warmupTime

Hi Yonik,

I just changed the autowarmCount for queryResultCache but it did not work. In 
the log, it still shows warmupTime for autowarmCount is about 45 seconds.

  
queryResultCache{lookups=0,hits=0,hitratio=0.00,inserts=6,evictions=0,size=6,warmupTime=44055,cumulative_lookups=1,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=1,cumulative_evictions=0}



Any other suggestion? 

Thanks a lot,
Kevin



- Original Message 
From: Yonik Seeley ysee...@gmail.com
To: solr-user@lucene.apache.org
Sent: Friday, February 6, 2009 5:18:47 PM
Subject: Re: Decrease warmupTime

On Fri, Feb 6, 2009 at 5:12 PM, Cheng Zhang zhangyongji...@yahoo.com wrote:
 Is there any way to decrease this warmupTime?

Go into solrconfig.xml and reduce (or eliminate) the autowarm counts
for the caches.

-Yonik

Re: Decrease warmupTime

2009-02-06 Thread Cheng Zhang
I did restart the solr server. Here is the config.

<filterCache
  class="solr.LRUCache"
  size="512"
  initialSize="512"
  autowarmCount="128"/>

<!-- queryResultCache caches results of searches - ordered lists of
     document ids (DocList) based on a query, a sort, and the range
     of documents requested. -->
<queryResultCache
  class="solr.LRUCache"
  size="512"
  initialSize="512"
  autowarmCount="0"/>

<!-- documentCache caches Lucene Document objects (the stored fields for each document).
     Since Lucene internal document ids are transient, this cache will not be autowarmed. -->
<documentCache
  class="solr.LRUCache"
  size="512"
  initialSize="512"
  autowarmCount="0"/>

Thx.
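Two things worth checking against the config above (hedged, since the thread does not resolve it): the filterCache still autowarms 128 entries, and any newSearcher/firstSearcher event listeners in solrconfig.xml run their warming queries at the same point, which also shows up as warm-up time. Eliminating cache autowarming entirely would look like:

```xml
<filterCache      class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
<documentCache    class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
```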



- Original Message 
From: Otis Gospodnetic otis_gospodne...@yahoo.com
To: solr-user@lucene.apache.org
Sent: Friday, February 6, 2009 10:40:45 PM
Subject: Re: Decrease warmupTime

Have you restarted Solr after you made the change?
Can you paste your query result cache config?

Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch





From: Cheng Zhang zhangyongji...@yahoo.com
To: solr-user@lucene.apache.org
Sent: Friday, February 6, 2009 11:04:07 PM
Subject: Re: Decrease warmupTime

Hi Yonik,

I just changed the autowarmCount for queryResultCache but it did not work. In 
the log, it still shows warmupTime for autowarmCount is about 45 seconds.

  
queryResultCache{lookups=0,hits=0,hitratio=0.00,inserts=6,evictions=0,size=6,warmupTime=44055,cumulative_lookups=1,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=1,cumulative_evictions=0}



Any other suggestion? 

Thanks a lot,
Kevin



- Original Message 
From: Yonik Seeley ysee...@gmail.com
To: solr-user@lucene.apache.org
Sent: Friday, February 6, 2009 5:18:47 PM
Subject: Re: Decrease warmupTime

On Fri, Feb 6, 2009 at 5:12 PM, Cheng Zhang zhangyongji...@yahoo.com wrote:
 Is there any way to decrease this warmupTime?

Go into solrconfig.xml and reduce (or eliminate) the autowarm counts
for the caches.

-Yonik