Re: exceeded limit of maxWarmingSearchers ERROR

2013-11-18 Thread Erick Erickson
You're using Solr 1.4? That's long enough ago that I've mostly forgotten
the quirks there, sorry.

Erick


Re: exceeded limit of maxWarmingSearchers ERROR

2013-11-17 Thread Loka
Hi Erickson,

Thanks for your reply.

I am getting the following error with Liferay Tomcat:

2013/11/18 07:29:42 ERROR
com.liferay.portal.search.solr.SolrIndexWriterImpl.deleteDocument(SolrIndexWriterImpl.java:90) []

[liferay/search_writer]
org.apache.solr.common.SolrException: Not Found

Not Found

request:
http://10.43.4.155:8080/apache-solr-1.4.1/liferay/update?wt=javabin&version=2.2
org.apache.solr.common.SolrException: Not Found

Not Found

request:
http://10.43.4.155:8080/apache-solr-1.4.1/liferay/update?wt=javabin&version=2.2
	at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:343)
	at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:183)
	at com.liferay.portal.search.solr.server.BasicAuthSolrServer.request(BasicAuthSolrServer.java:93)
	at org.apache.solr.client.solrj.request.UpdateRequest.process(UpdateRequest.java:217)
	at org.apache.solr.client.solrj.SolrServer.deleteById(SolrServer.java:97)
	at com.liferay.portal.search.solr.SolrIndexWriterImpl.deleteDocument(SolrIndexWriterImpl.java:83)
	at com.liferay.portal.search.solr.SolrIndexWriterImpl.updateDocument(SolrIndexWriterImpl.java:133)
	at com.liferay.portal.kernel.search.messaging.SearchWriterMessageListener.doReceive(SearchWriterMessageListener.java:86)
	at com.liferay.portal.kernel.search.messaging.SearchWriterMessageListener.receive(SearchWriterMessageListener.java:33)
	at com.liferay.portal.kernel.messaging.InvokerMessageListener.receive(InvokerMessageListener.java:63)
	at com.liferay.portal.kernel.messaging.ParallelDestination$1.run(ParallelDestination.java:61)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
	at java.lang.Thread.run(Thread.java:679)...

Can you help me understand why I am getting this error?

PFA the same error log and the solr-spring.xml files.

Regards,
Lokanadham Ganta


Re: exceeded limit of maxWarmingSearchers ERROR

2013-11-15 Thread Erick Erickson
Where did you get that syntax? I've never seen that before.

What you want to configure is the maxTime in your
autocommit and autosoftcommit sections of solrconfig.xml,
as:

 <autoCommit>
   <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
   <openSearcher>false</openSearcher>
 </autoCommit>

<!-- softAutoCommit is like autoCommit except it causes a
 'soft' commit which only ensures that changes are visible
 but does not ensure that data is synced to disk.  This is
 faster and more near-realtime friendly than a hard commit.
  -->

 <autoSoftCommit>
   <maxTime>${solr.autoSoftCommit.maxTime:1}</maxTime>
 </autoSoftCommit>

And you do NOT want to commit from your client.

Depending on how long autowarm takes, you may still see this error,
so check how much autowarming you're doing, i.e. how you've
configured the caches in solrconfig.xml and what you
have for newSearcher and firstSearcher.

I'd start with autowarm numbers of, maybe, 16 or so at most.
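As a rough illustration of where those autowarm numbers live, here is a hedged sketch of the cache section of solrconfig.xml; the cache classes and sizes are the stock examples, not tuned recommendations for this setup:

```xml
<!-- Smaller autowarmCount values mean less work per new searcher,
     which lowers the chance of overlapping warming searchers. -->
<filterCache      class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="16"/>
<queryResultCache class="solr.LRUCache"     size="512" initialSize="512" autowarmCount="16"/>
<documentCache    class="solr.LRUCache"     size="512" initialSize="512" autowarmCount="0"/>
```

Also check any newSearcher/firstSearcher listener queries in the same file: each one runs against every newly opened searcher and adds to warming time.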

Best,
Erick



Re: exceeded limit of maxWarmingSearchers ERROR

2013-11-15 Thread Loka
Erickson,

Thanks for your reply. Before your reply, I had googled, found the following,
and added it under the <updateHandler class="solr.DirectUpdateHandler2"> tag
of the solrconfig.xml file:

<autoCommit>
  <maxTime>3</maxTime>
</autoCommit>

<autoSoftCommit>
  <maxTime>1</maxTime>
</autoSoftCommit>

Is the above fine, or should I go strictly with your suggestion, i.e. as below:

<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<!-- softAutoCommit is like autoCommit except it causes a
     'soft' commit which only ensures that changes are visible
     but does not ensure that data is synced to disk.  This is
     faster and more near-realtime friendly than a hard commit.
  -->

<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:1}</maxTime>
</autoSoftCommit>

Please confirm.

But how can I check how much autowarming I am doing? As of now I have set
maxWarmingSearchers to 2; should I increase the value?


Regards,
Lokanadham Ganta



Re: exceeded limit of maxWarmingSearchers ERROR

2013-11-15 Thread Loka
Hi Erickson,

I have seen the following also from Google; can I use the same in
<updateHandler class="solr.DirectUpdateHandler2">:

<commitWithin><softCommit>false</softCommit></commitWithin>

If the above is correct to add, can I also add the tags below in
<updateHandler class="solr.DirectUpdateHandler2"> along with the above tag:

<autoCommit>
  <maxTime>3</maxTime>
</autoCommit>

<autoSoftCommit>
  <maxTime>1</maxTime>
</autoSoftCommit>

So finally, it will look like this:

<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>3</maxTime>
  </autoCommit>

  <autoSoftCommit>
    <maxTime>1</maxTime>
  </autoSoftCommit>

  <commitWithin><softCommit>false</softCommit></commitWithin>
</updateHandler>

Is the above fine?


Regards,
Lokanadham Ganta





Re: exceeded limit of maxWarmingSearchers ERROR

2013-11-15 Thread Erick Erickson
That's a fine place to start. This form:

<maxTime>${solr.autoCommit.maxTime:15000}</maxTime>

just allows you to define a system variable to override the 15 second default, like:
java -Dsolr.autoCommit.maxTime=3 -jar start.jar



Re: exceeded limit of maxWarmingSearchers ERROR

2013-11-14 Thread Loka
Hi Naveen,
I am also getting a similar problem, where I do not know how to use the
commitWithin tag. Can you help me with how to use it, and can you give me
an example?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/exceeded-limit-of-maxWarmingSearchers-ERROR-tp3252844p4100864.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: exceeded limit of maxWarmingSearchers ERROR

2013-11-14 Thread Erick Erickson
CommitWithin is configured in solrconfig.xml as the <maxTime> tag in either
the <autoCommit> or the <autoSoftCommit> section. I recommend you do use this.
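For concreteness, a hedged sketch of the solrconfig.xml shape being described; the tag names match stock Solr configs, but the 15000 ms hard-commit and 1000 ms soft-commit values are illustrative, not recommendations from this thread:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commit: flushes index data to disk. openSearcher=false
       avoids opening (and warming) a new searcher on every hard commit. -->
  <autoCommit>
    <maxTime>15000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>

  <!-- Soft commit: makes changes visible to searches without a full flush. -->
  <autoSoftCommit>
    <maxTime>1000</maxTime>
  </autoSoftCommit>
</updateHandler>
```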

The other way you can do it is if you're using SolrJ, one of the
forms of the server.add() method takes a number of milliseconds
to force a commit.
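A hedged sketch of that SolrJ form, assuming a SolrJ 3.x-era client jar on the classpath; the URL, document id, and 60-second figure are placeholder choices for illustration, not values from this thread:

```java
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class CommitWithinAdd {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8080/solr");
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");
        // Ask Solr to commit this add within 60 seconds, instead of
        // issuing an explicit commit() from the client after every add.
        server.add(doc, 60000);
    }
}
```

This only illustrates the add(doc, commitWithinMs) overload; compile it against the SolrJ client jar matching your Solr version and a running Solr instance.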

You really, really do NOT want to use ridiculously short times for this
like a few milliseconds. That will cause new searchers to be
warmed, and when too many of them are warming at once you
get this error.

Seriously, make your commitWithin or autocommit parameters
as long as you can, for many reasons.

Here's a bunch of background:
http://searchhub.org/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

Best,
Erick



Re: exceeded limit of maxWarmingSearchers ERROR

2013-11-14 Thread Loka
Hi Erickson,

Thanks for your reply. Basically, I used the commitWithin tag as below in the
solrconfig.xml file:

<requestHandler name="/update" class="solr.XmlUpdateRequestHandler">
  <lst name="defaults">
    <str name="update.processor">dedupe</str>
  </lst>
  <add commitWithin="1"/>
</requestHandler>

<updateRequestProcessorChain name="dedupe">
  <processor class="org.apache.solr.update.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">id</str>
    <bool name="overwriteDupes">false</bool>
    <str name="fields">name,features,cat</str>
    <str name="signatureClass">org.apache.solr.update.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>

But this fix did not solve my problem; I got the same error again.
PFA schema.xml, solrconfig.xml, solr-spring.xml, and messaging-spring.xml;
can you suggest where I am going wrong?

Regards,
Lokanadham Ganta










- Original Message -
From: Erick Erickson [via Lucene] ml-node+s472066n4100924...@n3.nabble.com
To: Loka lokanadham.ga...@zensar.in
Sent: Thursday, November 14, 2013 8:38:17 PM
Subject: Re: exceeded limit of maxWarmingSearchers ERROR

CommitWithin is either configured in solrconfig.xml for the 
autoCommit or autoSoftCommit tags as the maxTime tag. I 
recommend you do use this. 

The other way you can do it is if you're using SolrJ, one of the 
forms of the server.add() method takes a number of milliseconds 
to force a commit. 

You really, really do NOT want to use ridiculously short times for this 
like a few milliseconds. That will cause new searchers to be 
warmed, and when too many of them are warming at once you 
get this error. 

Seriously, make your commitWithin or autocommit parameters 
as long as you can, for many reasons. 
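
The advice above can be sketched as a solrconfig.xml fragment (the one-minute value is illustrative; tune it to your indexing load):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>60000</maxTime>  <!-- hard commit at most once a minute -->
  </autoCommit>
</updateHandler>

<query>
  <!-- The limit whose breach produces the "exceeded limit" error -->
  <maxWarmingSearchers>2</maxWarmingSearchers>
</query>
```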

Here's a bunch of background: 
http://searchhub.org/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
 

Best, 
Erick 


On Thu, Nov 14, 2013 at 5:13 AM, Loka  [hidden email]  wrote: 


 Hi Naveen, 
 I am also getting a similar problem: I do not know how to use the 
 commitWithin tag. Can you help me with how to use it, and give me an example? 
 
 
 
 -- 
 View this message in context: 
 http://lucene.472066.n3.nabble.com/exceeded-limit-of-maxWarmingSearchers-ERROR-tp3252844p4100864.html
  
 Sent from the Solr - User mailing list archive at Nabble.com. 
 






solr-spring.xml (2K) 
http://lucene.472066.n3.nabble.com/attachment/4101152/0/solr-spring.xml
messaging-spring.xml (2K) 
http://lucene.472066.n3.nabble.com/attachment/4101152/1/messaging-spring.xml
schema.xml (6K) 
http://lucene.472066.n3.nabble.com/attachment/4101152/2/schema.xml
solrconfig.xml (61K) 
http://lucene.472066.n3.nabble.com/attachment/4101152/3/solrconfig.xml




--
View this message in context: 
http://lucene.472066.n3.nabble.com/exceeded-limit-of-maxWarmingSearchers-ERROR-tp3252844p4101152.html
Sent from the Solr - User mailing list archive at Nabble.com.

Re: exceeded limit of maxWarmingSearchers ERROR

2011-08-17 Thread Naveen Gupta
Hi Nagendra,

Thanks a lot. I will start working on NRT today; meanwhile the old settings
(increased warming-searcher limit on the master) have not given me trouble so far.

NRT will be more suitable for us. I will work on it, analyze the
performance, and share the results with you.

Thanks
Naveen

2011/8/17 Nagendra Nagarajayya nnagaraja...@transaxtions.com

 Naveen:

 See below:

 *NRT with Apache Solr 3.3 and RankingAlgorithm does need a commit for a

 document to become searchable*. Any document that you add through update
 becomes  immediately searchable. So no need to commit from within your
 update client code.  Since there is no commit, the cache does not have to
 be
 cleared or the old searchers closed or  new searchers opened, and warmed
 (error that you are facing).


 Looking at the link which you mentioned is clearly what we wanted. But the
 real thing is that you have RA does need a commit for  a document to
 become
 searchable (please take a look at bold sentence) .


 Yes, as said earlier you do not need a commit. A document becomes
 searchable as soon as you add it. Below is an example of adding a document
 with curl (this from the wiki at
 http://solr-ra.tgels.com/wiki/en/Near_Real_Time_Search_ver_3.x):

 curl "http://localhost:8983/solr/update/csv?stream.file=/tmp/x1.csv&encapsulator=%1f"


 There is no commit included. The contents of the document become
 immediately searchable.


  In future, for more loads, can it cater to Master Slave (Replication) and
 etc to scale and perform better? If yes, we would like to go for NRT and
 looking at the performance described in the article is acceptable. We were
 expecting the same real time performance for a single user.


 There are no changes to Master/Slave (replication) process. So any changes
 you have currently will work as before or if you enable replication later,
 it should still work as without NRT.


  What about multiple users? Should we wait 1-2 secs before calling the
 curl request to make SOLR perform better, or will it internally handle
 multiple requests (multithreaded, etc.)?


 Again for updating documents, you do not have to change your current
 process or code. Everything remains the same, except that if you were
 including commit, you do not include commit in your update statements. There
 is no change to the existing update process so internally it will not queue
 or multi-thread updates. It is as in existing Solr functionality, there no
 changes to the existing setup.

 Regarding perform better, in the Wiki paper  every update through curl adds
 (streams) 500 documents. So you could take this approach. (this was
 something that I chose randomly to test the performance but seems to be
 good)


  What would be the doc size (10,000 docs) to let the JVM perform better? Have you
 done any kind of benchmarking in terms of multi-threaded and multi-user for
 NRT, and also JVM tuning in terms of SOLR server performance? Any kind of
 performance analysis would help us decide quickly to switch over to NRT.


 The performance discussed in the wiki paper uses the MBArtists index. The
 MBArtists index is the index used as one of the examples in the book, Solr
 1.4 Enterprise Search Server. You can download and build this index if you
 have the book or can also download the contents from musicbrainz.org.
  Each doc may be about 100 bytes and has about 7 fields. Performance with
 wikipedia's xml dump: commenting out the skipdoc field (include redirects) in
 the dataconfig.xml [ dataimport handler ], the update performance is about
 15000 docs / sec (100 million docs); with the skipdoc enabled (does not skip
 redirects), the performance is about 1350 docs / sec [ time spent mostly in
 converting/validating XML rather than the actual update ] (about 11 million docs).
  Documents in wikipedia can be quite big, at least an avg size of about
 2500-5000 bytes or more.

 I would suggest that you download and give NRT with Apache Solr 3.3 and
 RankingAlgorithm a try and get a feel of it as this would be the best way to
 see how your config works with it.


  Questions in terms for switching over to NRT,


 1.Should we upgrade to SOLR 4.x ?

 2. Any benchmarking (10,000 docs/secs).  The question here is more
 specific

 the detail of individual doc (fields, number of fields, fields size,
 parameters affecting performance with faceting or w/o faceting)


 Please see the MBArtists index as discussed above.



  3. What about multiple users ?

  A user in real time might be having a large doc count of 0.1 million. How to
 break and analyze which one is better (though it is our task to do). But
 still any kind of break-up will help us. Imagine a user inbox.


 You may be able to stream the documents in a set as in the example in the
 wiki. The example streams 500 documents at a time. The wiki paper has an
 example of a document that was used. You could copy/paste that to try it out.
Re: exceeded limit of maxWarmingSearchers ERROR

2011-08-16 Thread Naveen Gupta
Nagendra

You wrote,

Naveen:

*NRT with Apache Solr 3.3 and RankingAlgorithm does need a commit for a
document to become searchable*. Any document that you add through update
becomes  immediately searchable. So no need to commit from within your
update client code.  Since there is no commit, the cache does not have to be
cleared or the old searchers closed or  new searchers opened, and warmed
(error that you are facing).


Looking at the link which you mentioned is clearly what we wanted. But the
real thing is that you have RA does need a commit for  a document to become
searchable (please take a look at bold sentence) .

In future, for higher loads, can it cater to Master/Slave (replication) etc.
to scale and perform better? If yes, we would like to go for NRT; the
performance described in the article looks acceptable. We were expecting the
same real-time performance for a single user.

What about multiple users? Should we wait 1-2 secs before calling the
curl request to make SOLR perform better, or will it internally handle
multiple requests (multithreaded, etc.)?

What would be the doc size (10,000 docs) to let the JVM perform better? Have you
done any kind of benchmarking in terms of multi-threaded and multi-user for
NRT, and also JVM tuning in terms of SOLR server performance? Any kind of
performance analysis would help us decide quickly to switch over to NRT.

Questions in terms for switching over to NRT,


1.Should we upgrade to SOLR 4.x ?

2. Any benchmarking (10,000 docs/secs).  The question here is more specific

the detail of individual doc (fields, number of fields, fields size,
parameters affecting performance with faceting or w/o faceting)

3. What about multiple users ?

A user in real time might be having a large doc count of 0.1 million. How to
break and analyze which one is better (though it is our task to do). But
still any kind of break-up will help us. Imagine a user inbox.

4. JVM tuning and performance result based on Multithreaded environment.

5. Machine Details (RAM, CPU, and settings from SOLR perspective).

Hoping that you are getting my point. We want to benchmark the performance.
If you can involve me in your group, that would be great.

Thanks
Naveen



2011/8/15 Nagendra Nagarajayya nnagaraja...@transaxtions.com

 Bill:

 I did look at Marks performance tests. Looks very interesting.

 Here is the Apache Solr 3.3 with RankingAlgorithm NRT performance:
 http://solr-ra.tgels.com/wiki/en/Near_Real_Time_Search_ver_3.x


 Regards

 - Nagendra Nagarajayya
 http://solr-ra.tgels.org
 http://rankingalgorithm.tgels.org



 On 8/14/2011 7:47 PM, Bill Bell wrote:

 I understand.

 Have you looked at Mark's patch? From his performance tests, it looks
 pretty good.

 When would RA work better?

 Bill


  On 8/14/11 8:40 PM, Nagendra Nagarajayya nnagaraja...@transaxtions.com
  wrote:

  Bill:

 The technical details of the NRT implementation in Apache Solr with
 RankingAlgorithm (SOLR-RA) is available here:

  http://solr-ra.tgels.com/papers/NRT_Solr_RankingAlgorithm.pdf

 (Some changes for Solr 3.x, but for most it is as above)

 Regarding support for 4.0 trunk, should happen sometime soon.

 Regards

 - Nagendra Nagarajayya
 http://solr-ra.tgels.org
  http://rankingalgorithm.tgels.org





 On 8/14/2011 7:11 PM, Bill Bell wrote:

 OK,

  I'll ask the elephant in the room...

 What is the difference between the new UpdateHandler from Mark and the
 SOLR-RA?

 The UpdateHandler works with 4.0 does SOLR-RA work with 4.0 trunk?

 Pros/Cons?


  On 8/14/11 8:10 PM, Nagendra Nagarajayya nnagaraja...@transaxtions.com
  wrote:

  Naveen:

 NRT with Apache Solr 3.3 and RankingAlgorithm does need a commit for a
 document to become searchable. Any document that you add through update
 becomes  immediately searchable. So no need to commit from within your
 update client code.  Since there is no commit, the cache does not have
 to be cleared or the old searchers closed or  new searchers opened, and
 warmed (error that you are facing).

 Regards

 - Nagendra Nagarajayya
 http://solr-ra.tgels.org
  http://rankingalgorithm.tgels.org



 On 8/14/2011 10:37 AM, Naveen Gupta wrote:

 Hi Mark/Erick/Nagendra,

 I was not very confident about NRT at that point of time, when we
 started
 project almost 1 year ago, definitely i would try NRT and see the
 performance.

 The current requirement was working fine till we were using
 commitWithin 10
 millisecs in the XMLDocument which we were posting to SOLR.

  Because of this, we were getting very poor performance (almost 3 mins for
  15,000 docs) per user. There are many parallel users committing to our SOLR.

  So we removed the commitWithin, and hence performance was much much better.

Re: exceeded limit of maxWarmingSearchers ERROR

2011-08-16 Thread Nagendra Nagarajayya

Naveen:

See below:

*NRT with Apache Solr 3.3 and RankingAlgorithm does need a commit for a
document to become searchable*. Any document that you add through update
becomes  immediately searchable. So no need to commit from within your
update client code.  Since there is no commit, the cache does not have to be
cleared or the old searchers closed or  new searchers opened, and warmed
(error that you are facing).


Looking at the link which you mentioned is clearly what we wanted. But the
real thing is that you have RA does need a commit for  a document to become
searchable (please take a look at bold sentence) .



Yes, as said earlier you do not need a commit. A document becomes 
searchable as soon as you add it. Below is an example of adding a 
document with curl (this from the wiki at 
http://solr-ra.tgels.com/wiki/en/Near_Real_Time_Search_ver_3.x):


curl "http://localhost:8983/solr/update/csv?stream.file=/tmp/x1.csv&encapsulator=%1f"


There is no commit included. The contents of the document become 
immediately searchable.



In future, for more loads, can it cater to Master Slave (Replication) and
etc to scale and perform better? If yes, we would like to go for NRT and
looking at the performance described in the article is acceptable. We were
expecting the same real time performance for a single user.



There are no changes to Master/Slave (replication) process. So any 
changes you have currently will work as before or if you enable 
replication later, it should still work as without NRT.



What about multiple users? Should we wait 1-2 secs before calling the
curl request to make SOLR perform better, or will it internally handle
multiple requests (multithreaded, etc.)?


Again, for updating documents, you do not have to change your current 
process or code. Everything remains the same, except that if you were 
including a commit, you do not include it in your update statements. 
There is no change to the existing update process, so internally it will 
not queue or multi-thread updates. It is as in existing Solr 
functionality; there are no changes to the existing setup.


Regarding performing better: in the wiki paper, every update through curl 
adds (streams) 500 documents. So you could take this approach. (This was 
something that I chose randomly to test the performance, but it seems to 
be good.)



What would be the doc size (10,000 docs) to let the JVM perform better? Have you
done any kind of benchmarking in terms of multi-threaded and multi-user for
NRT, and also JVM tuning in terms of SOLR server performance? Any kind of
performance analysis would help us decide quickly to switch over to NRT.



The performance discussed in the wiki paper uses the MBArtists index. 
The MBArtists index is the index used as one of the examples in the 
book, Solr 1.4 Enterprise Search Server. You can download and build this 
index if you have the book, or you can also download the contents from 
musicbrainz.org. Each doc may be about 100 bytes and has about 7 fields. 
Performance with wikipedia's xml dump: commenting out the skipdoc field 
(include redirects) in the dataconfig.xml [ dataimport handler ], the 
update performance is about 15000 docs / sec (100 million docs); with 
the skipdoc enabled (does not skip redirects), the performance is about 
1350 docs / sec [ time spent mostly in converting/validating XML rather 
than the actual update ] (about 11 million docs). Documents in wikipedia 
can be quite big, at least an avg size of about 2500-5000 bytes or more.


I would suggest that you download and give NRT with Apache Solr 3.3 and 
RankingAlgorithm a try and get a feel of it as this would be the best 
way to see how your config works with it.



Questions in terms for switching over to NRT,


1.Should we upgrade to SOLR 4.x ?

2. Any benchmarking (10,000 docs/secs).  The question here is more specific

the detail of individual doc (fields, number of fields, fields size,
parameters affecting performance with faceting or w/o faceting)


Please see the MBArtists index as discussed above.



3. What about multiple users ?

A user in real time might be having a large doc count of 0.1 million. How to
break and analyze which one is better (though it is our task to do). But
still any kind of break-up will help us. Imagine a user inbox.



You maybe able to stream the documents in a set as in the example in the 
wiki. The example streams 500 documents at a time. The wiki paper has an 
example of a document that was used. You could copy/paste that to try it 
out.



4. JVM tuning and performance result based on Multithreaded environment.

5. Machine Details (RAM, CPU, and settings from SOLR perspective).



Default Solr settings with the shipped jetty container. The startup 
script used is available when you download Solr 3.3 with 
RankingAlgorithm. It has mx set to 2Gb and uses the default collector 
with parallel collection enabled for the young generation.  The system 
is a x86_64 Linux (2.6 kernel), 2 core (2.5Ghz) and uses internal 

Re: exceeded limit of maxWarmingSearchers ERROR

2011-08-14 Thread Erick Erickson
You either have to go to near real time (NRT), which is under
development but not committed to trunk yet, or just stop
warming up searchers and let the first user to open a searcher
pay the penalty for warmup (useColdSearcher, as I remember).

Although I'd also ask whether this is a reasonable requirement,
that the messages be searchable within milliseconds. Is 1 minute
really too much time? 5 minutes? You can estimate the minimum time
you can get away with by looking at the warmup times on the admin/stats
page.
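
In solrconfig.xml the cold-searcher switch mentioned above is spelled useColdSearcher and lives in the query section; a sketch:

```xml
<query>
  <!-- Serve requests from a not-yet-warmed searcher instead of blocking -->
  <useColdSearcher>true</useColdSearcher>
</query>
```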

Best
Erick

On Sat, Aug 13, 2011 at 9:47 PM, Naveen Gupta nkgiit...@gmail.com wrote:
 Hi,

 Most of the settings are default.

 We have single node (Memory 1 GB, Index Size 4GB)

 We have a requirement where we are doing very fast commit. This is kind of
 real time requirement where we are polling many threads from third party and
 indexes into our system.

 We want these results to be available soon.

 We are committing for each user (we may have 10k threads, and inside that one
 thread may have 10 messages). So overall documents per user will be
 around 0.1 million (100,000).

 Earlier we were using commitWithin 10 milliseconds inside the document,
 but that was slowing the indexing, and we were not getting any error.

 As we removed the commitWithin, indexing became very fast. But after that
 we started experiencing this error in the system.

 As I read in many forums, everybody said this is happening because of a very
 fast commit rate, but what is the solution for our problem?

 We are using CURL to post the data and commit

 Also till now we are using default solrconfig.

 Aug 14, 2011 12:12:04 AM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Error opening new searcher.
 exceeded limit of maxWarmingSearchers=2, try again later.
        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1052)
        at
 org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:424)
        at
 org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
        at
 org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:177)
        at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:77)
        at
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:55)
        at
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1360)
        at
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
        at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
        at
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
        at
 org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
        at
 org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
        at
 org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        at java.lang.Thread.run(Thread.java:662)



Re: exceeded limit of maxWarmingSearchers ERROR

2011-08-14 Thread Mark Miller

On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:

 You either have to go to near real time (NRT), which is under
 development, but not committed to trunk yet 

NRT support is committed to trunk.

- Mark Miller
lucidimagination.com










Re: exceeded limit of maxWarmingSearchers ERROR

2011-08-14 Thread Nagendra Nagarajayya

Naveen:

You should try NRT with Apache Solr 3.3 and RankingAlgorithm. You can 
update 10,000 documents / sec while also concurrently searching. You can 
set commit  freq to about 15 mins or as desired. The 10,000 document 
update performance is with the MBArtists index on a dual core Linux 
system. So you may be able to see similar performance on your system. 
You can get more details of the NRT implementation from here:


http://solr-ra.tgels.org/wiki/en/Near_Real_Time_Search_ver_3.x

You can download Apache Solr 3.3 with RankingAlgorithm from here:

http://solr-ra.tgels.org/

(There are no changes to your existing setup, everything should work as 
earlier except for adding the realtime tag to your solrconfig.xml)


Regards

- Nagendra Nagarajayya
http://solr-ra.tgels.org
http://rankingalgorithm.tgels.org



On 8/13/2011 6:47 PM, Naveen Gupta wrote:

Hi,

Most of the settings are default.

We have single node (Memory 1 GB, Index Size 4GB)

We have a requirement where we are doing very fast commit. This is kind of
real time requirement where we are polling many threads from third party and
indexes into our system.

We want these results to be available soon.

We are committing for each user (we may have 10k threads, and inside that one
thread may have 10 messages). So overall documents per user will be
around 0.1 million (100,000).

Earlier we were using commitWithin 10 milliseconds inside the document,
but that was slowing the indexing, and we were not getting any error.

As we removed the commitWithin, indexing became very fast. But after that
we started experiencing this error in the system.

As I read in many forums, everybody said this is happening because of a very
fast commit rate, but what is the solution for our problem?

We are using CURL to post the data and commit

Also till now we are using default solrconfig.

Aug 14, 2011 12:12:04 AM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: Error opening new searcher.
exceeded limit of maxWarmingSearchers=2, try again later.
 at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1052)
 at
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:424)
 at
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
 at
org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:177)
 at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:77)
 at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:55)
 at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1360)
 at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
 at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
 at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
 at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
 at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
 at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
 at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
 at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
 at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
 at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
 at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
 at
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
 at java.lang.Thread.run(Thread.java:662)





Re: exceeded limit of maxWarmingSearchers ERROR

2011-08-14 Thread Erick Erickson
Ah, thanks, Mark... I must have been looking at the wrong JIRAs.

Erick

On Sun, Aug 14, 2011 at 10:02 AM, Mark Miller markrmil...@gmail.com wrote:

 On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:

 You either have to go to near real time (NRT), which is under
 development, but not committed to trunk yet

 NRT support is committed to trunk.

 - Mark Miller
 lucidimagination.com











Re: exceeded limit of maxWarmingSearchers ERROR

2011-08-14 Thread Naveen Gupta
Hi Mark/Erick/Nagendra,

I was not very confident about NRT at that point of time, when we started
the project almost 1 year ago; definitely I will try NRT and see the
performance.

The current requirement was working fine till we were using commitWithin 10
millisecs in the XMLDocument which we were posting to SOLR.

Because of this, we were getting very poor performance (almost 3 mins for
15,000 docs) per user. There are many parallel users committing to our SOLR.

So we removed the commitWithin, and hence performance was much much better.

But then we are getting this maxWarmingSearchers error, because we are
committing separately as a curl request once the entire doc set is submitted
for indexing.

The question here is: what is the difference between commitWithin and commit
(apart from the fact that commit takes memory, processing, and additional
hardware usage)?

We want it to be visible as soon as possible because we are applying many
business rules on top of the results (older indexes as well as new ones) and
applying different filters.

Up to 5 mins is fine for us, but beyond that we need to think about other
optimizations.

We will definitely try NRT. But please tell us what other options we can
apply in order to optimize.

Thanks
Naveen


On Sun, Aug 14, 2011 at 9:42 PM, Erick Erickson erickerick...@gmail.comwrote:

 Ah, thanks, Mark... I must have been looking at the wrong JIRAs.

 Erick

 On Sun, Aug 14, 2011 at 10:02 AM, Mark Miller markrmil...@gmail.com
 wrote:
 
  On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:
 
  You either have to go to near real time (NRT), which is under
  development, but not committed to trunk yet
 
  NRT support is committed to trunk.
 
  - Mark Miller
  lucidimagination.com
 
 
 
 
 
 
 
 
 



Re: exceeded limit of maxWarmingSearchers ERROR

2011-08-14 Thread Mark Miller
It's somewhat confusing - I'll straighten it out though. I left the issue open 
to keep me from taking forever to doc it - hasn't helped much yet - but maybe 
later today...

On Aug 14, 2011, at 12:12 PM, Erick Erickson wrote:

 Ah, thanks, Mark... I must have been looking at the wrong JIRAs.
 
 Erick
 
 On Sun, Aug 14, 2011 at 10:02 AM, Mark Miller markrmil...@gmail.com wrote:
 
 On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:
 
 You either have to go to near real time (NRT), which is under
 development, but not committed to trunk yet
 
 NRT support is committed to trunk.
 
 - Mark Miller
 lucidimagination.com
 
 
 
 
 
 
 
 
 

- Mark Miller
lucidimagination.com










Re: exceeded limit of maxWarmingSearchers ERROR

2011-08-14 Thread Peter Sturge
It's worth noting that the fast commit rate is only an indirect part
of the issue you're seeing. As the error comes from cache warming, a
consequence of committing, it's not the fault of committing directly.
It's well worth having a good close look at exactly what your caches
are doing when they are warmed, and trying as much as possible to
remove any unneeded facet/field caching etc.
The time it takes to repopulate the caches causes the error: if it's
slower than the commit rate, you'll get into the 'try again later'
spiral.

There are a number of ways to help mitigate this - NRT is certainly
the [hopefully near] future for this. Other strategies include
distributed search/cloud/ZK: splitting the index into logical shards,
so your commits and their associated caches are smaller and more
targeted. You can also use two Solr instances, one optimized for
writes/commits and one for reads (write commits are async of the 'read'
instance); plus there are customized solutions like RankingAlgorithm,
Zoie etc.
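
In the spirit of that advice, autowarming can be trimmed in the query section of solrconfig.xml (sizes are illustrative; autowarmCount=0 means a new searcher opens with cold caches but becomes available immediately):

```xml
<query>
  <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
  <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
</query>
```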


On Sun, Aug 14, 2011 at 2:47 AM, Naveen Gupta nkgiit...@gmail.com wrote:
 Hi,

 Most of the settings are default.

 We have single node (Memory 1 GB, Index Size 4GB)

 We have a requirement where we are doing very fast commit. This is kind of
 real time requirement where we are polling many threads from third party and
 indexes into our system.

 We want these results to be available soon.

 We are committing for each user (we may have 10k threads, and inside that one
 thread may have 10 messages). So overall documents per user will be
 around 0.1 million (100,000).

 Earlier we were using commitWithin 10 milliseconds inside the document,
 but that was slowing the indexing, and we were not getting any error.

 As we removed the commitWithin, indexing became very fast. But after that
 we started experiencing this error in the system.

 As I read in many forums, everybody said this is happening because of a very
 fast commit rate, but what is the solution for our problem?

 We are using CURL to post the data and commit

 Also till now we are using default solrconfig.

 Aug 14, 2011 12:12:04 AM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Error opening new searcher.
 exceeded limit of maxWarmingSearchers=2, try again later.
        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1052)
        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:424)
        at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
        at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:177)
        at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:77)
        at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:55)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1360)
        at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        at java.lang.Thread.run(Thread.java:662)
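
[Editor's note: since the default solrconfig is in use here, the two knobs that govern this error live in solrconfig.xml. The fragment below is only an illustrative sketch for Solr 3.x, with example values rather than recommendations: autoCommit lets Solr batch commits server-side instead of the client committing per request, and maxWarmingSearchers is the limit named in the error.]

```xml
<!-- solrconfig.xml (Solr 3.x), illustrative fragment with example values -->
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Server-side batched commits: commit after N docs or T ms,
       whichever comes first, instead of per client request. -->
  <autoCommit>
    <maxDocs>10000</maxDocs>
    <maxTime>60000</maxTime>
  </autoCommit>
</updateHandler>

<query>
  <!-- The limit named in the error; raising it mostly hides the
       symptom rather than fixing a too-fast commit rate. -->
  <maxWarmingSearchers>2</maxWarmingSearchers>
</query>
```
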



Re: exceeded limit of maxWarmingSearchers ERROR

2011-08-14 Thread Nagendra Nagarajayya

Naveen:

NRT with Apache Solr 3.3 and RankingAlgorithm does not need a commit for a 
document to become searchable. Any document that you add through update 
becomes immediately searchable, so there is no need to commit from within your 
update client code. Since there is no commit, the cache does not have 
to be cleared, nor do old searchers need to be closed or new searchers opened 
and warmed (the error that you are facing).


Regards

- Nagendra Nagarajayya
http://solr-ra.tgels.org
http://rankingalgorithm.tgels.org



On 8/14/2011 10:37 AM, Naveen Gupta wrote:

Hi Mark/Erick/Nagendra,

I was not very confident about NRT at that point of time, when we started the
project almost 1 year ago; I would definitely try NRT and see the
performance.

The current requirement was working fine while we were using commitWithin 10
millisecs in the XML document which we were posting to SOLR.

But due to that, we were getting very poor performance (almost 3 mins for
15,000 docs) per user. There are many parallel users committing to our SOLR.

So we removed the commitWithin, and hence performance was much much better.

But then we are getting this maxWarmingSearchers error, because we are
committing separately, as a curl request, after the entire doc is submitted
for indexing.

The question here is: what is the difference between commitWithin and commit
(apart from the fact that commit takes memory and processes and additional
hardware usage)?

We want it to be visible as soon as possible, since we are applying many
business rules on top of the results (older indexes as well as new ones) and
applying different filters.

Up to 5 mins is fine for us, but more than that and we need to think about other
optimizations.

We will definitely try NRT. But please tell me about other options which we can
apply in order to optimize.

Thanks
Naveen


On Sun, Aug 14, 2011 at 9:42 PM, Erick Erickson <erickerick...@gmail.com> wrote:


Ah, thanks, Mark... I must have been looking at the wrong JIRAs.

Erick

On Sun, Aug 14, 2011 at 10:02 AM, Mark Miller <markrmil...@gmail.com> wrote:

On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:


You either have to go to near real time (NRT), which is under
development, but not committed to trunk yet

NRT support is committed to trunk.

- Mark Miller
lucidimagination.com
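
[Editor's note: NRT is the thread's eventual answer, but it is not available on stock Solr 3.3. One interim client-side option for the commitWithin-vs-commit question above is to keep posting documents without any commit and throttle the explicit commits, so that at most one commit (and therefore at most one warming searcher) is triggered per interval. Below is a minimal sketch of that idea in plain Java; the helper class, its name, and the 5-second interval are illustrative assumptions, not anything from this thread or from Solr's API.]

```java
// Sketch: allow at most one explicit commit per interval, so warming
// searchers never pile up past maxWarmingSearchers. Hypothetical helper,
// not part of Solr or SolrJ.
public class CommitThrottler {
    private final long minIntervalMs;   // minimum gap between commits
    private boolean committedOnce = false;
    private long lastCommitMs;

    public CommitThrottler(long minIntervalMs) {
        this.minIntervalMs = minIntervalMs;
    }

    /** Returns true (and records the time) when a commit should be issued now. */
    public synchronized boolean shouldCommit(long nowMs) {
        if (!committedOnce || nowMs - lastCommitMs >= minIntervalMs) {
            committedOnce = true;
            lastCommitMs = nowMs;
            return true;
        }
        return false; // skip this commit; a later one will pick up the docs
    }

    public static void main(String[] args) {
        // Interval chosen to exceed the searcher warm-up time (assumption).
        CommitThrottler throttler = new CommitThrottler(5000);
        System.out.println(throttler.shouldCommit(0L));    // first commit goes through
        System.out.println(throttler.shouldCommit(3000L)); // suppressed: too soon
        System.out.println(throttler.shouldCommit(6000L)); // allowed again
    }
}
```

Each indexing worker would call shouldCommit(System.currentTimeMillis()) before firing its curl commit request; picking the interval somewhat larger than the observed warm-up time keeps concurrent warming searchers under the limit.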













Re: exceeded limit of maxWarmingSearchers ERROR

2011-08-14 Thread Bill Bell
OK,

I'll ask the elephant in the room…

What is the difference between the new UpdateHandler from Mark and the
SOLR-RA?

The UpdateHandler works with 4.0; does SOLR-RA work with 4.0 trunk?

Pros/Cons?
















Re: exceeded limit of maxWarmingSearchers ERROR

2011-08-14 Thread Nagendra Nagarajayya

Bill:

The technical details of the NRT implementation in Apache Solr with 
RankingAlgorithm (SOLR-RA) is available here:


http://solr-ra.tgels.com/papers/NRT_Solr_RankingAlgorithm.pdf

(Some changes for Solr 3.x, but for the most part it is as above)

Regarding support for the 4.0 trunk: it should happen sometime soon.

Regards

- Nagendra Nagarajayya
http://solr-ra.tgels.org
http://rankingalgorithm.tgels.org






















Re: exceeded limit of maxWarmingSearchers ERROR

2011-08-14 Thread Bill Bell
I understand.

Have you looked at Mark's patch? From his performance tests, it looks
pretty good.

When would RA work better?

Bill



















Re: exceeded limit of maxWarmingSearchers ERROR

2011-08-14 Thread Nagendra Nagarajayya

Bill:

I did look at Mark's performance tests. They look very interesting.

Here is the Apache Solr 3.3 with RankingAlgorithm NRT performance:
http://solr-ra.tgels.com/wiki/en/Near_Real_Time_Search_ver_3.x

Regards

- Nagendra Nagarajayya
http://solr-ra.tgels.org
http://rankingalgorithm.tgels.org























exceeded limit of maxWarmingSearchers ERROR

2011-08-13 Thread Naveen Gupta
Hi,

Most of the settings are default.

We have single node (Memory 1 GB, Index Size 4GB)

We have a requirement where we are doing very fast commits. This is a kind of
real-time requirement where we are polling many threads from a third party and
indexing them into our system.

We want these results to be available soon.

We are committing for each user (a user may have 10k threads, and inside that 1
thread may have 10 messages). So overall, documents per user will be
around 0.1 million (100,000).

Earlier we were using commitWithin as 10 milliseconds inside the document,
but that was slowing the indexing, and we were not getting any error.

As we removed the commitWithin, indexing became very fast. But after that,
we started experiencing this error in the system.

As I read in many forums, everybody said that this is happening because of a very
fast commit rate, but what is the solution to our problem?

We are using curl to post the data and to commit.

Also, till now we are using the default solrconfig.

Aug 14, 2011 12:12:04 AM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: Error opening new searcher.
exceeded limit of maxWarmingSearchers=2, try again later.
        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1052)
        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:424)
        at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
        at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:177)
        at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:77)
        at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:55)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1360)
        at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        at java.lang.Thread.run(Thread.java:662)