SolrIndexSearcher accumulation

2017-04-07 Thread Gerald Reinhart


Hi,

   We have some custom code that extends SearchHandler in order to:
- perform an extra request
- merge/combine the results of the original request and the extra request

   On Solr 5.x, our code worked very well; now with Solr 6.x we
have the following issue: the number of SolrIndexSearcher instances
keeps increasing (we can see them in the admin view > Plugins / Stats > Core).
As SolrIndexSearcher instances accumulate, we have the following issues:
   - the memory used by Solr increases => OOM after a long
period of time in production
   - some files in the index have been deleted from the file system but
the Solr JVM still holds them open => ("fake") full disk after a long
period of time in production

   We are wondering:
  - what has changed between Solr 5.x and Solr 6.x in the
management of the SolrIndexSearcher?
  - what would be the best way, in a Solr plugin, to perform two
queries and merge the results into a single SolrQueryResponse?

   Thanks a lot.

Gérald, Elodie, Ludo and André



Kelkoo SAS
Société par Actions Simplifiée (simplified joint-stock company)
Share capital: €4,168,964.30
Registered office: 158 Ter Rue du Temple, 75003 Paris
425 093 069 RCS Paris

This message and its attachments are confidential and intended
exclusively for their addressees. If you are not the intended recipient
of this message, please delete it and notify the sender.


Re: [Migration Solr5 to Solr6] Unwanted deleted files references

2017-03-14 Thread Gerald Reinhart


Hi,

   The custom code we have is something like this :

public class MySearchHandler extends SearchHandler {

    @Override
    public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) throws Exception {
        SolrIndexSearcher searcher = req.getSearcher();
        try {
            // Do stuff with the searcher
        } finally {
            req.close();
        }
    }
}

 Despite the fact that we always close the request each time we get
a SolrIndexSearcher from the request, the number of SolrIndexSearcher
instances keeps increasing. Each time a new commit is done on the index, a
new searcher is created (this is normal) but the old one remains. Is
there something wrong with this custom code? Shall we try something
explained here:
http://stackoverflow.com/questions/20515493/solr-huge-number-of-open-searchers
?

Thanks,

Gérald Reinhart (working with Elodie on the subject)

On 03/07/2017 05:45 PM, Elodie Sannier wrote:

Thank you Erick for your answer.

The files are deleted even without a JVM restart but they are still seen
as DELETED by the kernel.

We have custom code, and for the migration to Solr 6.4.0 we added
new code calling req.getSearcher() but without a "close".
We will decrement the reference count on the searcher
(to prevent the searcher from remaining open after a commit) and see if
it fixes the problem.
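For what it's worth, Solr hands searchers out through a reference-counting wrapper (RefCounted&lt;SolrIndexSearcher&gt;): an old searcher can only be closed once every holder has released its reference, so a single missing decref() keeps it (and its deleted files) open forever. Below is a minimal plain-Java sketch of that contract; the class is illustrative, not Solr's actual implementation.

```java
// Illustrative model of a reference-counted searcher; not Solr's RefCounted.
final class CountedSearcher {
    private int refCount = 1;   // the core holds one reference at open time
    private boolean closed = false;

    synchronized void incref() { refCount++; }

    // Every obtained reference must be paired with exactly one decref();
    // the searcher is only really closed when the count reaches zero.
    synchronized void decref() {
        if (--refCount == 0) {
            closed = true;      // here Solr would close the IndexSearcher
        }
    }

    synchronized boolean isClosed() { return closed; }
}

public class RefCountDemo {
    public static void main(String[] args) {
        CountedSearcher ok = new CountedSearcher();
        ok.incref();            // a request obtains the searcher
        ok.decref();            // request done: reference released
        ok.decref();            // core drops its reference after a commit
        System.out.println(ok.isClosed());      // prints: true

        CountedSearcher leaked = new CountedSearcher();
        leaked.incref();        // a request obtains the searcher ...
        leaked.decref();        // ... the core's reference is dropped on
                                // commit, but the request never released
                                // its own reference:
        System.out.println(leaked.isClosed());  // prints: false (leaked)
    }
}
```

In real plugin code the same contract is honoured either by closing the SolrQueryRequest that handed out the searcher, or by calling decref() on the RefCounted obtained from SolrCore.getSearcher().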

Elodie

On 03/07/2017 03:55 PM, Erick Erickson wrote:

Just as a sanity check, if you restart the Solr JVM, do the files
disappear from disk?

Do you have any custom code anywhere in this chain? If so, do you open
any searchers but fail to close them? Although why 6.4 would manifest
the problem while other code wouldn't is a mystery; this is just another
sanity check.

Best,
Erick

On Tue, Mar 7, 2017 at 6:44 AM, Elodie Sannier  wrote:

Hello,

We have migrated from Solr 5.4.1 to Solr 6.4.0 and the disk usage has
increased.
We found hundreds of references to deleted index files being held by Solr.
Before the migration we had 15-30% of disk space used; after the migration
we have 60-90% of disk space used.

We are using Solr Cloud with 2 collections.

The commands applied to the collections are:
- for incremental indexing mode: add, deleteById with a commitWithin of 30
minutes
- for full indexing mode: add, deleteById, commit
- for the switch between incremental and full mode: deleteByQuery, createAlias,
reload
- there is also an autocommit every 15 minutes

We have seen the email "Solr leaking references to deleted files"
(2016-05-31) which describes the same problem, but the bugs mentioned
there are fixed.

We manually tried to force a commit, a reload and an optimize on the
collections, without effect.

Is it a problem of configuration (merge / delete policy) or a possible
regression in the Solr code?

Thank you




--

Elodie Sannier
Software engineer

E: elodie.sann...@kelkoo.fr
Skype: kelkooelodies
T: +33 (0)4 56 09 07 55
A: Parc Sud Galaxie, 6, rue des Méridiens, 38130 Echirolles



--

Gérald Reinhart
Software Engineer

E: gerald.reinh...@kelkoo.com
A: Parc Sud Galaxie, 6, rue des Méridiens, 38130 Echirolles, FR




Re: [Benchmark SOLR] JETTY VS TOMCAT - Jetty 15% slower - need advice to improve Jetty performance

2017-02-01 Thread Gerald Reinhart



We have done some profiling with VisualVM, but nothing obvious appeared.

Thanks, Rick, for the advice.

Gérald Reinhart


On 02/01/2017 11:17 AM, Rick Leir wrote:

There is a profiling tool in Eclipse that can show you a tree of method
calls, with timing information. I have found this useful in the past to
investigate a performance problem. But it might not help if the problem
only occurs at 165 queries per second (is that true?).

cheers -- Rick


On 2017-01-30 04:02 AM, Gerald Reinhart wrote:

Hello,

 In addition to the following settings, we have tried to:
 - force Jetty to use more threads
 - use the same GC options as on our Tomcat
 - change the number of acceptors and selectors

and every time Jetty is slower than Tomcat.

Any advice is welcome

Thanks,

Gérald Reinhart




On 01/27/2017 11:22 AM, Gerald Reinhart wrote:

Hello,

  We are migrating our platform
  from
   - Solr 5.4.1 hosted by Tomcat
  to
   - Solr 5.4.1 standalone (hosted by Jetty)

=> Jetty is 15% slower than Tomcat under the same conditions.


 Here are the details of the benchmarks:

 Context:
  - index with 9,000,000 documents
  - Gatling replays queries extracted from the real traffic
  - server: R410 with 16 virtual CPUs and 96 GB of memory

 Results with 20 clients in parallel during 10 minutes:
  For Tomcat:
  - 165 queries per second
  - 120 ms mean response time

  For Jetty:
  - 139 queries per second
  - 142 ms mean response time

We have checked:
 - the load of the server => same
 - the I/O wait => same
 - the memory used in the JVM => same
 - the JVM GC settings => same

For us, this is a blocker for the migration.

Is it a known issue? (I found this:
http://www.asjava.com/jetty/jetty-vs-tomcat-performance-comparison/)

How can we improve the performance of Jetty? (We have already
followed the recommendations at
http://www.eclipse.org/jetty/documentation/9.2.21.v20170120/optimizing.html)

   Many thanks,


Gérald Reinhart









[Benchmark SOLR] JETTY VS TOMCAT - Jetty 15% slower - need advice to improve Jetty performance

2017-01-30 Thread Gerald Reinhart


Hello,

In addition to the following settings, we have tried to:
- force Jetty to use more threads
- use the same GC options as on our Tomcat
- change the number of acceptors and selectors

   and every time Jetty is slower than Tomcat.

   Any advice is welcome

Thanks,

Gérald Reinhart




On 01/27/2017 11:22 AM, Gerald Reinhart wrote:

Hello,

 We are migrating our platform
 from
  - Solr 5.4.1 hosted by Tomcat
 to
  - Solr 5.4.1 standalone (hosted by Jetty)

=> Jetty is 15% slower than Tomcat under the same conditions.


Here are the details of the benchmarks:

Context:
 - index with 9,000,000 documents
 - Gatling replays queries extracted from the real traffic
 - server: R410 with 16 virtual CPUs and 96 GB of memory

Results with 20 clients in parallel during 10 minutes:
 For Tomcat:
 - 165 queries per second
 - 120 ms mean response time

 For Jetty:
 - 139 queries per second
 - 142 ms mean response time

We have checked:
- the load of the server => same
- the I/O wait => same
- the memory used in the JVM => same
- the JVM GC settings => same

   For us, this is a blocker for the migration.

   Is it a known issue? (I found this:
http://www.asjava.com/jetty/jetty-vs-tomcat-performance-comparison/)

   How can we improve the performance of Jetty? (We have already
followed the recommendations at
http://www.eclipse.org/jetty/documentation/9.2.21.v20170120/optimizing.html)

  Many thanks,


Gérald Reinhart







[Benchmark SOLR] JETTY VS TOMCAT

2017-01-27 Thread Gerald Reinhart

Hello,

   We are migrating our platform
   from
- Solr 5.4.1 hosted by Tomcat
   to
- Solr 5.4.1 standalone (hosted by Jetty)

=> Jetty is 15% slower than Tomcat under the same conditions.


  Here are the details of the benchmarks:

  Context:
   - index with 9,000,000 documents
   - Gatling replays queries extracted from the real traffic
   - server: R410 with 16 virtual CPUs and 96 GB of memory

  Results with 20 clients in parallel during 10 minutes:
   For Tomcat:
   - 165 queries per second
   - 120 ms mean response time

   For Jetty:
   - 139 queries per second
   - 142 ms mean response time

We have checked:
  - the load of the server => same
  - the I/O wait => same
  - the memory used in the JVM => same
  - the JVM GC settings => same

 For us, this is a blocker for the migration.

 Is it a known issue? (I found this:
http://www.asjava.com/jetty/jetty-vs-tomcat-performance-comparison/)

 How can we improve the performance of Jetty? (We have already
followed the recommendations at
http://www.eclipse.org/jetty/documentation/9.2.21.v20170120/optimizing.html)

Many thanks,


Gérald Reinhart




Re: [Solr-5-4-1] Why SolrCloud leader is putting all replicas in recovery at the same time ?

2016-10-13 Thread Gerald Reinhart


Hi Pushkar Raste,

  Thanks for your hints.
  We will try the third solution and keep you posted.

Gérald Reinhart

On 10/07/2016 02:23 AM, Pushkar Raste wrote:
A couple of questions/suggestions:

- This normally happens after a leader election: when a new leader gets elected,
it will force all the nodes to sync with itself.
Check the logs to see when this happens and whether the leader changed. If it did,
you will have to investigate why the leader change takes place.
I suspect the leader goes into a GC pause long enough that ZooKeeper considers
the leader no longer available and initiates a leader election.

- What version of Solr are you using?
SOLR-8586<https://issues.apache.org/jira/browse/SOLR-8586> introduced an
IndexFingerprint check; unfortunately it was broken, and hence a replica would always do a full
index replication. The issue is now fixed in
SOLR-9310<https://issues.apache.org/jira/browse/SOLR-9310>; this should help replicas
recover faster.

- You should also increase the ulog size (the default threshold is 100 docs or 10
tlogs, whichever is hit first). This will again help replicas recover faster
from the tlogs (of course, there is a threshold after which recovering from the
tlog would in fact take longer than copying over all the index files from the
leader).
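If it helps, the tlog retention mentioned above is tuned on the updateLog element in solrconfig.xml; a hedged sketch (the parameter names numRecordsToKeep / maxNumLogsToKeep are Solr's, the values below are only illustrative):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
    <!-- defaults are 100 records / 10 tlogs; raising them lets a replica
         catch up from the tlog instead of copying the whole index -->
    <int name="numRecordsToKeep">10000</int>
    <int name="maxNumLogsToKeep">100</int>
  </updateLog>
</updateHandler>
```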


On Thu, Oct 6, 2016 at 5:23 AM, Gerald Reinhart
<gerald.reinh...@kelkoo.com> wrote:

Hello everyone,

   Our Solr Cloud has worked very well for several months without any significant
changes: the traffic to serve is stable, no major release deployed...

   But randomly, the Solr Cloud leader puts all the replicas in recovery at the
same time for no obvious reason.

   Hence, we cannot serve the queries any more, and the leader is overloaded
while replicating all the indexes to the replicas at the same time, which
eventually implies a downtime of approximately 30 minutes.

   Is there a way to prevent this? Ideally, a configuration specifying a
percentage of replicas that may be put in recovery at the same time?

Thanks,

Gérald, Elodie and Ludovic







[Solr-5-4-1] Why SolrCloud leader is putting all replicas in recovery at the same time ?

2016-10-06 Thread Gerald Reinhart


Hello everyone,

   Our Solr Cloud has worked very well for several months without any significant
changes: the traffic to serve is stable, no major release deployed...

   But randomly, the Solr Cloud leader puts all the replicas in recovery at the
same time for no obvious reason.

   Hence, we cannot serve the queries any more, and the leader is overloaded
while replicating all the indexes to the replicas at the same time, which
eventually implies a downtime of approximately 30 minutes.

   Is there a way to prevent this? Ideally, a configuration specifying a
percentage of replicas that may be put in recovery at the same time?

Thanks,

Gérald, Elodie and Ludovic




Re: [Migration Solr4 to Solr5] Collection reload error

2016-03-07 Thread Gerald Reinhart


Hi,

 To give you some context: we are migrating from Solr 4 to Solr 5;
the client code and the configuration haven't changed, but now we are
facing this problem. We have already checked the commit behaviour
configuration and it seems fine.

Here it is:

Server side, we have 2 collections (main and temp, with blue and green
aliases):

   solrconfig.xml:

   <config>
 <updateHandler>
(...)
<autoCommit>
  <maxTime>90</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
 </updateHandler>
   </config>

Client side, we have 2 different modes:

1 - Full recovery:

- Delete all documents from the temp collection:
  solrClient.deleteByQuery("*:*")

- Add all the new documents to the temp collection (can be more
than 5 million):
  solrClient.add(doc, -1) // commitWithinMs == -1

- Commit when all documents have been added:
  solrClient.commit(false, false) // waitFlush == false, waitSearcher == false

- Swap blue and green using the "create alias" command

- Reload the temp collection to clean the cache. This is
the point at which we have the issue.

2 - Incremental:

- Add or delete documents from the main collection:
   solrClient.add(doc, 180) // commitWithin == 30 min
   solrClient.deleteById(doc, 180) // commitWithin == 30 min

Maybe you will spot something obviously wrong?

Thanks

Gérald and Elodie



On 03/04/2016 12:41 PM, Dmitry Kan wrote:

Hi,

Check the autoCommit and autoSoftCommit nodes in the solrconfig.xml.
Set them to reasonable values. The idea is that if you commit too often,
searchers will be warmed up and thrown away. If at any point in time you
get overlapping commits, there will be several searchers sitting on the
deck.
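The two commit knobs referred to here live in solrconfig.xml; a hedged sketch with illustrative values (a hard commit with openSearcher=false flushes without opening a searcher, so only the soft-commit interval controls how often new searchers are warmed):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- hard commit: durability only, no new searcher -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- soft commit: opens the new searcher; keep it infrequent enough
       that warming finishes before the next commit arrives -->
  <autoSoftCommit>
    <maxTime>300000</maxTime>
  </autoSoftCommit>
</updateHandler>
```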

Dmitry

On Mon, Feb 29, 2016 at 4:20 PM, Gerald Reinhart
<gerald.reinh...@kelkoo.com> wrote:
Hi,

We are facing an issue during a migration from Solr4 to Solr5.

Given
- migration from solr 4.10.4 to 5.4.1
- 2 collections
- cloud with one leader and several replicas
- in solrconfig.xml: maxWarmingSearchers=1
- no code change

When reloading a collection via /admin/collections using SolrJ

Then

2016-02-29 13:42:49,011 [http-8080-3] INFO
org.apache.solr.core.CoreContainer:reload:848  - Reloading SolrCore
'fr_blue' using configuration from collection fr_blue
2016-02-29 13:42:45,428 [http-8080-6] INFO
org.apache.solr.search.SolrIndexSearcher::237  - Opening
Searcher@58b65fc[fr_blue] main
(...)
2016-02-29 13:42:49,077 [http-8080-3] WARN
org.apache.solr.core.SolrCore:getSearcher:1762  - [fr_blue] Error
opening new searcher. exceeded limit of maxWarmingSearchers=1, try again
later.
2016-02-29 13:42:49,091 [http-8080-3] ERROR
org.apache.solr.handler.RequestHandlerBase:log:139  -
org.apache.solr.common.SolrException: Error handling 'reload' action
 at org.apache.solr.handler.admin.CoreAdminHandler.handleReloadAction(CoreAdminHandler.java:770)
 at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:230)
 at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:184)
 at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
 at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:664)
 at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:438)
 at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223)
 at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
 at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
 at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
 at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
 at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
 at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
 at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
 at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
 at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
 at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857)
 at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
 at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
 at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Unable to reload core [fr_blue]
 at org.apache.solr.core.CoreContainer.reload

[ISSUE] backup on a recovering index should fail

2016-03-01 Thread Gerald Reinhart


Hi,

   In short: backup on a recovering index should fail.

   We are using the backup command "http:// ...
/replication?command=backup&location=/tmp" against one server of the
cluster.
   Most of the time there is no issue with this command.

   But in some particular cases, the server can be in recovery mode. In
this case, the command performs a backup of an index that is not
complete and returns HTTP code 200. We end up with a partial index
backup! As a workaround we will run this backup against the leader of
the cloud: the leader is never in recovery mode.

   In our opinion, the backup command on a recovering index should
return HTTP code 503 Service Unavailable (and not HTTP code 200 OK).

   Shall we open an issue, or is this the expected behaviour?

   Thanks,


Gérald and Elodie




[Migration Solr4 to Solr5] Collection reload error

2016-02-29 Thread Gerald Reinhart

Hi,

   We are facing an issue during a migration from Solr4 to Solr5.

Given
   - migration from solr 4.10.4 to 5.4.1
   - 2 collections
   - cloud with one leader and several replicas
   - in solrconfig.xml: maxWarmingSearchers=1
   - no code change

When reloading a collection via /admin/collections using SolrJ

Then

2016-02-29 13:42:49,011 [http-8080-3] INFO
org.apache.solr.core.CoreContainer:reload:848  - Reloading SolrCore
'fr_blue' using configuration from collection fr_blue
2016-02-29 13:42:45,428 [http-8080-6] INFO
org.apache.solr.search.SolrIndexSearcher::237  - Opening
Searcher@58b65fc[fr_blue] main
(...)
2016-02-29 13:42:49,077 [http-8080-3] WARN
org.apache.solr.core.SolrCore:getSearcher:1762  - [fr_blue] Error
opening new searcher. exceeded limit of maxWarmingSearchers=1, try again
later.
2016-02-29 13:42:49,091 [http-8080-3] ERROR
org.apache.solr.handler.RequestHandlerBase:log:139  -
org.apache.solr.common.SolrException: Error handling 'reload' action
at org.apache.solr.handler.admin.CoreAdminHandler.handleReloadAction(CoreAdminHandler.java:770)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:230)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:184)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:664)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:438)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Unable to reload core [fr_blue]
at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:854)
at org.apache.solr.handler.admin.CoreAdminHandler.handleReloadAction(CoreAdminHandler.java:768)
... 20 more
Caused by: org.apache.solr.common.SolrException: Error opening new
searcher. exceeded limit of maxWarmingSearchers=1, try again later.
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1764)
at org.apache.solr.core.SolrCore.reload(SolrCore.java:474)
at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:849)
... 21 more


Thanks


Gérald and Elodie




Re: Update to solr 5 - custom phrase query implementation issue

2016-02-03 Thread Gerald Reinhart

On 02/02/2016 03:20 PM, Erik Hatcher wrote:

On Feb 2, 2016, at 8:57 AM, Elodie Sannier  wrote:

Hello,

We are using solr 4.10.4 and we want to update to 5.4.1.

With solr 4.10.4:
- we extend PhraseQuery with a custom class in order to remove some
terms from phrase queries with phrase slop (overriding the add(Term term, int
position) method)
- in order to use our implementation, we extend ExtendedSolrQueryParser
with a custom class and override the method newPhraseQuery, but with
Solr 5 this method does not exist anymore

How can we do this with solr 5.4.1 ?


You’ll want to override this method, it looks like:

protected Query getFieldQuery(String field, String queryText, int slop)



Hi Erik,

To change the behavior of the PhraseQuery, we can either:
   - change it after the natural cycle: the PhraseQuery is supposed
to be immutable and setSlop(int s) is deprecated... we don't really
want to do this.
   - override the code that actually builds it:
org.apache.solr.search.ExtendedDismaxQParser.getFieldQuery(String field,
String val, int slop) PROTECTED
  uses getAliasedQuery() PROTECTED
which uses getQuery() PRIVATE, which uses new PhraseQuery.Builder()
to create the query...

so it is not easy to override the behavior: we would need to
override/duplicate the getAliasedQuery() and getQuery() methods. We don't
really want to do this either.

So we don't really know where to go.

Thanks,


Gerald (I'm working with Elodie on the subject)





Re: Update to solr 5 - custom phrase query implementation issue

2016-02-03 Thread Gerald Reinhart

Erik, here is some context:

   - migration from Solr 4.10.4 to 5.4.1.
   - we have our own synonym implementation that does not use the Solr
synonym mechanism: at the time, we needed to manage multi-token synonyms
and that wasn't covered by the Lucene features. So basically we
- let's say that "playstation portable" is a synonym of
"psp". We identify "psp" and "playstation portable" as .
- at index time, on every document with "psp" we replace it
by "psp "
- at query time:
   - by extending SearchHandler, we replace the query
"playstation portable" by "((playstation AND portable) OR )".
   - (1) by extending ExtendedSolrQueryParser, we do
not add synonym IDs in the PhraseQuery. We need advice for the migration to
Solr 5.4.1.
   - by extending BooleanQuery, we adjust the
coordination factor. We need advice for the migration to Solr 5.4.1 (see
the other question on the mailing list).

   Hope it's clearer

Thanks

Gérald


(1)
public class MyPhraseQuery extends PhraseQuery {
    private int deltaPosition = 0;

    @Override
    public void add(Term term, int position) {
        String termText = term.text();
        if (!termText.matches(Constants.SYNONYM_PIVOT_ID_REGEX)) {
            super.add(term, position - deltaPosition);
        } else {
            deltaPosition++;
        }
    }
}
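In Solr 5.x a PhraseQuery is immutable and is assembled through PhraseQuery.Builder, so the add() override above has no direct equivalent; the filtering itself, though, is just "skip synonym-ID terms and compact the positions". Below is a plain-Java sketch of that logic, with Lucene types replaced by strings and an illustrative synonym-ID regex standing in for Constants.SYNONYM_PIVOT_ID_REGEX:

```java
import java.util.ArrayList;
import java.util.List;

public class PhraseTermFilter {
    // Illustrative stand-in for Constants.SYNONYM_PIVOT_ID_REGEX
    static final String SYNONYM_ID_REGEX = "syn_\\d+";

    // Drops synonym-ID tokens and compacts the remaining positions,
    // mirroring MyPhraseQuery.add(): every skipped token shifts all
    // later terms one position to the left.
    static List<String> filterTerms(List<String> terms) {
        List<String> kept = new ArrayList<>();
        for (String term : terms) {
            if (!term.matches(SYNONYM_ID_REGEX)) {
                kept.add(term); // its position is now kept.size() - 1
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<String> terms = List.of("playstation", "syn_42", "portable");
        System.out.println(filterTerms(terms)); // prints: [playstation, portable]
    }
}
```

On Solr 5 the same loop could feed PhraseQuery.Builder.add(new Term(field, text), position) from inside an overridden getFieldQuery(), along the lines Erik suggests.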


On 02/03/2016 03:05 PM, Erik Hatcher wrote:

Gerald - I don’t quite understand, sorry - perhaps best if you could post your
code (or some test version you can work with and share here) so we can see what
exactly you’re trying to do. Maybe there are other ways to achieve what you
want, maybe by somehow leveraging a StopFilter-like facility to remove phrase
terms. edismax has some stop word (inclusion, rather than exclusion, though)
magic with the pf2, pf3 and stopwords parameters - maybe it is worth leveraging
something like how that works, or adding some options or pluggability to
the edismax phrase/stopword facility?

  Erik




On Feb 3, 2016, at 6:05 AM, Gerald Reinhart <gerald.reinh...@kelkoo.com> wrote:

On 02/02/2016 03:20 PM, Erik Hatcher wrote:

On Feb 2, 2016, at 8:57 AM, Elodie Sannier <elodie.sann...@kelkoo.fr> wrote:

Hello,

We are using solr 4.10.4 and we want to update to 5.4.1.

With solr 4.10.4:
- we extend PhraseQuery with a custom class in order to remove some
terms from phrase queries with phrase slop (overriding the add(Term term, int
position) method)
- in order to use our implementation, we extend ExtendedSolrQueryParser
with a custom class and override the method newPhraseQuery, but with
Solr 5 this method does not exist anymore

How can we do this with solr 5.4.1 ?

You’ll want to override this method, it looks like:

protected Query getFieldQuery(String field, String queryText, int slop)



Hi Erik,

To change the behavior of the PhraseQuery, we can either:
   - change it after the natural cycle: the PhraseQuery is supposed
to be immutable and setSlop(int s) is deprecated... we don't really
want to do this.
   - override the code that actually builds it:
org.apache.solr.search.ExtendedDismaxQParser.getFieldQuery(String field,
String val, int slop) PROTECTED
  uses getAliasedQuery() PROTECTED
which uses getQuery() PRIVATE, which uses new PhraseQuery.Builder()
to create the query...

so it is not easy to override the behavior: we would need to
override/duplicate the getAliasedQuery() and getQuery() methods. We don't
really want to do this either.

So we don't really know where to go.

Thanks,


Gerald (I'm working with Elodie on the subject)





