Re: Solr8.0.0 Performance Test

2019-05-20 Thread Kayak28
Hello, Shawn, Toke Eskildsen and Solr Community:

> Since version 7.5, optimize with TieredMergePolicy (the default policy)
> respects the maximum segment size, which defaults to 5GB.
Thank you for your reply.
Indeed, the total index size was approximately 9GB.
So, as you said, TieredMergePolicy respected the maximum segment size.
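For my own reference, I believe (this is just my assumption from reading the
reference guide, so please correct me if I am wrong) that the 5GB ceiling can
be changed per core through the merge policy factory in solrconfig.xml, for
example:

  <indexConfig>
    <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
      <!-- maximum size of a merged segment, in MB (default 5000, i.e. ~5GB) -->
      <double name="maxMergedSegmentMB">5000.0</double>
    </mergePolicyFactory>
  </indexConfig>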

> Your PDF attachment did not make it to the list.  We cannot see it.  The
> mailing list rarely lets attachments through.
Thank you for your response.
I am sorry for sending an attachment; I did not know that the mailing list
does not let attachments through.
For the next opportunity to share table-formatted data, what is the best
way to share it with all of you?
Should I share a Google spreadsheet URL? How do people usually do this?

> The one place where there is a real difference is with String faceting
> as >= 2 segments means that String ordinals must be coordinated between
> segments

Let me confirm my understanding (since I am not a native speaker of English
and I am new to Solr):
if we send a facet query (like the one below) to two Solr instances, one with
multiple segments and the other with only one segment, will we see a
difference in performance?

"http://localhost:8983/solr/collection1/
select?q=*:*&
fl=id,cat,manu,price&
indent=true&
rows=2&
facet=true&
facet.field=price&
facet.sort=index&
facet.limit=10"

Can we configure the up-front or the on-the-fly facet method?
If it is possible, I would like to see the difference in order to understand
Solr's behavior.
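For example, my guess (just an assumption on my side -- please correct me if
this is not the right knob) is that the facet.method parameter selects the
faceting strategy, so I could compare runs of the same query with different
values, e.g.:

"http://localhost:8983/solr/collection1/
select?q=*:*&
facet=true&
facet.field=cat&
facet.method=fc"

versus the same request with facet.method=enum.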


Sincerely,
Kaya Ota






On Mon, May 20, 2019 at 18:38 Toke Eskildsen  wrote:

> On Sun, 2019-05-19 at 13:36 -0600, Shawn Heisey wrote:
> > I would not expect to see a really noticeable performance increase
> > by going from two segments to one.
>
> The one place where there is a real difference is with String faceting
> as >= 2 segments means that String ordinals must be coordinated between
> segments. Depending on faceting method this is done up-front (with an
> obvious first-call time penalty, some memory overhead to hold the
> mapping and a slight running overhead for consulting the map) or on-
> the-fly (with non-surprising running overhead).
>
> It becomes relevant with large indexes and/or setups where performance
> is very important.
>
> - Toke Eskildsen, Royal Danish library
>
>
>


Re: How to use encrypted username password.

2019-05-20 Thread Joel Bernstein
Typically, basic auth credentials are protected by encrypting the connection with SSL/TLS.
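As far as I know (this is my understanding, not something spelled out in the
thread), the file referenced by -Dsolr.httpclient.config holds the credentials
in plain text, roughly:

  httpBasicAuthUser=solradmin
  httpBasicAuthPassword=secret

so it is the transport-level encryption (SSL/TLS) that keeps them from being
read on the wire, and the file itself should be protected with filesystem
permissions.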



Joel Bernstein
http://joelsolr.blogspot.com/


On Mon, May 20, 2019 at 6:49 PM Gangadhar Gangadhar 
wrote:

> Hi,
>
>I’m trying to explore if there is any way to encrypt -basicauth or
> encrypt username and password in -Dsolr.httpclient.config.
>
> Thanks
> Gangadhar
>


How to use encrypted username password.

2019-05-20 Thread Gangadhar Gangadhar
Hi,

   I’m trying to explore if there is any way to encrypt -basicauth or
encrypt username and password in -Dsolr.httpclient.config.

Thanks
Gangadhar


ConcurrentModificationException when nesting phrases inside a proximity search (ComplexPhraseQueryParser)

2019-05-20 Thread Matthew Kay
Trying to perform a search with the following structure:

query: "term1 (term2 OR \"multiword phrase term\")"~3
url: solr/collection/select?defType=complexphrase&df=text&q.op=AND&q="term1 
(term2 OR \"multiword phrase term\")"~3

This would hopefully match phrases like
- term1 and term2
- term1 and also term2
- term1 with multiword phrase term

This query structure is especially useful when the set of terms in the 
parentheses is very long, making query expansion difficult.
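For illustration (this is roughly what a manual expansion would look like, not
a query we actually run), the equivalent without nesting would be something
like:

"term1 term2"~3 OR "term1 multiword phrase term"~3

which quickly becomes unwieldy once the parenthesized list grows to dozens of
alternatives.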

Testing this query in both 7.6.0 and 8.1, the following exception is thrown:

null:java.util.ConcurrentModificationException
at java.base/java.util.ArrayList$Itr.checkForComodification(ArrayList.java:1042)
at java.base/java.util.ArrayList$Itr.next(ArrayList.java:996)
at 
org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser.parse(ComplexPhraseQueryParser.java:133)
at 
org.apache.solr.search.ComplexPhraseQParserPlugin$ComplexPhraseQParser.parse(ComplexPhraseQParserPlugin.java:164)
at org.apache.solr.search.QParser.getQuery(QParser.java:173)
at 
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:160)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:272)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:531)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:291)
at org.eclipse.jetty.io.ssl.SslConnection$3.succeeded(SslConnection.java:151)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680)
at java.base/java.lang.Thread.run(Thread.java:834)

I know that this is thrown only when the nested terms have quotes. Is there a 
better way to execute a query like this?

Thanks,

Matt



Re: Sort on docValue field is slow.

2019-05-20 Thread Erick Erickson
Shawn’s right. You have a mixed index, some segments have docValues and some 
don’t. So yes, you do need to reindex everything before drawing conclusions. To 
make matters worse, when you start indexing documents, new segments with 
docValues will eventually be merged with segments that don’t have docValues, 
leading to significant inconsistencies.

As with all sorting, you can tell nothing from one test. The first time a field 
is accessed for sorting it must be read from disk in either case (docValues 
true or false). The difference is that with docValues=false, the “uninverted” 
structure must be built from the indexed values on the Java heap. In the 
docValues=true case, it’s just de-serialized from disk into the OS memory.

Point is that after you’ve completely re-indexed everything (and I would, 
indeed, use a new collection) the first time you use the field it’ll take extra 
time. You can’t draw any valid conclusions until you average over quite a 
number of queries or throw out the first few times.

Best,
Erick

> On May 20, 2019, at 8:30 AM, Shawn Heisey  wrote:
> 
> On 5/20/2019 8:59 AM, Ashwin Ramesh wrote:
>> Hi Shawn,
>> Thanks for the prompt response.
>> 1. date type def - > positionIncrementGap="0" />
>> 2. The field is brand new. I added it to schema.xml, uploaded to ZK &
>> reloaded the collection. After that we started indexing the few thousand.
>> Did we still need to do a full reindex to a fresh collection?
>> 3. It is the only difference. I am testing the raw URL call timing
>> difference with and without the extra sort.
> 
> As I understand it, the docValues data will not be correct for the existing 
> documents if they are not all reindexed.  If I am wrong, I am sure somebody 
> will correct me.  Although I would not expect that to make things slow, the 
> internal Lucene details are not something I have a lot of insight into.
> 
> Thanks,
> Shawn



Re: Sort on docValue field is slow.

2019-05-20 Thread Shawn Heisey

On 5/20/2019 8:59 AM, Ashwin Ramesh wrote:

Hi Shawn,

Thanks for the prompt response.

1. date type def - 

2. The field is brand new. I added it to schema.xml, uploaded to ZK &
reloaded the collection. After that we started indexing the few thousand.
Did we still need to do a full reindex to a fresh collection?

3. It is the only difference. I am testing the raw URL call timing
difference with and without the extra sort.


As I understand it, the docValues data will not be correct for the 
existing documents if they are not all reindexed.  If I am wrong, I am 
sure somebody will correct me.  Although I would not expect that to make 
things slow, the internal Lucene details are not something I have a lot 
of insight into.


Thanks,
Shawn


Re: Sort on docValue field is slow.

2019-05-20 Thread Ashwin Ramesh
Hi Shawn,

Thanks for the prompt response.

1. date type def - 

2. The field is brand new. I added it to schema.xml, uploaded to ZK &
reloaded the collection. After that we started indexing the few thousand.
Did we still need to do a full reindex to a fresh collection?

3. It is the only difference. I am testing the raw URL call timing
difference with and without the extra sort.

Hope this helps,

Regards,

Ash



On Mon, May 20, 2019 at 11:17 PM Shawn Heisey  wrote:

> On 5/20/2019 6:25 AM, Ashwin Ramesh wrote:
> > Hoping to get advice on a specific issue - We have a collection of 50M
> > documents. We recently added a featuredAt field defined as such -
> >
> > <field name="featuredAt" type="date" ... required="false"
> > multiValued="false" docValues="true"/>
>
> What is the fieldType definition for "date"?  We cannot assume that you
> have left this the same as Solr's sample configs.
>
> > This field is sparsely populated such that only a small subset (3-5
> thousand
> > currently) have been tagged with that field.
>
> Did you completely reindex, or just index those few thousand records?
> When changing fields related to docValues, you must completely delete
> the old index and reindex.  That's just how docValues works.
>
> > We have a business case where we want to order this content by most
> > recently featured -> least recently featured -> the rest of the content
> in
> > any order. However adding the `sort=featuredAt desc` param results in
> qTime
> >> 5000 (our hard timeout is 5000).
>
> Is the definition of the sort parameter the ONLY difference?  Are you
> querying on the new field?  Can you share the entire query URL, or the
> code that produced it if you're using a Solr client?  What is the before
> QTime?
>
> Thanks,
> Shawn
>









Re: Sort on docValue field is slow.

2019-05-20 Thread Shawn Heisey

On 5/20/2019 6:25 AM, Ashwin Ramesh wrote:

Hoping to get advice on a specific issue - We have a collection of 50M
documents. We recently added a featuredAt field defined as such -




What is the fieldType definition for "date"?  We cannot assume that you 
have left this the same as Solr's sample configs.



This field is sparsely populated such that only a small subset (3-5 thousand
currently) have been tagged with that field.


Did you completely reindex, or just index those few thousand records? 
When changing fields related to docValues, you must completely delete 
the old index and reindex.  That's just how docValues works.



We have a business case where we want to order this content by most
recently featured -> least recently featured -> the rest of the content in
any order. However adding the `sort=featuredAt desc` param results in qTime
> 5000 (our hard timeout is 5000).


Is the definition of the sort parameter the ONLY difference?  Are you 
querying on the new field?  Can you share the entire query URL, or the 
code that produced it if you're using a Solr client?  What is the before 
QTime?


Thanks,
Shawn


Sort on docValue field is slow.

2019-05-20 Thread Ashwin Ramesh
Hello everybody,

Hoping to get advice on a specific issue - We have a collection of 50M
documents. We recently added a featuredAt field defined as such -
<field name="featuredAt" type="date" ... required="false" multiValued="false"
docValues="true"/>

This field is sparsely populated such that only a small subset (3-5 thousand
currently) have been tagged with that field.

We have a business case where we want to order this content by most
recently featured -> least recently featured -> the rest of the content in
any order. However adding the `sort=featuredAt desc` param results in qTime
> 5000 (our hard timeout is 5000).
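
(For context, the requests being timed look roughly like
/select?q=*:*&fl=id&rows=10&sort=featuredAt desc -- a simplified sketch, not
the exact URL; I have omitted the other parameters.)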

The request handler processing this request is defined as follows:

  
*
  
  
id
edismax
10
id
  
  
elevator
  


We hydrate content from a separate store.

Any advice on how to improve the performance of this request handler and the
sorting would be appreciated.

System/Architecture Specs:
Solr 7.4
8 Shards
TLOG / PULLs

Thank you & Regards,

Ash









How to define nested document schema

2019-05-20 Thread derrick cui
Hi, I have a nested document; how should I define the schema for it?
And how do I use addChildDocument in solr-solrj?
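Is it roughly something like the following? (Just my guess at a minimal SolrJ
sketch based on the javadocs -- the collection name and field names here are
made up, and I have not verified that the schema side is complete.)

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class NestedDocSketch {
  public static void main(String[] args) throws Exception {
    SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build();

    // Parent document
    SolrInputDocument parent = new SolrInputDocument();
    parent.addField("id", "book-1");
    parent.addField("title_s", "Some Book");

    // Child document, nested under the parent
    SolrInputDocument child = new SolrInputDocument();
    child.addField("id", "book-1-review-1");
    child.addField("comment_s", "Great read");

    parent.addChildDocument(child);

    client.add(parent);   // sends the parent together with its nested child
    client.commit();
    client.close();
  }
}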
Thanks
Derrick

Sent from Yahoo Mail for iPhone


question on MLT params

2019-05-20 Thread Dmitry Kan
Hello group,

Been building a POC with MLT -- a great feature so far.

Wanted to clarify my understanding of the mlt.maxntp parameter:

mlt.maxntp -- the documentation describes it as the "maximum number of tokens
to parse in each example doc field that is not stored with TermVector
support." Will the tokens be parsed in their order of appearance in the stored
field (i.e. the same order as the raw input), or will some prioritization such
as TF*IDF be applied?
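
(For context, the kind of request I have in mind is roughly the following --
a simplified sketch, not my actual query or field names:

http://localhost:8983/solr/collection1/
select?q=id:doc1&
mlt=true&
mlt.fl=body&
mlt.mindf=1&
mlt.mintf=1&
mlt.maxntp=5000

where body is stored but does not have termVectors enabled.)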

Thanks,

Dmitry

-- 
Dmitry Kan
Luke Toolbox: http://github.com/DmitryKey/luke
Blog: http://dmitrykan.blogspot.com
Twitter: http://twitter.com/dmitrykan
Insider Solutions: https://semanticanalyzer.info


Clustering on a Query grouped ?

2019-05-20 Thread Bruno Mannina
Dear Solr Users,



I would like to know if it is possible to do clustering on a grouped query.



In my project, I get only one document per group (because all the other
documents in the same group are just equivalents with a different Id).

So I want to do clustering with this result.



Having only one document per group allows me to increase the maximum number of
distinct documents used to build the clusters.

Each document has a Family Id; this field is used to create the groups.
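
Concretely (just to illustrate what I mean -- this parameter combination is my
guess at how it would be expressed, and the field name is simplified), I would
like to run something like:

...&q=<my query>&group=true&group.field=familyId&group.limit=1&clustering=true

and have the clusters computed over the single document kept for each group.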



Thanks a lot for your help.



Cordialement, Best Regards

Bruno Mannina

  www.matheo-software.com

  www.patent-pulse.com

Tél. +33 0 970 738 743

Mob. +33 0 634 421 817







Re: Graph query extremely slow

2019-05-20 Thread Toke Eskildsen
On Sun, 2019-05-19 at 14:34 -0400, Rahul Goswami wrote:
> Just following up in case my previous email got lost in the big stack
> of queries. I would appreciate any help on optimizing the graph query, or
> any pointers on the direction to investigate.

This seems related to https://issues.apache.org/jira/browse/SOLR-13013

If it is easy for you to test, you could try Solr 8 as that should work
better for random access of DocValues.

- Toke Eskildsen, Royal Danish Library




Re: Solr8.0.0 Performance Test

2019-05-20 Thread Toke Eskildsen
On Sun, 2019-05-19 at 13:36 -0600, Shawn Heisey wrote:
> I would not expect to see a really noticeable performance increase
> by going from two segments to one.

The one place where there is a real difference is with String faceting
as >= 2 segments means that String ordinals must be coordinated between
segments. Depending on faceting method this is done up-front (with an
obvious first-call time penalty, some memory overhead to hold the
mapping and a slight running overhead for consulting the map) or on-
the-fly (with non-surprising running overhead).

It becomes relevant with large indexes and/or setups where performance
is very important.

- Toke Eskildsen, Royal Danish library




WordDelimiterGraphFactory with preserveOriginal issue

2019-05-20 Thread rodio
Hi everybody,

I'm using Solr 8.0.0 and I'm stuck on a weird behaviour that I cannot solve
by myself.
This is my fieldType config:

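(The XML of the config was stripped in transit; roughly, it is a TextField
whose index and query analyzers both apply WordDelimiterGraphFilterFactory
with preserveOriginal=1. The sketch below is only an approximation -- the
tokenizer and the other filters are placeholders, not my exact config.)

<fieldType name="text_wdg" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterGraphFilterFactory" preserveOriginal="1"
            generateWordParts="1" generateNumberParts="1"/>
    <!-- recommended after the graph filter at index time -->
    <filter class="solr.FlattenGraphFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterGraphFilterFactory" preserveOriginal="1"
            generateWordParts="1" generateNumberParts="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>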

The problem is that I need preserveOriginal=1 in both analyzers, and the
results are not right when I launch a query that also includes another field.

For example, if I run this query:

idWeb: X AND name:(Leimhzolz 18x600x200 mm)

The parsed query is:

+idWeb:X +(name:leimhzolz (name:18x600x200 (+name:18 +name:x +name:600
+name:x +name:200)) name:mm)

Only docs with "18x600x200" or "mm" are scored. There is no score for "18",
"x", or "600".

If I run this query:

name:(Leimhzolz 18x600x200 mm)

The parsed query is:

name:leimhzolz (name:18x600x200 (+name:18 +name:x +name:600 +name:x
+name:200)) name:mm

In this case, there are docs with "18", "x", or "600" that get a score > 0.

I have tried all kinds of combinations without success.

I would be very glad if anyone has a solution for this matter

Many thanks in advance

Kind regards






--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html