Frange Alternative in the filter query

2019-01-23 Thread Aman deep singh

Hi,
I have created a value source parser. To use it in a filter query I was using
the frange function, but frange gives really bad performance (4x worse than
current); my value source parser performs about the same when used in sort and
fl. Performance degrades only when I use it in a filter query via frange.
Is there any alternative to frange, or anything else I can do, so that
performance doesn't degrade?

I have already tried introducing a cost factor (cost=200) in the frange so
that the filter is applied as a post filter.
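
For reference, making frange run as a post filter needs cache=false in
addition to a cost of 100 or more; a sketch of the fq syntax, where
myValueSource(...) stands in for the custom value source parser (the name is
illustrative, not from this thread):

```
fq={!frange l=0 cost=200 cache=false}myValueSource(fieldA,fieldB)
```

Without cache=false, the cost parameter alone does not turn frange into a
post filter, so the value source is still evaluated against every candidate
document rather than only the documents that pass the other filters.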

Regards,
Aman Deep Singh

Re: Difference in queryString and Parsed query

2019-01-21 Thread Aman deep singh
Hi Lavanya,
This is probably due to the stemming filter: it is removing the 'y'
character, since the stemmer has a rule for words ending in 'y'.
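
As a sketch (not the original configuration): in the schema posted in this
thread, the index analyzer stems with KStemFilterFactory while the query
analyzer runs PorterStemFilterFactory first, and Porter's rule for words
ending in 'y' is what produces "batteri". Using the same stemmer on both
sides avoids the mismatch:

```xml
<analyzer type="query">
  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
  <!-- KStem only, matching the index side; Porter would stem "battery" to "batteri" -->
  <filter class="solr.KStemFilterFactory"/>
</analyzer>
```

A full reindex is needed after any analyzer change so that index-time and
query-time terms line up again.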


Regards,
Aman Deep Singh

> On 21-Jan-2019, at 5:43 PM, Mikhail Khludnev  wrote:
> 
> querystring is what goes into the QParser; parsedquery is
> LuceneQuery.toString()
> 
> On Mon, Jan 21, 2019 at 3:04 PM Lavanya Thirumalaisami
>  wrote:
> 
>> Hi,
>> Our Solr search is not returning expected results for keywords ending with
>> the character 'y',
>> for example keywords like battery, way, accessory, etc.
>> I tried debugging the Solr query in the Solr admin console and I find there
>> is a difference between the query string and the parsed query:
>> "querystring":"battery","parsedquery":"batteri",
>> I also find that if I search omitting the character 'y' I get all the
>> results.
>> This happens only for keywords ending with 'y'; for most others we do not
>> have this issue.
>> Could anyone please help me understand why the keyword gets changed,
>> especially the last character? Is there any issue in my field type
>> definition?
>> While indexing the data we use a text field type, defined as follows
>>
>> <fieldType name="..." class="solr.TextField" positionIncrementGap="100">
>>   <analyzer type="index">
>>     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
>>     <filter class="solr.LowerCaseFilterFactory"/>
>>     <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
>>     <filter class="solr.StopFilterFactory" words="stopwords.txt"/>
>>     <filter class="solr.WordDelimiterFilterFactory" catenateWords="1" generateNumberParts="0" generateWordParts="0" preserveOriginal="1" splitOnCaseChange="0" splitOnNumerics="0"/>
>>     <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
>>     <filter class="solr.KStemFilterFactory"/>
>>     <filter class="solr.EdgeNGramFilterFactory" maxGramSize="255" minGramSize="1"/>
>>   </analyzer>
>>   <analyzer type="query">
>>     <tokenizer class="..."/>
>>     <filter class="solr.LowerCaseFilterFactory"/>
>>     <filter class="solr.PorterStemFilterFactory"/>
>>     <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
>>     <filter class="solr.StopFilterFactory" words="stopwords.txt"/>
>>     <filter class="solr.WordDelimiterFilterFactory" catenateWords="0" generateNumberParts="0" generateWordParts="0" preserveOriginal="1" splitOnCaseChange="0" splitOnNumerics="0"/>
>>     <filter class="solr.KStemFilterFactory"/>
>>   </analyzer>
>> </fieldType>
>> 
>> Regards,Lavanya
> 
> 
> 
> -- 
> Sincerely yours
> Mikhail Khludnev



Solr Expand throws NPE along with elevate component

2018-02-20 Thread Aman Deep singh
001-1,TOA-15631-2-1,TOA-15632-5-1,TOA-15632-00039-1,TOA-15633-1-1=json=true>

Note: if I remove some of the ids it starts working, and the issue does not
occur for all queries.

Regards,
Aman Deep Singh

Solr scale function scale single doc to min value

2018-02-01 Thread Aman Deep Singh
Hi, I'm using the scale function and facing an issue: when my result set
contains only one result, or multiple results with the same value, scale maps
the value to the min of the range instead of the high value. Any idea how I
can get the high value when only one result is present, or when multiple
results share the same value?
Solr version = 6.6.0
Scale function = scale(query({!type=edismax v=$q}),0,100)

Regards
Aman deep singh


Re: Solr mm is field Level in case sow is false

2017-11-28 Thread Aman Deep singh
HI Steve,
I can't use a copy field because I have multiple types of fields, which use
different kinds of analysis; examples are:
1. Normally tokenized fields (normal fields)
2. Word-delimited fields
3. Synonym fields (synonyms can be applied to one or two fields, not all,
according to our requirements)
4. N-gram fields (model-related fields, partial word matches)

> On 29-Nov-2017, at 8:30 AM, Steve Rowe <sar...@gmail.com> wrote:
> 
> Hi Aman, see my responses inline below.
> 
>> On Nov 28, 2017, at 9:11 PM, Aman Deep Singh <amandeep.coo...@gmail.com> 
>> wrote:
>> 
>> Thanks Steve,
>> I got it, but my problem is that you can't give every field the same analysis.
> 
> I don’t understand: why can’t you use copy fields with all the same analysis?
> 
>> Is there any chance that sow and mm will work together properly? I don't see
>> this in the future pipeline either, as there is no JIRA related to it.
> 
> I wrote up a description of an idea I had about addressing it in a reply to 
> Doug Turnbull's thread on this subject, linked from my blog: from 
> <http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201703.mbox/%3cf9297676-de1a-4c2d-928d-76fdbe75f...@gmail.com%3e>:
> 
>> In implementing the SOLR-9185 changes, I considered a compromise approach 
>> to the term-centric
>> / field-centric axis you describe in the case of differing field analysis 
>> pipelines: finding
>> common source-text-offset bounded slices in all per-field queries, and then 
>> producing dismax
>> queries over these slices; this is a generalization of what happens in the 
>> sow=true case,
>> where slice points are pre-determined by whitespace.  However, it looked 
>> really complicated
>> to maintain source text offsets with queries (if you’re interested, you can 
>> see an example
>> of the kind of thing I’m talking about in my initial patch on 
>> <https://issues.apache.org/jira/browse/LUCENE-7533>, which I ultimately 
>> decided against committing), so I decided to go with per-field dismax when
>> structural differences are encountered in the per-field queries.  While I 
>> won’t be doing
>> any work on this short term, I still think the above-described approach 
>> could improve the
>> situation in the sow=false/differing-field-analysis case.  Patches welcome!
> 
> --
> Steve
> www.lucidworks.com
> 

Thanks,
Aman Deep Singh

Re: Solr mm is field Level in case sow is false

2017-11-28 Thread Aman Deep Singh
Thanks Steve,
I got it, but my problem is that you can't give every field the same analysis.
Is there any chance that sow and mm will work together properly? I don't see
this in the future pipeline either, as there is no JIRA related to it.

Thanks ,
Aman deep singh


On 28-Nov-2017 8:02 PM, "Steve Rowe" <sar...@gmail.com> wrote:

Hi Aman,

From the last bullet in the “Caveats and remaining issues” section of my
query-time multi-word synonyms blog post:
<https://lucidworks.com/2017/04/18/multi-word-synonyms-solr-adds-query-time-support/>, in part:

> sow=false changes the queries edismax produces over multiple fields when
> any of the fields’ query-time analysis differs from the other fields’
[...]
> This can change results in general, but quite significantly when combined
> with the mm (min-should-match) request parameter: since min-should-match
> applies per field instead of per term, missing terms in one field’s analysis
> won’t disqualify docs from matching.

One effective way of addressing this issue is to make all queried fields
use the same analysis, e.g. by copy-fielding the subset of fields that are
different into ones that are the same, and then querying against the target
fields instead.
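
A minimal sketch of that copy-field approach (the field and type names here
are illustrative, not from the thread):

```xml
<fieldType name="text_common" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
<!-- destinations share one analysis chain; stored=false since they exist only for querying -->
<field name="nameSearchCommon" type="text_common" indexed="true" stored="false"/>
<field name="brandSearchCommon" type="text_common" indexed="true" stored="false"/>
<copyField source="nameSearch" dest="nameSearchCommon"/>
<copyField source="brandSearch" dest="brandSearchCommon"/>
```

Querying with qf=nameSearchCommon^7 brandSearchCommon then gives every queried
field identical query-time analysis, so edismax can build term-centric
queries and mm applies per term rather than per field.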

--
Steve
www.lucidworks.com

> On Nov 28, 2017, at 5:25 AM, Aman Deep singh <amandeep.coo...@gmail.com>
wrote:
>
> Hi,
> When sow is set to false, the Solr query is generated a little
differently compared to sow=true
>
> Solr version -6.6.1
>
> User query -Asus ZenFone Go ZB5 Smartphone
> mm is set to 100%
> qf=nameSearch^7 brandSearch
>
> field definition
>
> 1. nameSearch—
> [field type definition lost in the archive]
>
> 2. brandSearch
> [field type definition lost in the archive]
>
>
> with sow=false
> "parsedquery":"(+DisjunctionMaxQuery((((brandSearch:asus
brandSearch:zenfone brandSearch:go brandSearch:zb5
brandSearch:smartphone)~5) | ((nameSearch:asus nameSearch:zen
nameSearch:fone nameSearch:go nameSearch:zb nameSearch:5
nameSearch:smartphone)~7)^7.0)))/no_coord",
>
> with sow=true
> "parsedquery":"(+(DisjunctionMaxQuery((brandSearch:asus |
(nameSearch:asus)^7.0)) DisjunctionMaxQuery((brandSearch:zenfone |
((nameSearch:zen nameSearch:fone)~2)^7.0)) DisjunctionMaxQuery((brandSearch:go
| (nameSearch:go)^7.0)) DisjunctionMaxQuery((brandSearch:zb5 |
((nameSearch:zb nameSearch:5)~2)^7.0))
DisjunctionMaxQuery((brandSearch:smartphone
| (nameSearch:smartphone)^7.0)))~5)/no_coord",
>
>
>
> If you look at the difference in the parsed queries: in the sow=false case
mm works at field level, while with sow=true mm works across the fields.
>
> We need to use sow=false, as it is the only way to use multi-word synonyms.
> Any idea why it behaves this way, and any way to fix it so that mm
will work across the fields in qf?
>
> Thanks,
> Aman Deep Singh


Solr mm is field Level in case sow is false

2017-11-28 Thread Aman Deep singh
Hi,
When sow is set to false, the Solr query is generated a little differently
compared to sow=true.

Solr version -6.6.1

User query -Asus ZenFone Go ZB5 Smartphone
mm is set to 100%
qf=nameSearch^7 brandSearch

field definition

1. nameSearch—
[field type definition lost in the archive]
2. brandSearch
[field type definition lost in the archive]

with sow=false
"parsedquery":"(+DisjunctionMaxQuery((((brandSearch:asus brandSearch:zenfone
brandSearch:go brandSearch:zb5 brandSearch:smartphone)~5) | ((nameSearch:asus
nameSearch:zen nameSearch:fone nameSearch:go nameSearch:zb nameSearch:5
nameSearch:smartphone)~7)^7.0)))/no_coord",

with sow=true
"parsedquery":"(+(DisjunctionMaxQuery((brandSearch:asus | 
(nameSearch:asus)^7.0)) DisjunctionMaxQuery((brandSearch:zenfone | 
((nameSearch:zen nameSearch:fone)~2)^7.0)) DisjunctionMaxQuery((brandSearch:go 
| (nameSearch:go)^7.0)) DisjunctionMaxQuery((brandSearch:zb5 | ((nameSearch:zb 
nameSearch:5)~2)^7.0)) DisjunctionMaxQuery((brandSearch:smartphone | 
(nameSearch:smartphone)^7.0)))~5)/no_coord",



If you look at the difference in the parsed queries: in the sow=false case mm
works at field level, while with sow=true mm works across the fields.

We need to use sow=false, as it is the only way to use multi-word synonyms.
Any idea why it behaves this way, and any way to fix it so that mm will
work across the fields in qf?

Thanks,
Aman Deep Singh

Re: mm is not working if you have same term multiple times in query

2017-09-22 Thread Aman Deep Singh
We can't use shingles, as a user can query "lock and lock" or any other
combination; although "and" and some other words could be handled by
stop-word processing, we can't rely on that completely.

On 22-Sep-2017 7:00 PM, "Emir Arnautović" <emir.arnauto...@sematext.com>
wrote:

It seems to me that any OOTB solution would include some query parsing on
the client side.
If those are adjacent values, you could try playing with shingles to get it to
work.
Brainstorming: a custom token filter that would assign an occurrence
number to each token, e.g.
“foo lock bar lock” would be indexed as foo1 lock1 bar1 lock2, but that
would mess up scoring…

Maybe there is something specific about your usecase that could be used to
make it work.

Emir
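
The function-query idea raised in this thread can be sketched as a frange
post filter over termfreq (both are standard Solr function queries; the field
name and term come from the example further down in the thread):

```
fq={!frange l=2 cost=200 cache=false}termfreq(name,'lock')
```

This keeps only documents in which 'lock' occurs at least twice in the name
field; the required count (here 2) would have to be computed from the user
query on the client side, which matches the caveat that OOTB solutions need
some client-side query parsing.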

> On 22 Sep 2017, at 15:17, Aman Deep Singh <amandeep.coo...@gmail.com>
wrote:
>
> Hi Emir,
> Thanks for the reply,
> I understand how dismax/edismax works; my problem is that I don't want to
> show results with only one token.
> I cannot use a phrase query here, because a phrase query doesn't work with a
> single-word query, so we would need to change the search request (qf or
> pf) dynamically. I will definitely try the function query.
>
> Thanks,
> Aman Deep Singh
>
> On 22-Sep-2017 6:25 PM, "Emir Arnautović" <emir.arnauto...@sematext.com>
> wrote:
>
>> Hi Aman,
>> You have wrong expectations: edismax does respect mm; it's just that it
>> is met. If you take a look at the parsed query, it'll be something like:
>> +(((name:lock) (name:lock))~2)
>> And from the dismax perspective it found both terms. It will not start
>> searching for the next term after the first is found, or look at term
>> frequency.
>> You can use a phrase query to make sure that lock is close to lock, or use
>> a function query to make sure the tf requirement is met.
>> Not sure what your use case is.
>>
>> HTH,
>> Emir
>>
>>> On 22 Sep 2017, at 12:52, Aman Deep Singh <amandeep.coo...@gmail.com>
>> wrote:
>>>
>>> Hi,
>>> I'm using Solr 6.6.0. I have set mm to 100%, but when the search term is
>>> repeated, the mm param is not honoured.
>>>
>>> I have 2 docs in index
>>> Doc1-
>>> name=lock
>>> Doc 2-
>>> name=lock lock
>>>
>>> Now when i'm quering the solr with query
>>> *http://localhost:8983/solr/test2/select?defType=dismax&qf=name&debugQuery=on&mm=100%25&q=lock%20lock&wt=json
>>> <http://localhost:8983/solr/test2/select?defType=dismax&qf=name&debugQuery=on&mm=100%25&q=lock%20lock&wt=json>*
>>> then it is returning both results, but it should return only Doc 2, as
>>> the term frequency in the query is 2 while Doc 1 has a frequency of 1 for
>>> the 'lock' term.
>>> Any idea what to do to avoid getting Doc 1 in the result set? I don't
>>> want the user to get Doc 1.
>>> Schema
>>> <field name="..." indexed="true" stored="true"/>
>>> <fieldType name="..." class="solr.TextField" autoGeneratePhraseQueries="false" positionIncrementGap="100">
>>>   <analyzer type="index">
>>>     <tokenizer class="..."/>
>>>     <filter class="solr.LowerCaseFilterFactory"/>
>>>   </analyzer>
>>>   <analyzer type="query">
>>>     <tokenizer class="solr.StandardTokenizerFactory"/>
>>>     <filter class="solr.ManagedSynonymFilterFactory" managed="synonyms_gdn"/>
>>>     <filter class="solr.LowerCaseFilterFactory"/>
>>>   </analyzer>
>>> </fieldType>
>>>
>>> There are no synonyms added either.
>>>
>>> Thanks,
>>> Aman Deep Singh
>>
>>


Re: mm is not working if you have same term multiple times in query

2017-09-22 Thread Aman Deep Singh
Hi Emir,
Thanks for the reply,
I understand how dismax/edismax works; my problem is that I don't want to
show results with only one token.
I cannot use a phrase query here, because a phrase query doesn't work with a
single-word query, so we would need to change the search request (qf or
pf) dynamically. I will definitely try the function query.

Thanks,
Aman Deep Singh

On 22-Sep-2017 6:25 PM, "Emir Arnautović" <emir.arnauto...@sematext.com>
wrote:

> Hi Aman,
> You have wrong expectations: edismax does respect mm; it's just that it is
> met. If you take a look at the parsed query, it'll be something like:
> +(((name:lock) (name:lock))~2)
> And from the dismax perspective it found both terms. It will not start
> searching for the next term after the first is found, or look at term frequency.
> You can use a phrase query to make sure that lock is close to lock, or use a
> function query to make sure the tf requirement is met.
> Not sure what your use case is.
>
> HTH,
> Emir
>
> > On 22 Sep 2017, at 12:52, Aman Deep Singh <amandeep.coo...@gmail.com>
> wrote:
> >
> > Hi,
> > I'm using Solr 6.6.0. I have set mm to 100%, but when the search term is
> > repeated, the mm param is not honoured.
> >
> > I have 2 docs in index
> > Doc1-
> > name=lock
> > Doc 2-
> > name=lock lock
> >
> > Now when i'm quering the solr with query
> > *http://localhost:8983/solr/test2/select?defType=dismax&qf=name&debugQuery=on&mm=100%25&q=lock%20lock&wt=json
> > <http://localhost:8983/solr/test2/select?defType=dismax&qf=name&debugQuery=on&mm=100%25&q=lock%20lock&wt=json>*
> > then it is returning both results, but it should return only Doc 2, as
> > the term frequency in the query is 2 while Doc 1 has a frequency of 1 for
> > the 'lock' term.
> > Any idea what to do to avoid getting Doc 1 in the result set? I don't
> > want the user to get Doc 1.
> > Schema
> > <field name="..." indexed="true" stored="true"/>
> > <fieldType name="..." class="solr.TextField" autoGeneratePhraseQueries="false" positionIncrementGap="100">
> >   <analyzer type="index">
> >     <tokenizer class="..."/>
> >     <filter class="solr.LowerCaseFilterFactory"/>
> >   </analyzer>
> >   <analyzer type="query">
> >     <tokenizer class="solr.StandardTokenizerFactory"/>
> >     <filter class="solr.ManagedSynonymFilterFactory" managed="synonyms_gdn"/>
> >     <filter class="solr.LowerCaseFilterFactory"/>
> >   </analyzer>
> > </fieldType>
> >
> > There are no synonyms added either.
> >
> > Thanks,
> > Aman Deep Singh
>
>


mm is not working if you have same term multiple times in query

2017-09-22 Thread Aman Deep Singh
Hi,
I'm using Solr 6.6.0. I have set mm to 100%, but when the search term is
repeated, the mm param is not honoured.

I have 2 docs in index
Doc1-
name=lock
Doc 2-
name=lock lock

Now when i'm quering the solr with query
*http://localhost:8983/solr/test2/select?defType=dismax&qf=name&debugQuery=on&mm=100%25&q=lock%20lock&wt=json
<http://localhost:8983/solr/test2/select?defType=dismax&qf=name&debugQuery=on&mm=100%25&q=lock%20lock&wt=json>*
then it is returning both results, but it should return only Doc 2, as the
term frequency in the query is 2 while Doc 1 has a frequency of 1 for the
'lock' term.
Any idea what to do to avoid getting Doc 1 in the result set? I don't want
the user to get Doc 1.
Schema

  <tokenizer class="solr.StandardTokenizerFactory"/>

There are no synonyms added either.

Thanks,
Aman Deep Singh


Re: Case sensitive synonym value

2017-08-23 Thread Aman Deep Singh
Hi all,
Any update on this issue?
-Aman

On 10-Aug-2017 8:27 AM, "Aman Deep Singh" <amandeep.coo...@gmail.com> wrote:

> Yes,
> Ignore case is set to true and it is working fine
>
>
> On 10-Aug-2017 12:43 AM, "Erick Erickson" <erickerick...@gmail.com> wrote:
>
> You set ignoreCase="true" though, right?
>
> On Wed, Aug 9, 2017 at 8:39 AM, Aman Deep Singh
> <amandeep.coo...@gmail.com> wrote:
> > Hi Erick,
> > I tried even before going to lowercase factory value is in lowercase
> >
> > this is the analysis tab result for the query  iwatch
> > where {"iwatch":["iWatch","appleWatch"]} is configured in managed
> synonym
> > ST
> > iwatch
> > SF
> > *applewatch*
> > *iwatch*
> > PRF
> > applewatch
> > iwatch
> > PRF
> > applewatch
> > iwatch
> > WDF
> > applewatch
> > iwatch
> > LCF
> > applewatch
> > iwatch
> >
> > Thanks,
> > Aman Deep Singh
> >
> > On 09-Aug-2017 8:46 PM, "Erick Erickson" <erickerick...@gmail.com>
> wrote:
> >
> >> Admin/analysis is a good place to start figuring this out. For
> >> instance, do you have lowerCaseFilter configured in your analysis
> >> chain somewhere that's doing the conversion?
> >>
> >> Best,
> >> Erick
> >>
> >> On Wed, Aug 9, 2017 at 5:34 AM, Aman Deep Singh
> >> <amandeep.coo...@gmail.com> wrote:
> >> > Hi,
> >> > I'm trying to use the ManagedSynonyms with *ignoreCase=true*
> >> > It is working fine for the identifying part but the problem comes in
> >> > synonym value
> >> > Suppose i have a synonym *iwatch ==>appleWatch,iWatch*
> >> > If the user query is *iwatch (in any case)*  it was identifying the
> >> synonym
> >> > and replace the token with *applewatch and iwatch*  (in
> lowercase),which
> >> i
> >> > didn't want
> >> > I need the synonyms to comes with the same case what i have configured
> >> > i.e. *appleWatch and iWatch*
> >> > Any idea on how to so that .
> >> >
> >> > Thanks,
> >> > Aman Deep Singh
> >>
>
>
>


Re: Arabic words search in solr

2017-08-13 Thread Aman Deep Singh
Try the edge n-gram filter:
https://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.EdgeNGramFilterFactory
I think it will help you solve the problem.
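
A sketch of what an edge n-gram index analyzer could look like for this case
(the field type name and gram sizes are illustrative; the
ArabicNormalizationFilterFactory line is an assumption that may also help
with the Hamza variants mentioned later in the thread):

```xml
<fieldType name="text_ar_prefix" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.ArabicNormalizationFilterFactory"/>
    <!-- index word prefixes so a partial word like "الات" can match "الاتصال" -->
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="20"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.ArabicNormalizationFilterFactory"/>
  </analyzer>
</fieldType>
```

The n-grams are applied only at index time; the query side tokenizes and
normalizes without n-gramming, so each typed prefix matches the stored grams.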

On Sun, Aug 13, 2017 at 7:08 PM mohanmca01 <mohanmc...@gmail.com> wrote:

> Hi Aman Deep Singh,
>
> Thanks for your update... I will update the status after completing the
> testing.
>
> I need one more favor from you; can you check the scenario below:
>
> We are getting results when using the AND operator between the words.
>
> Below is the example:
>
> Scenario 1:
>
> {
>   "responseHeader": {
> "status": 0,
> "QTime": 1,
> "params": {
>   "indent": "true",
>   "q": "bizNameAr:(مسقط AND الاتصال)",
>   "_": "1501998206658",
>   "wt": "json"
> }
>   },
>   "response": {
> "numFound": 44,
> "start": 0,
> "docs": [
>   {
> "id": "56367",
> "bizNameAr": "بنك مسقط - مركز الاتصال",
> "_version_": 1574621133647380500
>   },
>   {
> "id": "27224",
> "bizNameAr": "بلدية مسقط -  - بلدية مسقط - مركز الاتصالات",
> "_version_": 1574621132817956900
>   },
>   {
> "id": "148922",
> "bizNameAr": "بنك مسقط - ميثاق - مركز الاتصال",
> "_version_": 1574621136335929300
>   },
>   {
> "id": "23695",
> "bizNameAr": "قوة السلطان الخاصة - مركز الإتصالات  - مسقط",
> "_version_": 1574621132683739100
>   },
>   {
> "id": "34992",
> "bizNameAr": "طوارئ الكهرباء - محافظة مسقط - مركز الاتصال",
> "_version_": 1574621133116801000
>   },
>   {
> "id": "96500",
> "bizNameAr": "شركة مسقط لتوزيع الكهرباء( ام اي دي سي)  - مركز
> الاتصال",
> "_version_": 1574621134575370200
>   },
>   {
> "id": "23966",
> "bizNameAr": "ديوان البلاط السلطاني - القصر - مسقط - المديرية
> العامة
> للاتصالات ونظم المعلومات -  - المديرية العامة للاتصالات ونظم المعلومات -
> البدالة",
> "_version_": 1574621132692127700
>   },
>   {
> "id": "24005",
> "bizNameAr": "ديوان البلاط السلطاني - القصر - مسقط - المديرية
> العامة
> للاتصالات ونظم المعلومات -  - مدير عام الاتصالات ونظم المعلومات -",
> "_version_": 1574621132694225000
>   },
>   {
> "id": "24026",
> "bizNameAr": "ديوان البلاط السلطاني - القصر - مسقط - المديرية
> العامة
> للاتصالات ونظم المعلومات -  - مساعد مدير عام الاتصالات ونظم المعلومات -",
> "_version_": 1574621132694225000
>   },
>   {
> "id": "24096",
> "bizNameAr": "ديوان البلاط السلطاني - القصر - مسقط - المديرية
> العامة
> للاتصالات ونظم المعلومات -  - مدير دائرة الاتصالات والصيانة -",
> "_version_": 1574621132697370600
>   }
> ]
>   }
> }
>
>
> Scenario 2:.
>
> {
>   "responseHeader": {
> "status": 0,
> "QTime": 1,
> "params": {
>   "indent": "true",
>   "q": "bizNameAr:(مسقط AND الات)",
>   "_": "1501998438821",
>   "wt": "json"
> }
>   },
>   "response": {
> "numFound": 0,
> "start": 0,
> "docs": []
>   }
> }
>
> We expect the same results in scenario 2 as well, where I am not typing
> the second word fully, as in scenario 2's input.
>
>
> Below are the inputs used in both scenarios:
>
> Scenario 1:
> First word: مسقط
> Second word: الاتصال
>
> Scenario 2:
> First word: مسقط
> Second word: الات
>
> However, in our current production environment both of the above scenarios
> work fine, but we have an issue with the “Hamza” character: we do not get
> results unless we type “Hamza” when it is present.
>
> {
>   "responseHeader": {
> "status": 0,
> "QTime": 9,
> "params": {
>   "fl": "businessNmBl",
>   "indent": "true",
>   "q": "businessNmBl:شرطة إزكي",
>   "_": "1501997897849",
>   "wt": "json"
> }
>   },
>   "response": {
> "numFound": 1,
> "start": 0,
> "docs": [
>   {
> "businessNmBl": "شرطة عمان السلطانية - قيادة شرطة محافظة الداخلية
> -
> - مركز شرطة إزكي"
>   }
> ]
>   }
> }
>
> Thanks,
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Arabic-words-search-in-solr-tp4317733p4350392.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


RE: Arabic words search in solr

2017-08-13 Thread Aman Deep Singh
You can configure mm either in the request handler in solrconfig.xml or pass
it as a request parameter alongside the user query.
For more detail refer to
https://cwiki.apache.org/confluence/display/solr/The+DisMax+Query+Parser

An example of a sample handler is:

<requestHandler ...>
  <lst name="defaults">
    <int name="rows">10</int>
    <str name="qf">searchFields</str>
    <str name="mm">100%</str>
    <str name="defType">dismax</str>
  </lst>
</requestHandler>

On 13-Aug-2017 6:43 PM, "mohanmca01"  wrote:

Hi Aman Deep,

Thanks for the information. Where exactly in the request handler do I add
mm=100%? Can you please share a sample snippet? Thanks in advance.






--
View this message in context:
http://lucene.472066.n3.nabble.com/Arabic-words-search-in-solr-tp4317733p4350389.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Case sensitive synonym value

2017-08-09 Thread Aman Deep Singh
Yes,
Ignore case is set to true and it is working fine


On 10-Aug-2017 12:43 AM, "Erick Erickson" <erickerick...@gmail.com> wrote:

You set ignoreCase="true" though, right?

On Wed, Aug 9, 2017 at 8:39 AM, Aman Deep Singh
<amandeep.coo...@gmail.com> wrote:
> Hi Erick,
> I tried it; even before reaching the lowercase filter factory, the value is already in lowercase
>
> this is the analysis tab result for the query  iwatch
> where {"iwatch":["iWatch","appleWatch"]} is configured in managed synonym
> ST
> iwatch
> SF
> *applewatch*
> *iwatch*
> PRF
> applewatch
> iwatch
> PRF
> applewatch
> iwatch
> WDF
> applewatch
> iwatch
> LCF
> applewatch
> iwatch
>
> Thanks,
> Aman Deep Singh
>
> On 09-Aug-2017 8:46 PM, "Erick Erickson" <erickerick...@gmail.com> wrote:
>
>> Admin/analysis is a good place to start figuring this out. For
>> instance, do you have lowerCaseFilter configured in your analysis
>> chain somewhere that's doing the conversion?
>>
>> Best,
>> Erick
>>
>> On Wed, Aug 9, 2017 at 5:34 AM, Aman Deep Singh
>> <amandeep.coo...@gmail.com> wrote:
>> > Hi,
>> > I'm trying to use the ManagedSynonyms with *ignoreCase=true*
>> > It is working fine for the identifying part but the problem comes in
>> > synonym value
>> > Suppose i have a synonym *iwatch ==>appleWatch,iWatch*
>> > If the user query is *iwatch (in any case)*  it was identifying the
>> synonym
>> > and replace the token with *applewatch and iwatch*  (in
lowercase),which
>> i
>> > didn't want
>> > I need the synonyms to comes with the same case what i have configured
>> > i.e. *appleWatch and iWatch*
>> > Any idea on how to so that .
>> >
>> > Thanks,
>> > Aman Deep Singh
>>


Re: Case sensitive synonym value

2017-08-09 Thread Aman Deep Singh
Hi Erick,
I tried it; even before reaching the lowercase filter factory, the value is already in lowercase.

this is the analysis tab result for the query  iwatch
where {"iwatch":["iWatch","appleWatch"]} is configured in managed synonym
ST
iwatch
SF
*applewatch*
*iwatch*
PRF
applewatch
iwatch
PRF
applewatch
iwatch
WDF
applewatch
iwatch
LCF
applewatch
iwatch

Thanks,
Aman Deep Singh

On 09-Aug-2017 8:46 PM, "Erick Erickson" <erickerick...@gmail.com> wrote:

> Admin/analysis is a good place to start figuring this out. For
> instance, do you have lowerCaseFilter configured in your analysis
> chain somewhere that's doing the conversion?
>
> Best,
> Erick
>
> On Wed, Aug 9, 2017 at 5:34 AM, Aman Deep Singh
> <amandeep.coo...@gmail.com> wrote:
> > Hi,
> > I'm trying to use the ManagedSynonyms with *ignoreCase=true*
> > It is working fine for the identifying part but the problem comes in
> > synonym value
> > Suppose i have a synonym *iwatch ==>appleWatch,iWatch*
> > If the user query is *iwatch (in any case)*  it was identifying the
> synonym
> > and replace the token with *applewatch and iwatch*  (in lowercase),which
> i
> > didn't want
> > I need the synonyms to comes with the same case what i have configured
> > i.e. *appleWatch and iWatch*
> > Any idea on how to so that .
> >
> > Thanks,
> > Aman Deep Singh
>


Case sensitive synonym value

2017-08-09 Thread Aman Deep Singh
Hi,
I'm trying to use managed synonyms with *ignoreCase=true*.
It works fine for the matching part, but the problem is with the synonym
values.
Suppose I have a synonym *iwatch ==> appleWatch,iWatch*.
If the user queries *iwatch* (in any case), the synonym is identified and the
token is replaced with *applewatch* and *iwatch* (in lowercase), which I
didn't want.
I need the synonyms to come out with the same case I configured,
i.e. *appleWatch* and *iWatch*.
Any idea how to do that?

Thanks,
Aman Deep Singh


RE: Arabic words search in solr

2017-08-06 Thread Aman Deep Singh
Use mm=100% in the request handler.
It will give the same AND functionality.
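
As a request-parameter sketch (the qf value is taken from the field queried
elsewhere in this thread; %25 is the URL-encoded percent sign):

```
q=مسقط الاتصال&qf=bizNameAr&defType=dismax&mm=100%25
```

With mm=100%, every term in the query must match, which reproduces the
behaviour of putting AND between the words without rewriting the query.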


On 06-Aug-2017 11:59 AM, "mohanmca01"  wrote:

Hello Allison,

Thank you for the information.

I referred to your slide 33; yes, we are looking for the same kind of results
and solution.

Would you please guide us on how to achieve this?

Also, we would like to know whether, instead of putting the AND operator
between the words, there is another way of doing this at the configuration
level.

thanks



--
View this message in context: http://lucene.472066.n3.
nabble.com/Arabic-words-search-in-solr-tp4317733p4349259.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Facet is not working while querying with group

2017-06-23 Thread Aman Deep Singh
1. No, it is a schema with some dynamic fields, but the facet fields are
proper fields.
2. No copyField destination is stored; all are set to stored=false.
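
For what it's worth, grouped faceting (group.facet) needs single-valued
SORTED doc values on the facet field, which matches the "expected=SORTED"
error quoted later in this thread; a sketch of a field definition that
satisfies that, assuming isBlibliShipping is meant to be a single-valued
flag:

```xml
<field name="isBlibliShipping" type="string" indexed="true" stored="false"
       docValues="true" multiValued="false"/>
```

With a string (StrField) type and docValues=true, Lucene stores
SortedDocValues rather than NUMERIC ones, and the index would need to be
rebuilt from scratch after such a schema change, as Shawn notes below.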



On Fri, Jun 23, 2017 at 10:21 PM Erick Erickson <erickerick...@gmail.com>
wrote:

> OK, new collection.
>
> 1> With schemaless? When you add a document in schemaless mode, it
> makes some guesses that may not play nice later.
>
> 2> Are you storing the _destination_ of any copyField? Atomic updates
> do odd things if you set stored="true" for fields that are
> destinations for atomic updates, specifically accumulate values in
> them. You should set stored="false" for all destinations of copyField
> directives.
>
> Best,
> Erick
>
> On Fri, Jun 23, 2017 at 9:23 AM, Aman Deep Singh
> <amandeep.coo...@gmail.com> wrote:
> > No Shawn,
> > I download the latest solr again then run without installing by command
> > ./bin/solr -c
> > after upload the fresh configset and create the new collection
> > Then create a single document in solr
> > after do atomic update
> > and the same error occurs again.
> >
> >
> > On Fri, Jun 23, 2017 at 7:53 PM Shawn Heisey <apa...@elyograg.org>
> wrote:
> >
> >> On 6/20/2017 11:01 PM, Aman Deep Singh wrote:
> >> > If I am using docValues=false getting this exception
> >> > java.lang.IllegalStateException: Type mismatch: isBlibliShipping was
> >> > indexed with multiple values per document, use SORTED_SET instead at
> >> >
> >>
> org.apache.solr.uninverting.FieldCacheImpl$SortedDocValuesCache.createValue(FieldCacheImpl.java:799)
> >> > at
> >> >
> >>
> org.apache.solr.uninverting.FieldCacheImpl$Cache.get(FieldCacheImpl.java:187)
> >> > at
> >> >
> >>
> org.apache.solr.uninverting.FieldCacheImpl.getTermsIndex(FieldCacheImpl.java:767)
> >> > at
> >> >
> >>
> org.apache.solr.uninverting.FieldCacheImpl.getTermsIndex(FieldCacheImpl.java:747)
> >> > at
> >> > But if docValues=true then getting this error
> >> > java.lang.IllegalStateException: unexpected docvalues type NUMERIC for
> >> > field 'isBlibliShipping' (expected=SORTED). Re-index with correct
> >> docvalues
> >> > type. at
> org.apache.lucene.index.DocValues.checkField(DocValues.java:212)
> >> > at org.apache.lucene.index.DocValues.getSorted(DocValues.java:264) at
> >> >
> >>
> org.apache.lucene.search.grouping.term.TermGroupFacetCollector$SV.doSetNextReader(TermGroupFacetCollector.java:129)
> >> > at
> >> >
> >>
> org.apache.lucene.search.SimpleCollector.getLeafCollector(SimpleCollector.java:33)
> >> > at
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:659)
> >> at
> >> > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:472)
> at
> >> >
> >>
> org.apache.solr.request.SimpleFacets.getGroupedCounts(SimpleFacets.java:692)
> >> > at
> >> >
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:476)
> >> > at
> >> >
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:405)
> >> > at
> >> >
> >>
> org.apache.solr.request.SimpleFacets.lambda$getFacetFieldCounts$0(SimpleFacets.java:803)
> >> >
> >> > It Only appear in case when we facet on group query normal facet works
> >> fine
> >> >
> >> > Also appears only when we atomically update the document.
> >>
> >> These errors look like problems that appear when you *change* the
> >> schema, but try to use that new schema with an existing Lucene index
> >> directory.  As Erick already mentioned, certain changes in the schema
> >> *require* completely deleting the index directory and
> >> restarting/reloading, or starting with a brand new index.  Deleting all
> >> documents instead of wiping out the index may leave Lucene remnants with
> >> incorrect metadata for the new schema.
> >>
> >> What you've said elsewhere in the thread is that you're starting with a
> >> brand new collection ... but the error messages suggest that we're still
> >> dealing with an index where you had one schema setting, indexed some
> >> data, then changed the schema without completely wiping out the index
> >> from the disk.
> >>
> >> Thanks,
> >> Shawn
> >>
> >>
>


Re: Facet is not working while querying with group

2017-06-23 Thread Aman Deep Singh
No Shawn,
I downloaded the latest Solr again and ran it without installing, using the command
./bin/solr -c
I then uploaded a fresh configset and created a new collection,
created a single document in Solr,
performed an atomic update,
and the same error occurred again.


On Fri, Jun 23, 2017 at 7:53 PM Shawn Heisey <apa...@elyograg.org> wrote:

> On 6/20/2017 11:01 PM, Aman Deep Singh wrote:
> > If I am using docValues=false getting this exception
> > java.lang.IllegalStateException: Type mismatch: isBlibliShipping was
> > indexed with multiple values per document, use SORTED_SET instead at
> >
> org.apache.solr.uninverting.FieldCacheImpl$SortedDocValuesCache.createValue(FieldCacheImpl.java:799)
> > at
> >
> org.apache.solr.uninverting.FieldCacheImpl$Cache.get(FieldCacheImpl.java:187)
> > at
> >
> org.apache.solr.uninverting.FieldCacheImpl.getTermsIndex(FieldCacheImpl.java:767)
> > at
> >
> org.apache.solr.uninverting.FieldCacheImpl.getTermsIndex(FieldCacheImpl.java:747)
> > at
> > But if docValues=true then getting this error
> > java.lang.IllegalStateException: unexpected docvalues type NUMERIC for
> > field 'isBlibliShipping' (expected=SORTED). Re-index with correct
> docvalues
> > type. at org.apache.lucene.index.DocValues.checkField(DocValues.java:212)
> > at org.apache.lucene.index.DocValues.getSorted(DocValues.java:264) at
> >
> org.apache.lucene.search.grouping.term.TermGroupFacetCollector$SV.doSetNextReader(TermGroupFacetCollector.java:129)
> > at
> >
> org.apache.lucene.search.SimpleCollector.getLeafCollector(SimpleCollector.java:33)
> > at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:659)
> at
> > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:472) at
> >
> org.apache.solr.request.SimpleFacets.getGroupedCounts(SimpleFacets.java:692)
> > at
> > org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:476)
> > at
> > org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:405)
> > at
> >
> org.apache.solr.request.SimpleFacets.lambda$getFacetFieldCounts$0(SimpleFacets.java:803)
> >
> > It Only appear in case when we facet on group query normal facet works
> fine
> >
> > Also appears only when we atomically update the document.
>
> These errors look like problems that appear when you *change* the
> schema, but try to use that new schema with an existing Lucene index
> directory.  As Erick already mentioned, certain changes in the schema
> *require* completely deleting the index directory and
> restarting/reloading, or starting with a brand new index.  Deleting all
> documents instead of wiping out the index may leave Lucene remnants with
> incorrect metadata for the new schema.
>
> What you've said elsewhere in the thread is that you're starting with a
> brand new collection ... but the error messages suggest that we're still
> dealing with an index where you had one schema setting, indexed some
> data, then changed the schema without completely wiping out the index
> from the disk.
>
> Thanks,
> Shawn
>
>


Re: Facet is not working while querying with group

2017-06-20 Thread Aman Deep Singh
Hi Shawn,
If I use docValues=false, I get this exception:
java.lang.IllegalStateException: Type mismatch: isBlibliShipping was
indexed with multiple values per document, use SORTED_SET instead at
org.apache.solr.uninverting.FieldCacheImpl$SortedDocValuesCache.createValue(FieldCacheImpl.java:799)
at
org.apache.solr.uninverting.FieldCacheImpl$Cache.get(FieldCacheImpl.java:187)
at
org.apache.solr.uninverting.FieldCacheImpl.getTermsIndex(FieldCacheImpl.java:767)
at
org.apache.solr.uninverting.FieldCacheImpl.getTermsIndex(FieldCacheImpl.java:747)
at
But if docValues=true, then I get this error:
java.lang.IllegalStateException: unexpected docvalues type NUMERIC for
field 'isBlibliShipping' (expected=SORTED). Re-index with correct docvalues
type. at org.apache.lucene.index.DocValues.checkField(DocValues.java:212)
at org.apache.lucene.index.DocValues.getSorted(DocValues.java:264) at
org.apache.lucene.search.grouping.term.TermGroupFacetCollector$SV.doSetNextReader(TermGroupFacetCollector.java:129)
at
org.apache.lucene.search.SimpleCollector.getLeafCollector(SimpleCollector.java:33)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:659) at
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:472) at
org.apache.solr.request.SimpleFacets.getGroupedCounts(SimpleFacets.java:692)
at
org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:476)
at
org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:405)
at
org.apache.solr.request.SimpleFacets.lambda$getFacetFieldCounts$0(SimpleFacets.java:803)

It only appears when we facet on a group query; a normal facet works fine.

It also appears only after we atomically update the document.
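To make the reproduction concrete, here is a minimal sketch of the two requests involved, assuming a local Solr at localhost:8983 and the collection and field names used in this thread (the base URL and document id are illustrative, not authoritative):

```python
import json
from urllib.parse import urlencode

# Hypothetical base URL; adjust for your cluster.
BASE = "http://localhost:8983/solr/productCollection"

def atomic_update_payload(doc_id, field, value):
    """Build the JSON body for an atomic (partial) update of one field."""
    return json.dumps([{"id": doc_id, field: {"set": value}}])

def grouped_facet_query(facet_field, group_field):
    """Build the /select URL combining grouping with group.facet --
    the combination that triggers the reported error."""
    params = {
        "q": "*:*", "wt": "json", "group": "true",
        "group.field": group_field, "group.facet": "true",
        "facet": "true", "facet.field": facet_field,
    }
    return BASE + "/select?" + urlencode(params)

payload = atomic_update_payload("SKU-1", "isBlibliShipping", True)
url = grouped_facet_query("isBlibliShipping", "productCode")
```

Sending the payload to `/update`, then issuing the grouped-facet query, matches the sequence described above.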





On Tue, Jun 20, 2017 at 5:46 PM Shawn Heisey <apa...@elyograg.org> wrote:

> On 6/20/2017 12:07 AM, Aman Deep Singh wrote:
> > Again the same problem started to occur and I haven't change any schema
> > It's only coming to the Numeric data types only (tint,tdouble) and that
> too
> > in group query only
> > If I search with string field type it works fine.
> >
> > Steps which i have followed
> >
> >1. drop the old collection
> >2. create the new Collection
> >3. Do the full reindex
> >4. do atomic update on some fields multiple times
>
> If you're getting exactly the same error (Type mismatch:
> isBlibliShipping was indexed with multiple values per document, use
> SORTED_SET instead), then it means the schema on the index is changing
> after you have indexed data, in a way that won't work -- probably by
> changing multiValued or docValues.
>
> Thanks,
> Shawn
>
>


Re: Could not load collection from ZK:

2017-06-20 Thread Aman Deep Singh
Sorry Shawn,
The entire stacktrace didn't get copied; I have put the full stacktrace at
https://www.dropbox.com/s/zf8b87m24ei2ils/solr%20exception2?dl=0

Note: I have shaded the Solr library under com.gdn.solr620, so all Solr
classes will appear as com.gdn.solr620.org.apache.solr.*



On Tue, Jun 20, 2017 at 8:09 PM Shawn Heisey <apa...@elyograg.org> wrote:

> On 6/20/2017 8:25 AM, Aman Deep Singh wrote:
> > This error is coming in the application which is using solrj to
> communicate
> > to the solr
> > full stacktrace is
> >
> > Request processing failed; nested exception is
> com.gdn.solr620.org.apache.
> > solr.common.SolrException: Could not load collection from ZK:
> > productCollection
> 
> > Top command images are at
> >
> https://www.dropbox.com/sh/vxorykk8tmb6amb/AABYIcFuRyfSnlkS6I-Tr5HNa?dl=0
>
> Are there any "Caused by" clauses that come after that stacktrace?  It
> doesn't seem to be complete.  There are no Solr classes shown, so I
> can't see where in Solr code the exception occurred.  If it happened in
> Solr code, then there should be more to the error message.
>
> Thanks,
> Shawn
>
>


Re: Give boost only if entire value is present in Query

2017-06-20 Thread Aman Deep Singh
It was not matching results on that particular field;
below is the debug output.

(+(DisjunctionMaxQuery((((nameSearchNoSyn:7 nameSearchNoSyn:armour)~2)^9.0 |
((brandSearch:7 brandSearch:armour)~2) | ((nameSearch:7
nameSearch:armour)~2)^4.0 | (keywords:7 armour)^11.0 | ((descSearchNoSyn:7
descSearchNoSyn:armour)~2)^2.0 | ((Synonym(brandSearchQueryShingle:7
brandSearchQueryShingle:7armour) brandSearchQueryShingle:armour)~2)^10.0 |
((descriptionSearch:7 descriptionSearch:armour)~2) | (categoryKeywords:7
armour)^11.0)) DisjunctionMaxQuery(((nameSearch:"7 armour"~5)^9.0 |
(brandSearch:"7 armour"~5)^8.0 | (descriptionSearch:"7 armour"~5)^2.0))
DisjunctionMaxQuery(((nameSearch:"7 armour")^9.0 | (descriptionSearch:"7
armour")^2.0)))/no_coord


+(((nameSearchNoSyn:7 nameSearchNoSyn:armour)~2)^9.0 | ((brandSearch:7
brandSearch:armour)~2) | ((nameSearch:7 nameSearch:armour)~2)^4.0 |
(keywords:7 armour)^11.0 | ((descSearchNoSyn:7
descSearchNoSyn:armour)~2)^2.0 | ((Synonym(brandSearchQueryShingle:7
brandSearchQueryShingle:7armour) brandSearchQueryShingle:armour)~2)^10.0 |
((descriptionSearch:7 descriptionSearch:armour)~2) | (categoryKeywords:7
armour)^11.0) ((nameSearch:"7 armour"~5)^9.0 | (brandSearch:"7
armour"~5)^8.0 | (descriptionSearch:"7 armour"~5)^2.0) ((nameSearch:"7
armour")^9.0 | (descriptionSearch:"7 armour")^2.0)



231.43768 = sum of: 122.80731 = max of: 122.80731 = sum of: 39.3418 =
weight(nameSearchNoSyn:7 in 11675) [SchemaSimilarity], result of: 39.3418 =
score(doc=11675,freq=1.0 = termFreq=1.0 ), product of: 9.0 = boost
3.6432905 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq
+ 0.5)) from: 38829.0 = docFreq 1483961.0 = docCount 1.199825 = tfNorm,
computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength /
avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 = parameter k1 0.75 =
parameter b 11.993984 = avgFieldLength 7.11 = fieldLength 83.465515 =
weight(nameSearchNoSyn:armour in 11675) [SchemaSimilarity], result of:
83.465515 = score(doc=11675,freq=1.0 = termFreq=1.0 ), product of: 9.0 =
boost 7.729415 = idf, computed as log(1 + (docCount - docFreq + 0.5) /
(docFreq + 0.5)) from: 652.0 = docFreq 1483961.0 = docCount 1.199825 =
tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b *
fieldLength / avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 = parameter k1
0.75 = parameter b 11.993984 = avgFieldLength 7.11 = fieldLength
10.923981 = sum of: 5.468917 = weight(brandSearch:7 in 11675)
[SchemaSimilarity], result of: 5.468917 = score(doc=11675,freq=1.0 =
termFreq=1.0 ), product of: 7.810959 = idf, computed as log(1 + (docCount -
docFreq + 0.5) / (docFreq + 0.5)) from: 600.0 = docFreq 1481730.0 =
docCount 0.7001595 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 *
(1 - b + b * fieldLength / avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 =
parameter k1 0.75 = parameter b 1.2507185 = avgFieldLength 2.56 =
fieldLength 5.4550633 = weight(brandSearch:armour in 11675)
[SchemaSimilarity], result of: 5.4550633 = score(doc=11675,freq=1.0 =
termFreq=1.0 ), product of: 7.7911725 = idf, computed as log(1 + (docCount
- docFreq + 0.5) / (docFreq + 0.5)) from: 612.0 = docFreq 1481730.0 =
docCount 0.7001595 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 *
(1 - b + b * fieldLength / avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 =
parameter k1 0.75 = parameter b 1.2507185 = avgFieldLength 2.56 =
fieldLength 54.581028 = sum of: 17.485245 = weight(nameSearch:7 in 11675)
[SchemaSimilarity], result of: 17.485245 = score(doc=11675,freq=1.0 =
termFreq=1.0 ), product of: 4.0 = boost 3.6432905 = idf, computed as log(1
+ (docCount - docFreq + 0.5) / (docFreq + 0.5)) from: 38829.0 = docFreq
1483961.0 = docCount 1.199825 = tfNorm, computed as (freq * (k1 + 1)) /
(freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from: 1.0 =
termFreq=1.0 1.2 = parameter k1 0.75 = parameter b 11.993984 =
avgFieldLength 7.11 = fieldLength 37.095783 = weight(nameSearch:armour
in 11675) [SchemaSimilarity], result of: 37.095783 =
score(doc=11675,freq=1.0 = termFreq=1.0 ), product of: 4.0 = boost 7.729415
= idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5))
from: 652.0 = docFreq 1483961.0 = docCount 1.199825 = tfNorm, computed as
(freq * (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength /
avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 = parameter k1 0.75 =
parameter b 11.993984 = avgFieldLength 7.11 = fieldLength 22.929073 =
sum of: 6.5367765 = weight(descSearchNoSyn:7 in 11675) [SchemaSimilarity],
result of: 6.5367765 = score(doc=11675,freq=2.0 = termFreq=2.0 ), product
of: 2.0 = boost 2.2815151 = idf, computed as log(1 + (docCount - docFreq +
0.5) / (docFreq + 0.5)) from: 151552.0 = docFreq 1483926.0 = docCount
1.4325516 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b
* fieldLength / avgFieldLength)) from: 2.0 = termFreq=2.0 1.2 = parameter
k1 0.75 = parameter b 97.52203 = avgFieldLength 83.591835 = fieldLength
16.392298 = weight(descSearchNoSyn:armour in 

Re: Could not load collection from ZK:

2017-06-20 Thread Aman Deep Singh
This error is coming from the application that uses SolrJ to communicate
with Solr.
The full stacktrace is:

Request processing failed; nested exception is com.gdn.solr620.org.apache.
solr.common.SolrException: Could not load collection from ZK:
productCollection
org.springframework.web.util.NestedServletException: Request processing
failed; nested exception is com.gdn.solr620.org.apache.solr.common.
SolrException: Could not load collection from ZK: productCollection at org.
springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet
.java:973) ~[spring-webmvc-4.1.0.RELEASE.jar:4.1.0.RELEASE] at org.
springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:
863) ~[spring-webmvc-4.1.0.RELEASE.jar:4.1.0.RELEASE] at javax.servlet.http.
HttpServlet.service(HttpServlet.java:648) ~[servlet-api.jar:na] at org.
springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:
837) ~[spring-webmvc-4.1.0.RELEASE.jar:4.1.0.RELEASE] at javax.servlet.http.
HttpServlet.service(HttpServlet.java:729) ~[servlet-api.jar:na] at org.
apache.catalina.core.ApplicationFilterChain.internalDoFilter(
ApplicationFilterChain.java:230) [catalina.jar:8.5.4] at org.apache.catalina
.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165) [
catalina.jar:8.5.4] at org.apache.tomcat.websocket.server.WsFilter.doFilter(
WsFilter.java:52) ~[tomcat-websocket.jar:8.5.4] at org.apache.catalina.core.
ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192) [
catalina.jar:8.5.4] at org.apache.catalina.core.ApplicationFilterChain.
doFilter(ApplicationFilterChain.java:165) [catalina.jar:8.5.4] at org.
springframework.web.filter.CharacterEncodingFilter.doFilterInternal(
CharacterEncodingFilter.java:88) ~[spring-web-4.1.0.RELEASE.jar:4.1.0.
RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(
OncePerRequestFilter.java:107) [spring-web-4.1.0.RELEASE.jar:4.1.0.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(
ApplicationFilterChain.java:192) [catalina.jar:8.5.4] at org.apache.catalina
.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165) [
catalina.jar:8.5.4] at com.gdn.x.seoul.web.ui.filter.
ContentSecurityPolicyFilter.doFilter(ContentSecurityPolicyFilter.java:64) ~[
classes/:na] at org.apache.catalina.core.ApplicationFilterChain.
internalDoFilter(ApplicationFilterChain.java:192) [catalina.jar:8.5.4] at
org.apache.catalina.core.ApplicationFilterChain.doFilter(
ApplicationFilterChain.java:165) [catalina.jar:8.5.4] at com.gdn.x.seoul.web
.ui.filter.RedirectionFilter.doFilter(RedirectionFilter.java:61) [classes/:
na] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(
ApplicationFilterChain.java:192) [catalina.jar:8.5.4] at org.apache.catalina
.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165) [
catalina.jar:8.5.4] at com.gdn.x.seoul.web.util.AccessLogFilter.doFilter(
AccessLogFilter.java:83) [seoul-common-web-4.11.3.jar:na] at org.apache.
catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain
.java:192) [catalina.jar:8.5.4] at org.apache.catalina.core.
ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165) [catalina.
jar:8.5.4] at org.springframework.security.web.FilterChainProxy$
VirtualFilterChain.doFilter(FilterChainProxy.java:330) [spring-security-web-
3.2.5.RELEASE.jar:3.2.5.RELEASE] at org.springframework.security.web.access.
intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:
118) [spring-security-web-3.2.5.RELEASE.jar:3.2.5.RELEASE] at org.
springframework.security.web.access.intercept.FilterSecurityInterceptor.
doFilter(FilterSecurityInterceptor.java:84) [spring-security-web-3.2.5.
RELEASE.jar:3.2.5.RELEASE] at org.springframework.security.web.
FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342) [
spring-security-web-3.2.5.RELEASE.jar:3.2.5.RELEASE] at org.springframework.
security.web.access.ExceptionTranslationFilter.doFilter(
ExceptionTranslationFilter.java:113) [spring-security-web-3.2.5.RELEASE.jar:
3.2.5.RELEASE] at org.springframework.security.web.FilterChainProxy$
VirtualFilterChain.doFilter(FilterChainProxy.java:342) [spring-security-web-
3.2.5.RELEASE.jar:3.2.5.RELEASE] at org.springframework.security.web.
authentication.AnonymousAuthenticationFilter.doFilter(
AnonymousAuthenticationFilter.java:113) [spring-security-web-3.2.5.RELEASE.
jar:3.2.5.RELEASE] at org.springframework.security.web.FilterChainProxy$
VirtualFilterChain.doFilter(FilterChainProxy.java:342) [spring-security-web-
3.2.5.RELEASE.jar:3.2.5.RELEASE] at org.springframework.security.web.
servletapi.SecurityContextHolderAwareRequestFilter.doFilter(
SecurityContextHolderAwareRequestFilter.java:154) [spring-security-web-3.2.5
.RELEASE.jar:3.2.5.RELEASE]

Top command images are at
https://www.dropbox.com/sh/vxorykk8tmb6amb/AABYIcFuRyfSnlkS6I-Tr5HNa?dl=0


Could not load collection from ZK:

2017-06-20 Thread Aman Deep Singh
I'm facing an issue in Solr: sometimes ZooKeeper fails to load the Solr
collection, stating

org.apache.solr.common.SolrException: Could not load collection from ZK:

My current setup details is

   1. 5 nodes with 4 cores and 7.6 GB RAM each, each hosting a Solr node and
   a ZooKeeper instance
   2. No sharding is used
   3. Index size is around 2.5 GB
   4. Solr node RAM: 4 GB
   5. Average load: 10k RPM
   6. Indexing at most 1000 docs/minute (around 100 batches of 5 to 10 docs
   each)

The GC log analysis also looks normal:
Node 1-
http://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMTcvMDYvMjAvLS1zb2xyX2djLmxvZy4yLnppcC0tMTEtNTctMw==
Node 2-
http://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMTcvMDYvMjAvLS1zb2xyX2djLmxvZy4zLmN1cnJlbnQuemlwLS03LTQ2LTU2
Node 3-
http://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMTcvMDYvMjAvLS1zb2xyX2djLmxvZy4zLmN1cnJlbnQuemlwLS04LTIzLTM5
Node 4-
http://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMTcvMDYvMjAvLS1zb2xyX2djLmxvZy4yLnppcC0tOC0yMC01NQ==
Node 5-
http://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMTcvMDYvMjAvLS1zb2xyX2djLmxvZy41LmN1cnJlbnQuemlwLS04LTE5LTE0


Admin UI image https://pasteboard.co/1Nd7ArAf0.png

Any idea what is causing this problem and how to overcome it?

Thanks,
Aman Deep Singh


Re: Facet is not working while querying with group

2017-06-20 Thread Aman Deep Singh
The same problem has started to occur again, and I haven't changed any schema.
It only happens with the numeric data types (tint, tdouble), and only
in group queries.
If I search on a string field type it works fine.

Steps which I have followed:

   1. drop the old collection
   2. create the new Collection
   3. Do the full reindex
   4. Do atomic updates on some fields multiple times




On Mon, Jun 19, 2017 at 8:55 PM Erick Erickson <erickerick...@gmail.com>
wrote:

> bq: Is their any roadmap to avoid the remanent data issue,
>
> Not that I've ever heard of. Well, Uwe did show a process for adding
> docValues to an existing index here:
>
> http://lucene.472066.n3.nabble.com/Adding-Docvalues-to-a-Field-td4333503.html
> but you can see what kinds of deep-level Lucene knowledge are
> necessary.
>
> From my perspective, you'll _have_ to reindex multiple times as your
> app evolves. The product managers will want new capabilities for
> instance. The incoming data will change, whatever. You simply must be
> prepared for this.
>
> And you _really_ need to spend time up front getting the first cut
> right. I flat guarantee you'll re-index multiple times in the process.
> Unless and until you have a very thorough understanding of when it's
> really necessary, just completely blow away your index and re-index
> whenever you change you schema. Oh, there's one safe operation, adding
> a completely new fieldType or field. But _changing_ either one is
> chancy.
>
> Best,
> Erick
>
> On Mon, Jun 19, 2017 at 5:34 AM, Aman Deep Singh
> <amandeep.coo...@gmail.com> wrote:
> > I tried to recreate the collection and its working fine,
> > But if i try to change the any field level value this error again comes
> > Is their any roadmap to avoid the remanent data issue, since every time
> you
> > change the field definition you need to delete the data directory or
> > recreate the collection.
> >
> > On Fri, Jun 16, 2017 at 11:51 PM Erick Erickson <erickerick...@gmail.com
> >
> > wrote:
> >
> >> bq: But I only changed the docvalues not the multivalued
> >>
> >> It's the same issue. There is remnant metadata when you change whether
> >> a field uses docValues or not. The error message can be ambiguous
> >> depending on where the issue is encountered.
> >>
> >> Best,
> >> Erick
> >>
> >> On Fri, Jun 16, 2017 at 9:28 AM, Aman Deep Singh
> >> <amandeep.coo...@gmail.com> wrote:
> >> > But I only changed the docvalues not the multivalued ,
> >> > Anyway I will try to reproduce this by deleting the entire data
> directory
> >> >
> >> > On 16-Jun-2017 9:52 PM, "Erick Erickson" <erickerick...@gmail.com>
> >> wrote:
> >> >
> >> >> bq: deleted entire index from the solr by delete by query command
> >> >>
> >> >> That's not what I meant. Either
> >> >> a> create an entirely new collection starting with the modified
> schema
> >> >> or
> >> >> b> shut down all your Solr instances. Go into each replica/core and
> >> >> 'rm -rf data'. Restart Solr.
> >> >>
> >> >> That way you're absolutely sure everything's gone.
> >> >>
> >> >> Best,
> >> >> Erick
> >> >>
> >> >> On Fri, Jun 16, 2017 at 9:10 AM, Aman Deep Singh
> >> >> <amandeep.coo...@gmail.com> wrote:
> >> >> > Yes ,it was a new schema(new collection),and after that I change
> only
> >> >> > docvalues= true using schema api,but before changing the schema I
> have
> >> >> > deleted entire index from the solr by delete by query command using
> >> admin
> >> >> > gui.
> >> >> >
> >> >> > On 16-Jun-2017 9:28 PM, "Erick Erickson" <erickerick...@gmail.com>
> >> >> wrote:
> >> >> >
> >> >> > My guess is you changed the definition of the field from
> >> >> > multiValued="true" to "false" at some point. Even if you re-index
> all
> >> >> > docs, some of the metadata can still be present.
> >> >> >
> >> >> > Did yo completely blow away the data? By that I mean remove the
> entire
> >> >> > data dir (i.e. the parent of the "index" directory) (stand alone)
> or
> >> >> > create a new collection (SolrCloud)?
> >> >> >
> >> >> > Best,
> >>

Re: Give boost only if entire value is present in Query

2017-06-19 Thread Aman Deep Singh
Sorry Rick,
I didn't get it.
How would you boost on one field while querying on a different field?
And if you query on the keyword-tokenized copy field, it only matches when
the entire query matches.


On 20-Jun-2017 1:52 AM, "Rick Leir" <rl...@leirtech.com> wrote:

Aman,
Use a copyfield so you can have a second field that uses a different
analysis chain. In the new field you just created for the copyfield, use
the lowercase type, or create a type using KeywordTokenizer in the analysis
chain. Then match on the original field, and boost based on the new field.
Cheers
Rick

On June 19, 2017 11:01:49 AM EDT, Aman Deep Singh <amandeep.coo...@gmail.com>
wrote:
>Yes alessandro,
>I know  that their us some downsight of using sow =false but if don't
>use
>it then neither shingle nor bhram will work ,and these are required in
>my
>case/setup
>
>On 19-Jun-2017 8:18 PM, "alessandro.benedetti" <a.benede...@sease.io>
>wrote:
>
>Isn't this a case where you don't want the query parser to split by
>space
>before the analyser ?
>Take a look to the "sow" param for the edismax query parser.
>In your case you should be ok but Be aware that is not a silver bullet
>for
>everything and that other problems could arise in similar scenarios
>[1].
>
>The *sow* Parameter
>Split on whitespace: if set to false, whitespace-separated term
>sequences
>will be provided to text analysis in one shot, enabling proper function
>of
>analysis filters that operate over term sequences, e.g. multi-word
>synonyms
>and shingles. Defaults to true: text analysis is invoked separately for
>each
>individual whitespace-separated term.
>
>[1]
>http://lucene.472066.n3.nabble.com/The-downsides-of-
>not-splitting-on-whitespace-in-edismax-the-old-albino-
>elephant-prob-td4327440.html
>
>
>
>-
>---
>Alessandro Benedetti
>Search Consultant, R&D Software Engineer, Director
>Sease Ltd. - www.sease.io
>--
>View this message in context: http://lucene.472066.n3.
>nabble.com/Give-boost-only-if-entire-value-is-present-in-
>Query-tp4341714p4341735.html
>Sent from the Solr - User mailing list archive at Nabble.com.

--
Sorry for being brief. Alternate email is rickleir at yahoo dot com


Re: Give boost only if entire value is present in Query

2017-06-19 Thread Aman Deep Singh
Yes alessandro,
I know that there are some downsides to using sow=false, but if I don't use
it then neither shingles nor bhram will work, and these are required in my
case/setup.
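For reference, a minimal sketch of building an edismax request with whitespace splitting disabled, so shingle and multi-word-synonym filters see the whole term sequence (the field names and boosts are illustrative, taken loosely from this thread, not authoritative):

```python
from urllib.parse import urlencode

def edismax_params(user_query, qf, sow=False):
    """Query parameters for an edismax request. With sow=false,
    whitespace-separated terms are passed to analysis in one shot,
    which is what query-time shingles require."""
    return urlencode({
        "q": user_query,
        "defType": "edismax",
        "qf": qf,
        "sow": "true" if sow else "false",
    })

params = edismax_params("7 armour", "nameSearch^4 brandSearchQueryShingle^10")
```

Appending these parameters to the collection's /select endpoint yields the sow=false behavior discussed above.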

On 19-Jun-2017 8:18 PM, "alessandro.benedetti"  wrote:

Isn't this a case where you don't want the query parser to split by space
before the analyser ?
Take a look to the "sow" param for the edismax query parser.
In your case you should be ok but Be aware that is not a silver bullet for
everything and that other problems could arise in similar scenarios [1].

The *sow* Parameter
Split on whitespace: if set to false, whitespace-separated term sequences
will be provided to text analysis in one shot, enabling proper function of
analysis filters that operate over term sequences, e.g. multi-word synonyms
and shingles. Defaults to true: text analysis is invoked separately for each
individual whitespace-separated term.

[1]
http://lucene.472066.n3.nabble.com/The-downsides-of-
not-splitting-on-whitespace-in-edismax-the-old-albino-
elephant-prob-td4327440.html



-
---
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
Sease Ltd. - www.sease.io
--
View this message in context: http://lucene.472066.n3.
nabble.com/Give-boost-only-if-entire-value-is-present-in-
Query-tp4341714p4341735.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Give boost only if entire value is present in Query

2017-06-19 Thread Aman Deep Singh
Yes Susheel,
I know that matching more tokens gives more weight, but in my case, on an
entire match I want around x times the boost, while on a partial match I
want to give only a nominal or normal boost.
Now, a keyword tokenizer or a phrase query works if and only if
the user query exactly matches the indexed value, but in my case,
suppose the indexed term is ABC DEF:
I want to give more boost even if the user query is ABC DEF XYZ and so on, so
a normal keyword-tokenized field does not work.
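The containment condition described above, boosting only when the whole field value appears as a contiguous phrase in the query, can be sketched in plain Python. This mimics what a query-time ShingleFilter produces; it is an illustration of the matching condition, not Solr's actual code path:

```python
def shingles(tokens, max_size=5):
    """All contiguous token runs up to max_size, joined by spaces --
    roughly what a query-time ShingleFilter emits."""
    out = []
    for n in range(1, max_size + 1):
        for i in range(len(tokens) - n + 1):
            out.append(" ".join(tokens[i:i + n]))
    return out

def contains_whole_value(user_query, field_value):
    """True when the entire field value appears as a contiguous
    phrase in the query -- the condition the boost should key on."""
    return field_value.lower() in shingles(user_query.lower().split())

contains_whole_value("xyz abc def", "ABC DEF")   # -> True
contains_whole_value("abc xyz", "ABC DEF")       # -> False
```

So "ABC DEF XYZ" and "XYZ ABC DEF" qualify for the boost, while "ABC" or "ABC XYZ" do not, which is the behavior the shingled field is trying to achieve.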


On 19-Jun-2017 7:46 PM, "Susheel Kumar" <susheel2...@gmail.com> wrote:

In general, the documents which has more or all terms matched against query
terms will be boosted higher per lucene tf/idf scoring.

So for document having ABC DEF queries like ABC DEF XYZ  or XYZ ABC DEF
will find a match(assuming q.op=AND)  and will be boosted higher compare to
documents with ABC or only DEF.

If you do not want match when user query is like ABC or ABC XYZ then you
have to use phrase queries or fields with keyword tokenizer etc.  and in
which case above queries will not work.

Thnx


On Mon, Jun 19, 2017 at 8:27 AM, Aman Deep Singh <amandeep.coo...@gmail.com>
wrote:

> Hi,
> I have a problem ,I need to give the boost to a particular field if and
> only if the query contains entire field value (String contains like
> feature).
> e.g. if Field value is ABC DEF
> It should match if user query is like ABC DEF XYZ  or XYZ ABC DEF, But it
> should not match when user query is like ABC or ABC XYZ
> I'm using Solr-6.6.0
> also using edismax parser
>
> I tried creating the custom field like
>
>  enableGraphQueries="false">
> 
>   
>replace="all" replacement=""/>
>   
> 
> 
>   
>outputUnigrams="true" maxShingleSize="5" tokenSeparator=" "/>
>replace="all" replacement=""/>
>   
> 
>
>
> But it creating the synonyms query like (user query= 7 armour)
>
> +(((nameSearchNoSyn:7 nameSearchNoSyn:armour)~2)^9.0 | ((brandSearch:7
> brandSearch:armour)~2) |
>
>  ((nameSearch:7 nameSearch:armour)~2)^4.0 | (keywords:7 armour)^11.0 |
>
> ((descSearchNoSyn:7 descSearchNoSyn:armour)~2)^2.0 |
>
> *((Synonym(brandSearchQueryShingle:7 brandSearchQueryShingle:7armour)
> brandSearchQueryShingle:armour)~2)^10.0* |
>
> ((descriptionSearch:7 descriptionSearch:armour)~2) |
> (categoryKeywords:7 armour)^11.0) ((nameSearch:"7 armour"~5)^9.0 |
>
>  (brandSearch:"7 armour"~5)^8.0 | (descriptionSearch:"7
> armour"~5)^2.0) ((nameSearch:"7 armour")^9.0 |
>
> (descriptionSearch:"7 armour")^2.0)
>
>
> which again is not matching docs  ,
>
> Any idea how to boost the document if the user query contains exact
> value of that field
>
>
> my request handler is as
>
>
> 
> /browse
> solr.SearchHandler
> 
> explicit
> velocity
> true
> browse
> layout
> Solritas
> edismax
> 
> nameSearch^4 brandSearch *brandSearchQueryShingle*^10
> descriptionSearch categoryKeywords^11 keywords^11 nameSearchNoSyn^9
> descSearchNoSyn^2
> 
> 5
> 
> nameSearch^9 brandSearch^8 descriptionSearch^2 categoryKeywords^10
> keywords^10
> 
> nameSearch^9 descriptionSearch^2
> 0
> searchFields
> 100%
> *:*
> 10
> *,score
> 1
> after
> true
> false
> 5
> 2
> 5
> true
> true
> 5
> 10
> *false*
> 
> 
> spellcheck
> 
> 
>
>
> Thanks,
>
> Aman Deep Singh
>


Re: Facet is not working while querying with group

2017-06-19 Thread Aman Deep Singh
I tried to recreate the collection and it's working fine,
but if I try to change any field-level value this error comes again.
Is there any roadmap to avoid the remnant-data issue, since every time you
change a field definition you need to delete the data directory or
recreate the collection?
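Until then, the safest workflow after a field-definition change is to drop and recreate the collection before reindexing. A sketch of the Collections API calls, as URLs only; the host, collection, and configset names here are placeholders:

```python
from urllib.parse import urlencode

# Hypothetical SolrCloud admin endpoint; adjust for your cluster.
BASE = "http://localhost:8983/solr/admin/collections"

def recreate_collection_urls(name, config_set, num_shards=1):
    """The safe sequence when a field definition changes: DELETE the
    collection (removing the index directory and its Lucene metadata),
    then CREATE it fresh before reindexing."""
    delete = BASE + "?" + urlencode({"action": "DELETE", "name": name})
    create = BASE + "?" + urlencode({
        "action": "CREATE", "name": name,
        "collection.configName": config_set, "numShards": num_shards,
    })
    return [delete, create]

urls = recreate_collection_urls("productCollection", "productConf")
```

Issuing the two URLs in order guarantees no remnant per-field metadata survives, unlike a delete-by-query, which leaves the Lucene index directory in place.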

On Fri, Jun 16, 2017 at 11:51 PM Erick Erickson <erickerick...@gmail.com>
wrote:

> bq: But I only changed the docvalues not the multivalued
>
> It's the same issue. There is remnant metadata when you change whether
> a field uses docValues or not. The error message can be ambiguous
> depending on where the issue is encountered.
>
> Best,
> Erick
>
> On Fri, Jun 16, 2017 at 9:28 AM, Aman Deep Singh
> <amandeep.coo...@gmail.com> wrote:
> > But I only changed the docvalues not the multivalued ,
> > Anyway I will try to reproduce this by deleting the entire data directory
> >
> > On 16-Jun-2017 9:52 PM, "Erick Erickson" <erickerick...@gmail.com>
> wrote:
> >
> >> bq: deleted entire index from the solr by delete by query command
> >>
> >> That's not what I meant. Either
> >> a> create an entirely new collection starting with the modified schema
> >> or
> >> b> shut down all your Solr instances. Go into each replica/core and
> >> 'rm -rf data'. Restart Solr.
> >>
> >> That way you're absolutely sure everything's gone.
> >>
> >> Best,
> >> Erick
> >>
> >> On Fri, Jun 16, 2017 at 9:10 AM, Aman Deep Singh
> >> <amandeep.coo...@gmail.com> wrote:
> >> > Yes ,it was a new schema(new collection),and after that I change only
> >> > docvalues= true using schema api,but before changing the schema I have
> >> > deleted entire index from the solr by delete by query command using
> admin
> >> > gui.
> >> >
> >> > On 16-Jun-2017 9:28 PM, "Erick Erickson" <erickerick...@gmail.com>
> >> wrote:
> >> >
> >> > My guess is you changed the definition of the field from
> >> > multiValued="true" to "false" at some point. Even if you re-index all
> >> > docs, some of the metadata can still be present.
> >> >
> >> > Did yo completely blow away the data? By that I mean remove the entire
> >> > data dir (i.e. the parent of the "index" directory) (stand alone) or
> >> > create a new collection (SolrCloud)?
> >> >
> >> > Best,
> >> > Erick
> >> >
> >> > On Fri, Jun 16, 2017 at 1:39 AM, Aman Deep Singh
> >> > <amandeep.coo...@gmail.com> wrote:
> >> >> Hi,
> >> >> Facets are not working when i'm querying with group command
> >> >> request-
> >> >> facet.field=isBlibliShipping=true=true&
> >> > group.field=productCode=true=on=*:*=json
> >> >>
> >> >> Schema for facet field
> >> >>  multiValued=
> >> >> "false" indexed="true"stored="true"/>
> >> >>
> >> >> It was throwing error stating
> >> >> Type mismatch: isBlibliShipping was indexed with multiple values per
> >> >> document, use SORTED_SET instead
> >> >>
> >> >> The full stacktrace is attached as below
> >> >> 2017-06-16 08:20:47.367 INFO  (qtp1205044462-12) [c:productCollection
> >> >> s:shard1 r:core_node1 x:productCollection_shard1_replica1]
> >> >> o.a.s.c.S.Request [productCollection_shard1_replica1]  webapp=/solr
> >> >> path=/select
> >> >> params={q=*:*=isBlibliShipping=on&
> >> > group.facet=true=true=json=
> >> productCode&_=1497601224212&
> >> > group=true}
> >> >> hits=5346 status=500 QTime=29
> >> >> 2017-06-16 08:20:47.369 ERROR (qtp1205044462-12) [c:productCollection
> >> >> s:shard1 r:core_node1 x:productCollection_shard1_replica1]
> >> >> o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException:
> >> *Exception
> >> >> during facet.field: isBlibliShipping*
> >> >> at
> >> >> org.apache.solr.request.SimpleFacets.lambda$getFacetFieldCounts$0(
> >> > SimpleFacets.java:809)
> >> >> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >> >> at org.apache.solr.request.SimpleFacets$3.execute(
> >> > SimpleFacets.java:742)
> >> >> at
> >> >&g

Give boost only if entire value is present in Query

2017-06-19 Thread Aman Deep Singh
Hi,
I have a problem: I need to boost on a particular field if and
only if the query contains the entire field value (a "string
contains"-like feature).
E.g. if the field value is ABC DEF,
it should match if the user query is ABC DEF XYZ or XYZ ABC DEF, but it
should not match when the user query is ABC or ABC XYZ.
I'm using Solr 6.6.0
with the edismax parser.

I tried creating the custom field like


But it creates a synonym query like this (user query = 7 armour):

+(((nameSearchNoSyn:7 nameSearchNoSyn:armour)~2)^9.0 | ((brandSearch:7
brandSearch:armour)~2) |

 ((nameSearch:7 nameSearch:armour)~2)^4.0 | (keywords:7 armour)^11.0 |

((descSearchNoSyn:7 descSearchNoSyn:armour)~2)^2.0 |

*((Synonym(brandSearchQueryShingle:7 brandSearchQueryShingle:7armour)
brandSearchQueryShingle:armour)~2)^10.0* |

((descriptionSearch:7 descriptionSearch:armour)~2) |
(categoryKeywords:7 armour)^11.0) ((nameSearch:"7 armour"~5)^9.0 |

 (brandSearch:"7 armour"~5)^8.0 | (descriptionSearch:"7
armour"~5)^2.0) ((nameSearch:"7 armour")^9.0 |

(descriptionSearch:"7 armour")^2.0)


which again does not match the docs.

Any idea how to boost a document only if the user query contains the exact
value of that field?


my request handler is as



/browse
solr.SearchHandler

explicit
velocity
true
browse
layout
Solritas
edismax

nameSearch^4 brandSearch *brandSearchQueryShingle*^10
descriptionSearch categoryKeywords^11 keywords^11 nameSearchNoSyn^9
descSearchNoSyn^2

5

nameSearch^9 brandSearch^8 descriptionSearch^2 categoryKeywords^10 keywords^10

nameSearch^9 descriptionSearch^2
0
searchFields
100%
*:*
10
*,score
1
after
true
false
5
2
5
true
true
5
10
*false*


spellcheck




Thanks,

Aman Deep Singh
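For reference, the shingle-based boost described above is essentially testing whether the user query contains the full field value as a contiguous, in-order token run. A plain-Java sketch of that containment test (a hypothetical helper for clarity, not a Solr API) shows the intended matching semantics:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ContainsFieldValue {
    // True when the field-value tokens appear in the query as a contiguous,
    // in-order run: "XYZ ABC DEF" contains "ABC DEF", but "ABC XYZ" does not.
    static boolean containsValue(String query, String fieldValue) {
        List<String> q = Arrays.asList(query.toLowerCase().split("\\s+"));
        List<String> v = Arrays.asList(fieldValue.toLowerCase().split("\\s+"));
        return Collections.indexOfSubList(q, v) >= 0;
    }

    public static void main(String[] args) {
        System.out.println(containsValue("XYZ ABC DEF", "ABC DEF")); // true
        System.out.println(containsValue("ABC XYZ", "ABC DEF"));     // false
    }
}
```

The query-side shingle filter approximates this by emitting multi-word shingles from the query, so a keyword-tokenized field value can match one of them exactly.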


Re: Facet is not working while querying with group

2017-06-16 Thread Aman Deep Singh
But I only changed docValues, not multiValued.
Anyway, I will try to reproduce this by deleting the entire data directory.

On 16-Jun-2017 9:52 PM, "Erick Erickson" <erickerick...@gmail.com> wrote:

> bq: deleted entire index from the solr by delete by query command
>
> That's not what I meant. Either
> a> create an entirely new collection starting with the modified schema
> or
> b> shut down all your Solr instances. Go into each replica/core and
> 'rm -rf data'. Restart Solr.
>
> That way you're absolutely sure everything's gone.
>
> Best,
> Erick
>
> On Fri, Jun 16, 2017 at 9:10 AM, Aman Deep Singh
> <amandeep.coo...@gmail.com> wrote:
> > Yes ,it was a new schema(new collection),and after that I change only
> > docvalues= true using schema api,but before changing the schema I have
> > deleted entire index from the solr by delete by query command using admin
> > gui.
> >
> > On 16-Jun-2017 9:28 PM, "Erick Erickson" <erickerick...@gmail.com>
> wrote:
> >
> > My guess is you changed the definition of the field from
> > multiValued="true" to "false" at some point. Even if you re-index all
> > docs, some of the metadata can still be present.
> >
> > Did yo completely blow away the data? By that I mean remove the entire
> > data dir (i.e. the parent of the "index" directory) (stand alone) or
> > create a new collection (SolrCloud)?
> >
> > Best,
> > Erick
> >
> > On Fri, Jun 16, 2017 at 1:39 AM, Aman Deep Singh
> > <amandeep.coo...@gmail.com> wrote:
> >> Hi,
> >> Facets are not working when i'm querying with group command
> >> request-
> >> facet.field=isBlibliShipping=true=true&
> > group.field=productCode=true=on=*:*=json
> >>
> >> Schema for facet field
> >>  >> "false" indexed="true"stored="true"/>
> >>
> >> It was throwing error stating
> >> Type mismatch: isBlibliShipping was indexed with multiple values per
> >> document, use SORTED_SET instead
> >>
> >> The full stacktrace is attached as below
> >> 2017-06-16 08:20:47.367 INFO  (qtp1205044462-12) [c:productCollection
> >> s:shard1 r:core_node1 x:productCollection_shard1_replica1]
> >> o.a.s.c.S.Request [productCollection_shard1_replica1]  webapp=/solr
> >> path=/select
> >> params={q=*:*=isBlibliShipping=on&
> > group.facet=true=true=json=
> productCode&_=1497601224212&
> > group=true}
> >> hits=5346 status=500 QTime=29
> >> 2017-06-16 08:20:47.369 ERROR (qtp1205044462-12) [c:productCollection
> >> s:shard1 r:core_node1 x:productCollection_shard1_replica1]
> >> o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException:
> *Exception
> >> during facet.field: isBlibliShipping*
> >> at
> >> org.apache.solr.request.SimpleFacets.lambda$getFacetFieldCounts$0(
> > SimpleFacets.java:809)
> >> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >> at org.apache.solr.request.SimpleFacets$3.execute(
> > SimpleFacets.java:742)
> >> at
> >> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(
> > SimpleFacets.java:818)
> >> at
> >> org.apache.solr.handler.component.FacetComponent.
> > getFacetCounts(FacetComponent.java:330)
> >> at
> >> org.apache.solr.handler.component.FacetComponent.
> > process(FacetComponent.java:274)
> >> at
> >> org.apache.solr.handler.component.SearchHandler.handleRequestBody(
> > SearchHandler.java:296)
> >> at
> >> org.apache.solr.handler.RequestHandlerBase.handleRequest(
> > RequestHandlerBase.java:173)
> >> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
> >> at org.apache.solr.servlet.HttpSolrCall.execute(
> HttpSolrCall.java:723)
> >> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
> >> at
> >> org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> > SolrDispatchFilter.java:361)
> >> at
> >> org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> > SolrDispatchFilter.java:305)
> >> at
> >> org.eclipse.jetty.servlet.ServletHandler$CachedChain.
> > doFilter(ServletHandler.java:1691)
> >> at
> >> org.eclipse.jetty.servlet.ServletHandler.doHandle(
> ServletHandler.java:582)
> >> at
> >> org.eclipse.jetty.server.handler.ScopedHandler.handle(
> > Sco

Re: Facet is not working while querying with group

2017-06-16 Thread Aman Deep Singh
Yes, it was a new schema (new collection), and after that I changed only
docValues=true using the Schema API. Before changing the schema I had
deleted the entire index from Solr with a delete-by-query command from the
admin GUI.

On 16-Jun-2017 9:28 PM, "Erick Erickson" <erickerick...@gmail.com> wrote:

My guess is you changed the definition of the field from
multiValued="true" to "false" at some point. Even if you re-index all
docs, some of the metadata can still be present.

Did yo completely blow away the data? By that I mean remove the entire
data dir (i.e. the parent of the "index" directory) (stand alone) or
create a new collection (SolrCloud)?

Best,
Erick

On Fri, Jun 16, 2017 at 1:39 AM, Aman Deep Singh
<amandeep.coo...@gmail.com> wrote:
> Hi,
> Facets are not working when i'm querying with group command
> request-
> facet.field=isBlibliShipping=true=true&
group.field=productCode=true=on=*:*=json
>
> Schema for facet field
>  "false" indexed="true"stored="true"/>
>
> It was throwing error stating
> Type mismatch: isBlibliShipping was indexed with multiple values per
> document, use SORTED_SET instead
>
> The full stacktrace is attached as below
> 2017-06-16 08:20:47.367 INFO  (qtp1205044462-12) [c:productCollection
> s:shard1 r:core_node1 x:productCollection_shard1_replica1]
> o.a.s.c.S.Request [productCollection_shard1_replica1]  webapp=/solr
> path=/select
> params={q=*:*=isBlibliShipping=on&
group.facet=true=true=json=productCode&_=1497601224212&
group=true}
> hits=5346 status=500 QTime=29
> 2017-06-16 08:20:47.369 ERROR (qtp1205044462-12) [c:productCollection
> s:shard1 r:core_node1 x:productCollection_shard1_replica1]
> o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: *Exception
> during facet.field: isBlibliShipping*
> at
> org.apache.solr.request.SimpleFacets.lambda$getFacetFieldCounts$0(
SimpleFacets.java:809)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at org.apache.solr.request.SimpleFacets$3.execute(
SimpleFacets.java:742)
> at
> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(
SimpleFacets.java:818)
> at
> org.apache.solr.handler.component.FacetComponent.
getFacetCounts(FacetComponent.java:330)
> at
> org.apache.solr.handler.component.FacetComponent.
process(FacetComponent.java:274)
> at
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(
SearchHandler.java:296)
> at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(
RequestHandlerBase.java:173)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(
SolrDispatchFilter.java:361)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(
SolrDispatchFilter.java:305)
> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.
doFilter(ServletHandler.java:1691)
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(
ScopedHandler.java:143)
> at
> org.eclipse.jetty.security.SecurityHandler.handle(
SecurityHandler.java:548)
> at
> org.eclipse.jetty.server.session.SessionHandler.
doHandle(SessionHandler.java:226)
> at
> org.eclipse.jetty.server.handler.ContextHandler.
doHandle(ContextHandler.java:1180)
> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at
> org.eclipse.jetty.server.session.SessionHandler.
doScope(SessionHandler.java:185)
> at
> org.eclipse.jetty.server.handler.ContextHandler.
doScope(ContextHandler.java:1112)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(
ScopedHandler.java:141)
> at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(
ContextHandlerCollection.java:213)
> at
> org.eclipse.jetty.server.handler.HandlerCollection.
handle(HandlerCollection.java:119)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(
HandlerWrapper.java:134)
> at
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(
RewriteHandler.java:335)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(
HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:534)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
> at
> org.eclipse.jetty.server.HttpConnection.onFillable(
HttpConnection.java:251)
> at
> org.eclipse.jetty.io.AbstractConnection$Rea

Re: Possible bug in Solrj-6.6.0

2017-06-16 Thread Aman Deep Singh
Thanks Joel,
it is working now.
One quick question: since you say the SolrClientCache can be used multiple
times, can I create a single instance of SolrClientCache and use it
again and again? We are using one single bean for the client object.


On 16-Jun-2017 6:28 PM, "Joel Bernstein" <joels...@gmail.com> wrote:

The issue is that in 6.6 CloudSolrStream is expecting a StreamContext to be
set. So you'll need to update your code to do this. This was part of
changes made to make streaming work in non-SolrCloud environments.

You also need to create a SolrClientCache which caches the SolrClients.

Example:

SolrClientCache cache = new SolrClientCache();

StreamContext streamContext = new StreamContext();

streamContext.setSolrClientCache(cache);

CloudSolrStream stream = new CloudSolrStream(...);
stream.setStreamContext(streamContext);
stream.open();


The SolrClientCache can be shared by multiple requests and should be closed
when the application exits.


Joel Bernstein
http://joelsolr.blogspot.com/

On Fri, Jun 16, 2017 at 2:17 AM, Aman Deep Singh <amandeep.coo...@gmail.com>
wrote:

> Hi,
> I think their is a possible bug in Solrj version 6.6.0 ,as streaming is
not
> working
> as i have a piece of code
>
> public Set getAllIds(String requestId, String field) {
> LOG.info("Now Trying to fetch all the ids from SOLR for request Id
> {}", requestId);
> Map props = new HashMap();
> props.put("q", field + ":*");
> props.put("qt", "/export");
> props.put("sort", field + " asc");
> props.put("fl", field);
> Set idSet = new HashSet<>();
> try (CloudSolrStream cloudSolrStream = new
> CloudSolrStream(cloudSolrClient.getZkHost(),
> cloudSolrClient.getDefaultCollection(), new
> MapSolrParams(props))) {
> cloudSolrStream.open();
> while (true) {
> Tuple tuple = cloudSolrStream.read();
> if (tuple.EOF) {
> break;
> }
> idSet.add(tuple.getString(field));
> }
> return idSet;
> } catch (IOException ex) {
> LOG.error("Error while fetching the ids from SOLR for request
> Id {} ", requestId, ex);
> }
> return Collections.emptySet();
> }
>
>
> This is working in the Solrj 6.5.1 but now it start throwing Error
> after upgrading to solrj-6.6.0
>
> java.io.IOException: java.lang.NullPointerException
> at org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> constructStreams(CloudSolrStream.java:408)
> ~[solr-solrj-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241
> - ishan - 2017-05-30 07:32:54]
> at org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> open(CloudSolrStream.java:299)
> ~[solr-solrj-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241
> - ishan - 2017-05-30 07:32:54]
>
>
> Thanks,
>
> Aman Deep Singh
>


Facet is not working while querying with group

2017-06-16 Thread Aman Deep Singh
:319)
at org.apache.lucene.index.DocValues.getSorted(DocValues.java:262)
at
org.apache.lucene.search.grouping.term.TermGroupFacetCollector$SV.doSetNextReader(TermGroupFacetCollector.java:129)
at
org.apache.lucene.search.SimpleCollector.getLeafCollector(SimpleCollector.java:33)
at
org.apache.solr.request.SimpleFacets$2.getLeafCollector(SimpleFacets.java:730)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:659)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:472)
at
org.apache.solr.request.SimpleFacets.getGroupedCounts(SimpleFacets.java:692)
at
org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:476)
at
org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:405)
at
org.apache.solr.request.SimpleFacets.lambda$getFacetFieldCounts$0(SimpleFacets.java:803)
... 39 more

However, if I query without grouping it works fine.
Any idea how to fix this?

Thanks,
Aman Deep Singh


Possible bug in Solrj-6.6.0

2017-06-16 Thread Aman Deep Singh
Hi,
I think there is a possible bug in SolrJ version 6.6.0, as streaming is not
working. I have this piece of code:

public Set<String> getAllIds(String requestId, String field) {
    LOG.info("Now Trying to fetch all the ids from SOLR for request Id {}", requestId);
    Map<String, String> props = new HashMap<>();
    props.put("q", field + ":*");
    props.put("qt", "/export");
    props.put("sort", field + " asc");
    props.put("fl", field);
    Set<String> idSet = new HashSet<>();
    try (CloudSolrStream cloudSolrStream = new
            CloudSolrStream(cloudSolrClient.getZkHost(),
                cloudSolrClient.getDefaultCollection(), new MapSolrParams(props))) {
        cloudSolrStream.open();
        while (true) {
            Tuple tuple = cloudSolrStream.read();
            if (tuple.EOF) {
                break;
            }
            idSet.add(tuple.getString(field));
        }
        return idSet;
    } catch (IOException ex) {
        LOG.error("Error while fetching the ids from SOLR for request Id {} ", requestId, ex);
    }
    return Collections.emptySet();
}


This works with SolrJ 6.5.1 but starts throwing an error
after upgrading to solrj-6.6.0:

java.io.IOException: java.lang.NullPointerException
at 
org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:408)
~[solr-solrj-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241
- ishan - 2017-05-30 07:32:54]
at 
org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:299)
~[solr-solrj-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241
- ishan - 2017-05-30 07:32:54]


Thanks,

Aman Deep Singh


Re: Phrase Query only forward direction

2017-06-12 Thread Aman Deep Singh
Thanks Eric

On Mon, Jun 12, 2017 at 10:28 PM Erick Erickson <erickerick...@gmail.com>
wrote:

> Complex phrase also has an inorder flag that I think you're looking for
> here.
>
> Best,
> Erick
>
> On Mon, Jun 12, 2017 at 7:16 AM, Erik Hatcher <erik.hatc...@gmail.com>
> wrote:
> > Understood.   If you need ordered, “sloppy” (some distance) phrases, you
> could OR in a {!complexphrase} query.
> >
> >
> https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-ComplexPhraseQueryParser
> <
> https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-ComplexPhraseQueryParser
> >
> >
> > Something like:
> >
> > q=({!edismax … ps=0 v=$qq}) OR {!complexphrase df=nameSearch v=$qq}
> >
> > where =12345 masitha
> >
> > Erik
> >
> >
> >> On Jun 12, 2017, at 9:57 AM, Aman Deep Singh <amandeep.coo...@gmail.com>
> wrote:
> >>
> >> Yes Erik I can use ps=0 but, my problem is that I want phrase which have
> >> same sequence and they can be present with in some distance
> >> E.g.
> >> If I have document masitha xyz 12345
> >> I want that to be boosted since the sequence is in order .That's why I
> have
> >> use ps=5
> >> Thanks,
> >> Aman Deep Singh
> >>
> >> On 12-Jun-2017 5:44 PM, "Erik Hatcher" <erik.hatc...@gmail.com> wrote:
> >>
> >> Using ps=5 causes the phrase matching to be unordered matching.   You’ll
> >> have to set ps=0, if using edismax, to get exact order phrase matches.
> >>
> >>Erik
> >>
> >>
> >>> On Jun 12, 2017, at 1:09 AM, Aman Deep Singh <
> amandeep.coo...@gmail.com>
> >> wrote:
> >>>
> >>> Hi,
> >>> I'm using a phrase query ,but it was applying the phrase boost to the
> >> query
> >>> where terms are in reverse order also ,which i don't want.Is their any
> way
> >>> to avoid the phrase boost for reverse order and apply boost only in
> case
> >> of
> >>> terms are in same sequence
> >>>
> >>> Solr version 6.5.1
> >>>
> >>> e.g.
> >>> http://localhost:8983/solr/l4_collection/select?debugQuery=o
> >> n=edismax=score,nameSearch=on=100%25&
> >> pf=nameSearch=12345%20masitha=nameSearch=xml=5
> >>>
> >>>
> >>> while my document has value
> >>>
> >>> in the debug query it is applying boost as
> >>> 23.28365 = sum of:
> >>> 15.112219 = sum of:
> >>> 9.669338 = weight(nameSearch:12345 in 0) [SchemaSimilarity], result of:
> >>> 9.669338 = score(doc=0,freq=1.0 = termFreq=1.0
> >>> ), product of:
> >>> 7.6397386 = idf, computed as log(1 + (docCount - docFreq + 0.5) /
> (docFreq
> >>> + 0.5)) from:
> >>> 2.0 = docFreq
> >>> 5197.0 = docCount
> >>> 1.2656635 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 -
> b +
> >> b
> >>> * fieldLength / avgFieldLength)) from:
> >>> 1.0 = termFreq=1.0
> >>> 1.2 = parameter k1
> >>> 0.75 = parameter b
> >>> 5.2576485 = avgFieldLength
> >>> 2.56 = fieldLength
> >>> 5.44288 = weight(nameSearch:masitha in 0) [SchemaSimilarity], result
> of:
> >>> 5.44288 = score(doc=0,freq=1.0 = termFreq=1.0
> >>> ), product of:
> >>> 4.3004165 = idf, computed as log(1 + (docCount - docFreq + 0.5) /
> (docFreq
> >>> + 0.5)) from:
> >>> 70.0 = docFreq
> >>> 5197.0 = docCount
> >>> 1.2656635 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 -
> b +
> >> b
> >>> * fieldLength / avgFieldLength)) from:
> >>> 1.0 = termFreq=1.0
> >>> 1.2 = parameter k1
> >>> 0.75 = parameter b
> >>> 5.2576485 = avgFieldLength
> >>> 2.56 = fieldLength
> >>> 8.171431 = weight(*nameSearch:"12345 masitha"~5 *in 0)
> [SchemaSimilarity],
> >>> result of:
> >>> 8.171431 = score(doc=0,freq=0.3334 = phraseFreq=0.3334
> >>> ), product of:
> >>> 11.940155 = idf(), sum of:
> >>> 7.6397386 = idf, computed as log(1 + (docCount - docFreq + 0.5) /
> (docFreq
> >>> + 0.5)) from:
> >>> 2.0 = docFreq
> >>> 5197.0 = docCount
> >>> 4.3004165 = idf, computed as log(1 + (docCount - docFreq + 0.5) /
> (docFreq
> >>> + 0.5)) from:
> >>> 70.0 = docFreq
> >>> 5197.0 = docCount
> >>> 0.6843655 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 -
> b +
> >> b
> >>> * fieldLength / avgFieldLength)) from:
> >>> 0.3334 = phraseFreq=0.3334
> >>> 1.2 = parameter k1
> >>> 0.75 = parameter b
> >>> 5.2576485 = avgFieldLength
> >>> 2.56 = fieldLength
> >>>
> >>> Thanks,
> >>> Aman Deep Singh
> >
>
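Erik's suggestion above (an edismax clause with ps=0 OR-ed with a {!complexphrase} clause) can be assembled client-side. A minimal JDK-only sketch that builds the request parameters (the nameSearch field name comes from the thread; everything else here is an assumption, and the host/handler are omitted):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ComplexPhraseRequest {
    // Builds the q/qq parameter pair: an edismax clause with ps=0 OR-ed with
    // a {!complexphrase} clause, which with inOrder=true only matches phrases
    // whose terms appear in the given order.
    static String build(String userQuery) {
        String q = "({!edismax qf=nameSearch ps=0 v=$qq}) "
                 + "OR {!complexphrase df=nameSearch inOrder=true v=$qq}";
        return "q=" + URLEncoder.encode(q, StandardCharsets.UTF_8)
             + "&qq=" + URLEncoder.encode(userQuery, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(build("12345 masitha"));
    }
}
```

Appending the returned string to a /select request URL should exercise the pattern Erik described; inOrder is the complexphrase flag Erick refers to.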


Re: Phrase Query only forward direction

2017-06-12 Thread Aman Deep Singh
Yes Erik, I can use ps=0, but my problem is that I want phrases which have the
same sequence even when the terms are within some distance of each other.
E.g.
if I have the document masitha xyz 12345,
I want that to be boosted since the sequence is in order. That's why I have
used ps=5.
Thanks,
Aman Deep Singh

On 12-Jun-2017 5:44 PM, "Erik Hatcher" <erik.hatc...@gmail.com> wrote:

Using ps=5 causes the phrase matching to be unordered matching.   You’ll
have to set ps=0, if using edismax, to get exact order phrase matches.

Erik


> On Jun 12, 2017, at 1:09 AM, Aman Deep Singh <amandeep.coo...@gmail.com>
wrote:
>
> Hi,
> I'm using a phrase query ,but it was applying the phrase boost to the
query
> where terms are in reverse order also ,which i don't want.Is their any way
> to avoid the phrase boost for reverse order and apply boost only in case
of
> terms are in same sequence
>
> Solr version 6.5.1
>
> e.g.
> http://localhost:8983/solr/l4_collection/select?debugQuery=o
n=edismax=score,nameSearch=on=100%25&
pf=nameSearch=12345%20masitha=nameSearch=xml=5
>
>
> while my document has value
>
> in the debug query it is applying boost as
> 23.28365 = sum of:
> 15.112219 = sum of:
> 9.669338 = weight(nameSearch:12345 in 0) [SchemaSimilarity], result of:
> 9.669338 = score(doc=0,freq=1.0 = termFreq=1.0
> ), product of:
> 7.6397386 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq
> + 0.5)) from:
> 2.0 = docFreq
> 5197.0 = docCount
> 1.2656635 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b +
b
> * fieldLength / avgFieldLength)) from:
> 1.0 = termFreq=1.0
> 1.2 = parameter k1
> 0.75 = parameter b
> 5.2576485 = avgFieldLength
> 2.56 = fieldLength
> 5.44288 = weight(nameSearch:masitha in 0) [SchemaSimilarity], result of:
> 5.44288 = score(doc=0,freq=1.0 = termFreq=1.0
> ), product of:
> 4.3004165 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq
> + 0.5)) from:
> 70.0 = docFreq
> 5197.0 = docCount
> 1.2656635 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b +
b
> * fieldLength / avgFieldLength)) from:
> 1.0 = termFreq=1.0
> 1.2 = parameter k1
> 0.75 = parameter b
> 5.2576485 = avgFieldLength
> 2.56 = fieldLength
> 8.171431 = weight(*nameSearch:"12345 masitha"~5 *in 0) [SchemaSimilarity],
> result of:
> 8.171431 = score(doc=0,freq=0.3334 = phraseFreq=0.3334
> ), product of:
> 11.940155 = idf(), sum of:
> 7.6397386 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq
> + 0.5)) from:
> 2.0 = docFreq
> 5197.0 = docCount
> 4.3004165 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq
> + 0.5)) from:
> 70.0 = docFreq
> 5197.0 = docCount
> 0.6843655 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b +
b
> * fieldLength / avgFieldLength)) from:
> 0.3334 = phraseFreq=0.3334
> 1.2 = parameter k1
> 0.75 = parameter b
> 5.2576485 = avgFieldLength
> 2.56 = fieldLength
>
> Thanks,
> Aman Deep Singh


Phrase Query only forward direction

2017-06-11 Thread Aman Deep Singh
Hi,
I'm using a phrase query, but it applies the phrase boost even when the
terms are in reverse order, which I don't want. Is there any way
to avoid the phrase boost for reverse order and apply the boost only when the
terms are in the same sequence?

Solr version 6.5.1

e.g.
http://localhost:8983/solr/l4_collection/select?debugQuery=on=edismax=score,nameSearch=on=100%25=nameSearch=12345%20masitha=nameSearch=xml=5


while my document has the value masitha 12345.

In the debug query it applies the boost as:
23.28365 = sum of:
15.112219 = sum of:
9.669338 = weight(nameSearch:12345 in 0) [SchemaSimilarity], result of:
9.669338 = score(doc=0,freq=1.0 = termFreq=1.0
), product of:
7.6397386 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq
+ 0.5)) from:
2.0 = docFreq
5197.0 = docCount
1.2656635 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b
* fieldLength / avgFieldLength)) from:
1.0 = termFreq=1.0
1.2 = parameter k1
0.75 = parameter b
5.2576485 = avgFieldLength
2.56 = fieldLength
5.44288 = weight(nameSearch:masitha in 0) [SchemaSimilarity], result of:
5.44288 = score(doc=0,freq=1.0 = termFreq=1.0
), product of:
4.3004165 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq
+ 0.5)) from:
70.0 = docFreq
5197.0 = docCount
1.2656635 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b
* fieldLength / avgFieldLength)) from:
1.0 = termFreq=1.0
1.2 = parameter k1
0.75 = parameter b
5.2576485 = avgFieldLength
2.56 = fieldLength
8.171431 = weight(*nameSearch:"12345 masitha"~5 *in 0) [SchemaSimilarity],
result of:
8.171431 = score(doc=0,freq=0.3334 = phraseFreq=0.3334
), product of:
11.940155 = idf(), sum of:
7.6397386 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq
+ 0.5)) from:
2.0 = docFreq
5197.0 = docCount
4.3004165 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq
+ 0.5)) from:
70.0 = docFreq
5197.0 = docCount
0.6843655 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b
* fieldLength / avgFieldLength)) from:
0.3334 = phraseFreq=0.3334
1.2 = parameter k1
0.75 = parameter b
5.2576485 = avgFieldLength
2.56 = fieldLength

Thanks,
Aman Deep Singh


Re: Solr Atomic Document update Conditional

2017-05-18 Thread Aman Deep Singh
Hi Shawn,
Solr optimistic concurrency works fine for one field only,
but in my case two or more fields can be updated at the same time,
and a field must not be updated if its corresponding timestamp is greater
than the request time.


On 18-May-2017 6:15 PM, "Shawn Heisey" <apa...@elyograg.org> wrote:

On 5/18/2017 2:05 AM, Aman Deep Singh wrote:
> Is their any way to do the SOLR atomic update based on some condition
> Suppose in my SOLR schema i have some fields
>
>1. field1
>2. field2
>3. field1_timestamp
>4. field2_timestamp
>
> Now i have to update value of field1 only if field1_timestamp is less then
> the provided timestamp
> I found a SOLR thread for same question
> http://lucene.472066.n3.nabble.com/Conditional-atomic-
update-td4277224.html
> but it doesn't contains any solution

I have never heard of anything like this.

If you want to write your own custom update processor that looks for the
condition and removes the appropriate atomic update command(s) from the
request, or even completely aborts the request, you can certainly do
so.  That seems to be what was suggested by the author of the thread
that you referenced.

What you have described sounds a little bit like optimistic concurrency,
something Solr supports out of the box.  The feature isn't identical to
what you described, but maybe you can adapt it.  Automatically assigned
_version_ values are derived from a java timestamp.

http://yonik.com/solr/optimistic-concurrency/

Thanks,
Shawn


Solr Atomic Document update Conditional

2017-05-18 Thread Aman Deep Singh
Hi,
Is there any way to do a Solr atomic update based on some condition?
Suppose in my Solr schema I have these fields:

   1. field1
   2. field2
   3. field1_timestamp
   4. field2_timestamp

Now I have to update the value of field1 only if field1_timestamp is less than
the provided timestamp.
I found a Solr thread with the same question
http://lucene.472066.n3.nabble.com/Conditional-atomic-update-td4277224.html
but it doesn't contain any solution.


Thanks,
Aman Deep Singh
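A custom update processor, the approach suggested in the referenced thread, would implement exactly this per-field timestamp gate. A plain-Java sketch of the decision logic alone (the UpdateRequestProcessor wiring is omitted, and the helper below is hypothetical, using the field names from the question):

```java
import java.util.HashMap;
import java.util.Map;

public class ConditionalUpdateGate {
    // Keeps an update to fieldN only when the stored fieldN_timestamp is
    // absent or older than the request timestamp; stale updates are dropped.
    static Map<String, Object> filterUpdates(Map<String, Object> requested,
                                             Map<String, Long> storedTimestamps,
                                             long requestTimestamp) {
        Map<String, Object> accepted = new HashMap<>();
        for (Map.Entry<String, Object> e : requested.entrySet()) {
            Long stored = storedTimestamps.get(e.getKey() + "_timestamp");
            if (stored == null || stored < requestTimestamp) {
                accepted.put(e.getKey(), e.getValue());
            }
        }
        return accepted;
    }
}
```

In a real processor this check would run against the stored document fetched with RealTimeGet-style access before the atomic update is applied.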


Re: Automatic conversion to Range Query

2017-05-11 Thread Aman Deep Singh
Yes Hoss,
it only converts to a range query when there are exactly two tokens. BTW,
thanks for raising the issue.

On 11-May-2017 5:38 AM, "Chris Hostetter"  wrote:

> : I'm facing a issue when i'm querying the Solr
> : my query is "xiomi Mi 5 -white [64GB/ 3GB]"
> ...
> : +(((Synonym(nameSearch:xiaomi nameSearch:xiomi)) (nameSearch:mi)
> : (nameSearch:5) -(Synonym(nameSearch:putih
> : nameSearch:white))*(nameSearch:[64gb/ TO 3gb])*)~4)
> ...
> : Now due to automatic conversion of query  to Range query i'm not able
> : to find the result
> ...
> : Solr Version-6.4.2
> : Parser- edismax
>
> That's really suprising to me -- but i can reproduce what you're
> describing ... not sure if the "implicit" assumption thta you wanted a
> range query is intentional or a bug -- but it's certainly weird so i've
> file a jira: https://issues.apache.org/jira/browse/LUCENE-7821
>
> FWIW: It's not actaully anything special about edismax that's causing that
> to be parsed as a range query -- it seems that the underlying grammer
> (used by both the lucene & edismax solr QParsers) treats the "TO" as
> optional in a range query, so the remaining 2 "terms" inside the square
> brackets are considered the low/high ... if you'd had more then 2 terms
> (ie: "foo [64gb/ 3gb bar]") it wouldn't have parsed as a range query --
> which means edismax would have fallen back to rerying to parse it with
> automatic escaping.
>
>
>
> -Hoss
> http://www.lucidworks.com/
>


Re: Automatic conversion to Range Query

2017-05-07 Thread Aman Deep Singh
Yes Rick,
users are actually typing this type of query; this was a random user query
picked from the access logs.


On 07-May-2017 7:29 PM, "Rick Leir" <rl...@leirtech.com> wrote:

Hi Aman,
Is the user actually entering that query? It seems unlikely. Perhaps you
have a form selector for various Apple products. Could you not have an
enumerated type for the products, and simplify everything? I must be
missing something here. Cheers -- Rick

On May 6, 2017 8:38:14 AM EDT, Shawn Heisey <apa...@elyograg.org> wrote:
>On 5/5/2017 12:42 PM, Aman Deep Singh wrote:
>> Hi Erick, I don't want to do the range query , That is why I'm using
>> the pattern replace filter to remove all the non alphanumeric to
>space
>> so that this type of situation don't arrive,Since end user can query
>> anything, also in the query I haven't mention any range related
>> keyword (TO). If my query is like [64GB/3GB] it works fine and
>doesn't
>> convert to range query.
>
>I hope I'm headed in the right direction here.
>
>Square brackets are special characters to the query parser -- they are
>typically used to specify a range query.  It's a little odd that Solr
>would add the "TO" for you like it seems to be doing, but not REALLY
>surprising.  This would be happening *before* the parts of the query
>make it to your analysis chain where you have the pattern replace
>filter.
>
>If you want to NOT have special characters perform their special
>function, but actually become part of the query, you'll need to escape
>them with a backslash.  Escaping all the special characters in your
>query yields this query:
>
>xiomi Mi 5 \-white \[64GB\/ 3GB\]
>
>It's difficult to decide whether the dash character before "white" was
>intended as a "NOT" operator or to be part of the query.  You might not
>want to escape that one.
>
>Thanks,
>Shawn

--
Sorry for being brief. Alternate email is rickleir at yahoo dot com


Re: Automatic conversion to Range Query

2017-05-06 Thread Aman Deep Singh
Hi Erik,
We can't use dismax, as we are using other functionality of the edismax
parser.

On 07-May-2017 12:13 AM, "Erik Hatcher" <erik.hatc...@gmail.com> wrote:

What about dismax instead of edismax?It might do the righter thing here
without escaping.

> On May 6, 2017, at 12:57, Shawn Heisey <apa...@elyograg.org> wrote:
>
>> On 5/6/2017 7:09 AM, Aman Deep Singh wrote:
>> After escaping the square bracket the query is working fine, Is their
>> any way in the parser to avoid the automatic conversion if not proper
>> query will be passed like in my case even though I haven't passed
>> proper range query (with keyword TO).
>
> If you use characters special to the query parser but don't want them
> acted on by the query parser, then they need to be escaped.  That's just
> how things work, and it's not going to change.
>
> Thanks,
> Shawn
>


Re: Automatic conversion to Range Query

2017-05-06 Thread Aman Deep Singh
Thanks Shawn,
After escaping the square brackets, the query is working fine.
Is there any way in the parser to avoid the automatic conversion when an
improper range query is passed, as in my case, even though I haven't used
the range keyword (TO)?



On 06-May-2017 6:08 PM, "Shawn Heisey" <apa...@elyograg.org> wrote:

On 5/5/2017 12:42 PM, Aman Deep Singh wrote:
> Hi Erick, I don't want to do the range query , That is why I'm using
> the pattern replace filter to remove all the non alphanumeric to space
> so that this type of situation don't arrive,Since end user can query
> anything, also in the query I haven't mention any range related
> keyword (TO). If my query is like [64GB/3GB] it works fine and doesn't
> convert to range query.

I hope I'm headed in the right direction here.

Square brackets are special characters to the query parser -- they are
typically used to specify a range query.  It's a little odd that Solr
would add the "TO" for you like it seems to be doing, but not REALLY
surprising.  This would be happening *before* the parts of the query
make it to your analysis chain where you have the pattern replace filter.

If you want to NOT have special characters perform their special
function, but actually become part of the query, you'll need to escape
them with a backslash.  Escaping all the special characters in your
query yields this query:

xiomi Mi 5 \-white \[64GB\/ 3GB\]

It's difficult to decide whether the dash character before "white" was
intended as a "NOT" operator or to be part of the query.  You might not
want to escape that one.

Thanks,
Shawn
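Shawn's escaping advice is easy to automate on the client side. Below is a minimal sketch in Python of backslash-escaping the classic query parser's metacharacters before sending a query; SolrJ ships `ClientUtils.escapeQueryChars` for the same purpose in Java. Whether to escape the leading `-` (NOT operator) is a judgment call, as noted above.

```python
# Metacharacters of the Lucene/Solr classic query parser. Escaping them
# makes the parser treat them as literal text instead of operators
# (ranges, boolean logic, grouping, wildcards, etc.).
SPECIAL = set('\\+-!():^[]"{}~*?|&;/')

def escape_query_chars(raw: str) -> str:
    """Backslash-escape query-parser metacharacters in a user query."""
    return ''.join('\\' + ch if ch in SPECIAL else ch for ch in raw)

print(escape_query_chars("xiomi Mi 5 -white [64GB/ 3GB]"))
# -> xiomi Mi 5 \-white \[64GB\/ 3GB\]
```

If the `-` before "white" is meant as a NOT operator, drop `-` from `SPECIAL` or strip operator prefixes before escaping.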


Re: Automatic conversion to Range Query

2017-05-05 Thread Aman Deep Singh
I'm using a custom request handler with defType as edismax.
My query is -
xiomi Mi 5 -white [64GB/ 3GB]


On 06-May-2017 12:48 AM, "Erick Erickson" <erickerick...@gmail.com> wrote:

OK, what _request handler_ are you using? what is the original query?

On Fri, May 5, 2017 at 11:42 AM, Aman Deep Singh
<amandeep.coo...@gmail.com> wrote:
> Hi Erick,
> I don't want to do the range query ,
> That is why I'm using the pattern replace filter to remove all the non
> alphanumeric to space so that this type of situation don't arrive,Since
end
> user can query anything, also in the query I haven't mention any range
> related keyword (TO).
> If my query is like [64GB/3GB] it works fine and doesn't convert to range
> query.
>
> Thanks,
> Aman deep singh
>
> On 06-May-2017 12:04 AM, "Erick Erickson" <erickerick...@gmail.com> wrote:
>
> I'm going to go a little sideways and claim this is an "XY" problem,
> the range bits are a side-issue. The problem is that you're trying to
> do ranges on textual data that are really numbers. So even if there's
> a way to fix the range issue you're talking about, it still won't do
> what you expect.
>
> Consider
> [300 TO 4] is perfectly valid for _character_ based data. At least
> it'll match values like 31, 32, 39. That's not what a numeric sort
> would expect though. If you really want to search on numeric ranges,
> you'll have to split the value out to something that's really numeric.
>
> Best,
> Erick
>
> On Thu, May 4, 2017 at 10:55 PM, Aman Deep Singh
> <amandeep.coo...@gmail.com> wrote:
>> Hi,
>> I'm facing a issue when i'm querying the Solr
>> my query is "xiomi Mi 5 -white [64GB/ 3GB]"
>> while my search field definition is
>>
>>   > autoGeneratePhraseQueries="false" positionIncrementGap="100">
>> 
>>   
>>   > pattern="[^\dA-Za-z ]" replacement=" "/>
>>   > catenateNumbers="1" generateNumberParts="1" splitOnCaseChange="1"
>> generateWordParts="1" preserveOriginal="1" catenateAll="1"
>> catenateWords="1"/>
>>   
>> 
>> 
>>   
>>managed="synonyms_gdn"/>
>>   > pattern="[^\dA-Za-z _]" replacement=" "/>
>>   > catenateNumbers="0" generateNumberParts="1" splitOnCaseChange="1"
>> generateWordParts="1" splitOnNumerics="1" preserveOriginal="0"
>> catenateAll="0" catenateWords="0"/>
>>   
>> 
>>   
>>
>>
>> My generated query is
>>
>>
>> +(((Synonym(nameSearch:xiaomi nameSearch:xiomi)) (nameSearch:mi)
>> (nameSearch:5) -(Synonym(nameSearch:putih
>> nameSearch:white))*(nameSearch:[64gb/ TO 3gb])*)~4)
>>
>>
>> Now due to automatic conversion of query  to Range query i'm not able
>> to find the result
>>
>>
>> Solr Version-6.4.2
>>
>> Parser- edismax
>>
>> Thanks,
>>
>> Aman Deep Singh


Re: Automatic conversion to Range Query

2017-05-05 Thread Aman Deep Singh
Hi Erick,
I don't want to do a range query;
that is why I'm using the pattern replace filter to replace all non-alphanumeric
characters with spaces, so this type of situation doesn't arise, since the end
user can query anything. Also, the query doesn't mention any range-related
keyword (TO).
If my query is [64GB/3GB], it works fine and doesn't convert to a range
query.

Thanks,
Aman deep singh

On 06-May-2017 12:04 AM, "Erick Erickson" <erickerick...@gmail.com> wrote:

I'm going to go a little sideways and claim this is an "XY" problem,
the range bits are a side-issue. The problem is that you're trying to
do ranges on textual data that are really numbers. So even if there's
a way to fix the range issue you're talking about, it still won't do
what you expect.

Consider
[300 TO 4] is perfectly valid for _character_ based data. At least
it'll match values like 31, 32, 39. That's not what a numeric sort
would expect though. If you really want to search on numeric ranges,
you'll have to split the value out to something that's really numeric.

Best,
Erick

On Thu, May 4, 2017 at 10:55 PM, Aman Deep Singh
<amandeep.coo...@gmail.com> wrote:
> Hi,
> I'm facing a issue when i'm querying the Solr
> my query is "xiomi Mi 5 -white [64GB/ 3GB]"
> while my search field definition is
>
>autoGeneratePhraseQueries="false" positionIncrementGap="100">
> 
>   
>pattern="[^\dA-Za-z ]" replacement=" "/>
>catenateNumbers="1" generateNumberParts="1" splitOnCaseChange="1"
> generateWordParts="1" preserveOriginal="1" catenateAll="1"
> catenateWords="1"/>
>   
> 
> 
>   
>   
>pattern="[^\dA-Za-z _]" replacement=" "/>
>catenateNumbers="0" generateNumberParts="1" splitOnCaseChange="1"
> generateWordParts="1" splitOnNumerics="1" preserveOriginal="0"
> catenateAll="0" catenateWords="0"/>
>   
> 
>   
>
>
> My generated query is
>
>
> +(((Synonym(nameSearch:xiaomi nameSearch:xiomi)) (nameSearch:mi)
> (nameSearch:5) -(Synonym(nameSearch:putih
> nameSearch:white))*(nameSearch:[64gb/ TO 3gb])*)~4)
>
>
> Now due to automatic conversion of query  to Range query i'm not able
> to find the result
>
>
> Solr Version-6.4.2
>
> Parser- edismax
>
> Thanks,
>
> Aman Deep Singh


Automatic conversion to Range Query

2017-05-04 Thread Aman Deep Singh
Hi,
I'm facing an issue when querying Solr.
My query is "xiomi Mi 5 -white [64GB/ 3GB]",
while my search field definition is

  

  
  
  
  


  
  
  
  
  

  


My generated query is


+(((Synonym(nameSearch:xiaomi nameSearch:xiomi)) (nameSearch:mi)
(nameSearch:5) -(Synonym(nameSearch:putih
nameSearch:white))*(nameSearch:[64gb/ TO 3gb])*)~4)


Now, due to the automatic conversion of the query to a range query, I'm not able
to find the result.


Solr Version-6.4.2

Parser- edismax

Thanks,

Aman Deep Singh
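As Erick explains elsewhere in this thread, even a well-formed range over a text field compares terms lexicographically, not numerically. A quick sketch of that term-range comparison (inclusive bounds assumed) shows why `[300 TO 4]` matches `31` on a string field:

```python
def in_term_range(value: str, low: str, high: str) -> bool:
    """Lexicographic range test, the way a term range over a string
    field is evaluated (inclusive bounds)."""
    return low <= value <= high

# "31", "32", "39" all sort between "300" and "4" as strings,
# even though 31 < 300 numerically.
for v in ["31", "32", "39", "5"]:
    print(v, in_term_range(v, "300", "4"))
```

This is why fixing the range-parsing issue alone would still give surprising matches; numeric ranges need a numeric field type.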


RE: Solr Shingle is not working properly in solr 6.5.0

2017-04-05 Thread Aman Deep Singh
Thanks Steve, Markus.

On 06-Apr-2017 3:26 AM, "Markus Jelsma" <markus.jel...@openindex.io> wrote:

Hello Steve - that will do the job. I am sure it will be well documented in
the reference docs/cwiki as well, so we all can look this up later.

Many thanks,
Markus



-Original message-
> From:Steve Rowe <sar...@gmail.com>
> Sent: Wednesday 5th April 2017 23:50
> To: solr-user@lucene.apache.org
> Subject: Re: Solr Shingle is not working properly in solr 6.5.0
>
> Hi Markus,
>
> Here’s what I included in 6.5.1’s CHANGES.txt (as well as on branch_6x
and master, so it’ll be included in future releases’ CHANGES.txt too):
>
> -
> * SOLR-10423: Disable graph query production via schema configuration
.
>   This fixes broken queries for ShingleFilter-containing query-time
analyzers when request param sow=false.
>   (Steve Rowe)
> -
>
> --
> Steve
> www.lucidworks.com
>
> > On Apr 5, 2017, at 5:43 PM, Markus Jelsma <markus.jel...@openindex.io>
wrote:
> >
> > Steve - please include a broad description of this feature in the next
CHANGES.txt. I will forget about this thread but need to be reminded of why
i could need it :)
> >
> > Thanks,
> > Markus
> >
> >
> > -Original message-
> >> From:Steve Rowe <sar...@gmail.com>
> >> Sent: Wednesday 5th April 2017 23:26
> >> To: solr-user@lucene.apache.org
> >> Subject: Re: Solr Shingle is not working properly in solr 6.5.0
> >>
> >> Aman,
> >>
> >> In forthcoming Solr 6.5.1, this problem will be addressed by setting a
new  option named “enableGraphQueries” to “false".
> >>
> >> Your fieldtype will look like this:
> >>
> >> -
> >> 
> >>  
> >>
> >>
> >>  
> >> 
> >> -
> >>
> >> --
> >> Steve
> >> www.lucidworks.com
> >>
> >>> On Apr 4, 2017, at 5:32 PM, Steve Rowe <sar...@gmail.com> wrote:
> >>>
> >>> Hi Aman,
> >>>
> >>> I’ve created <https://issues.apache.org/jira/browse/SOLR-10423> for
this problem.
> >>>
> >>> --
> >>> Steve
> >>> www.lucidworks.com
> >>>
> >>>> On Mar 31, 2017, at 7:34 AM, Aman Deep Singh <
amandeep.coo...@gmail.com> wrote:
> >>>>
> >>>> Hi Rich,
> >>>> Query creation is correct only thing what causing the problem is that
> >>>> Boolean + query while building the lucene query which causing all
tokens to
> >>>> be matched in the document (equivalent of mm=100%) even though I use
mm=1
> >>>> it was using BOOLEAN + query as
> >>>> normal query one plus one abc
> >>>> Lucene query -
> >>>> +(((+nameShingle:one plus +nameShingle:plus one +nameShingle:one
abc))
> >>>> ((+nameShingle:one plus +nameShingle:plus one abc))
((+nameShingle:one plus
> >>>> one +nameShingle:one abc)) (nameShingle:one plus one abc))
> >>>>
> >>>> Now since my doc contains only one plus one thus --
> >>>> one plus ,plus one, one plus one
> >>>> thus due to Boolean + it was not matching.
> >>>> Thanks,
> >>>> Aman Deep Singh
> >>>>
> >>>> On Fri, Mar 31, 2017 at 4:41 PM Rick Leir <rl...@leirtech.com> wrote:
> >>>>
> >>>>> Hi Aman
> >>>>> Did you try the Admin Analysis tool? It will show you which filters
are
> >>>>> effective at index and query time. It will help you understand why
you are
> >>>>> not getting a mach.
> >>>>> Cheers -- Rick
> >>>>>
> >>>>> On March 31, 2017 2:36:33 AM EDT, Aman Deep Singh <
> >>>>> amandeep.coo...@gmail.com> wrote:
> >>>>>> Hi,
> >>>>>> I was trying to use the shingle filter but it was not creating the
> >>>>>> query as
> >>>>>> desirable.
> >>>>>>
> >>>>>> my schema is
> >>>>>>  >>>>>> positionIncrementGap=
> >>>>>> "100">  
> >>>>>>  >>>>>> class="solr.ShingleFilterFactory" outputUnigrams="false"
> >>>>>> maxShingleSize="4"
> >>>>>> />  
> >>>>>> 
> >>>>>>  >>>>>> st
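The mail archive stripped the XML from Steve's example above, so here is a hypothetical reconstruction of a fieldType using the `enableGraphQueries="false"` attribute from SOLR-10423 (Solr 6.5.1+). The field name and filter parameters are assumptions; only the attribute itself comes from the thread.

```xml
<fieldType name="text_shingle" class="solr.TextField"
           positionIncrementGap="100" enableGraphQueries="false">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- Shingles only, no unigrams, as discussed in this thread -->
    <filter class="solr.ShingleFilterFactory" minShingleSize="2"
            maxShingleSize="4" outputUnigrams="false"/>
  </analyzer>
</fieldType>
```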

Re: Solr Shingle is not working properly in solr 6.5.0

2017-03-31 Thread Aman Deep Singh
Hi Rick,
Query creation is correct; the only thing causing the problem is the
Boolean + operator applied while building the Lucene query, which requires all
tokens to be matched in the document (equivalent of mm=100%). Even though I use
mm=1, it was still using a BOOLEAN + query.
Normal query: one plus one abc
Lucene query -
+(((+nameShingle:one plus +nameShingle:plus one +nameShingle:one abc))
((+nameShingle:one plus +nameShingle:plus one abc)) ((+nameShingle:one plus
one +nameShingle:one abc)) (nameShingle:one plus one abc))

Now, since my doc contains only "one plus one", its shingles are:
one plus, plus one, one plus one.
Thus, due to the Boolean +, it was not matching.
Thanks,
Aman Deep Singh

On Fri, Mar 31, 2017 at 4:41 PM Rick Leir <rl...@leirtech.com> wrote:

> Hi Aman
> Did you try the Admin Analysis tool? It will show you which filters are
> effective at index and query time. It will help you understand why you are
> not getting a mach.
> Cheers -- Rick
>
> On March 31, 2017 2:36:33 AM EDT, Aman Deep Singh <
> amandeep.coo...@gmail.com> wrote:
> >Hi,
> >I was trying to use the shingle filter but it was not creating the
> >query as
> >desirable.
> >
> >my schema is
> > >positionIncrementGap=
> >"100">  
> > >class="solr.ShingleFilterFactory" outputUnigrams="false"
> >maxShingleSize="4"
> >/>  
> >
> > >stored="true"/>
> >
> >my solr query is
> >
> http://localhost:8983/solr/productCollection/select?defType=edismax=true=one%20plus%20one%20four=nameShingle;
> >*sow=false*=xml
> >
> >and it was creating the parsed query as
> >
> >(+(DisjunctionMaxQuery(((+nameShingle:one plus +nameShingle:plus one
> >+nameShingle:one four))) DisjunctionMaxQuery(((+nameShingle:one plus
> >+nameShingle:plus one four))) DisjunctionMaxQuery(((+nameShingle:one
> >plus
> >one +nameShingle:one four))) DisjunctionMaxQuery((nameShingle:one plus
> >one
> >four)))~1)/no_coord
> >
> >
> >*++nameShingle:one plus +nameShingle:plus one +nameShingle:one
> >four))
> >((+nameShingle:one plus +nameShingle:plus one four)) ((+nameShingle:one
> >plus one +nameShingle:one four)) (nameShingle:one plus one four))~1)*
> >
> >
> >
> >So ideally token creations is perfect but in the query it is using
> >boolean + operator which is causing the problem as if i have a document
> >with name as
> >"one plus one" ,according to the shingles it has to matched as its
> >token
> >will be  ("one plus","one plus one","plus one") .
> >I have tried using the q.op and played around the mm also but nothing
> >is
> >giving me the correct response.
> >Any idea how i can fetch that document even if the document is missing
> >any
> >token.
> >
> >My expected response will be getting the document
> >"one plus one" even the user query has any additional term like "one
> >plus
> >one two" and so on.
> >
> >
> >Thanks,
> >Aman Deep Singh
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.


Solr Shingle is not working properly in solr 6.5.0

2017-03-31 Thread Aman Deep Singh
Hi,
I was trying to use the shingle filter, but it was not creating the query as
desired.

my schema is
  


my solr query is
http://localhost:8983/solr/productCollection/select?defType=edismax=true=one%20plus%20one%20four=nameShingle;
*sow=false*=xml

and it was creating the parsed query as

(+(DisjunctionMaxQuery(((+nameShingle:one plus +nameShingle:plus one
+nameShingle:one four))) DisjunctionMaxQuery(((+nameShingle:one plus
+nameShingle:plus one four))) DisjunctionMaxQuery(((+nameShingle:one plus
one +nameShingle:one four))) DisjunctionMaxQuery((nameShingle:one plus one
four)))~1)/no_coord


*++nameShingle:one plus +nameShingle:plus one +nameShingle:one four))
((+nameShingle:one plus +nameShingle:plus one four)) ((+nameShingle:one
plus one +nameShingle:one four)) (nameShingle:one plus one four))~1)*



So ideally token creation is perfect, but the query is using the
Boolean + operator, which is causing the problem: if I have a document
with the name
"one plus one", according to the shingles it should match, as its tokens
will be ("one plus", "one plus one", "plus one").
I have tried using q.op and played around with mm as well, but nothing is
giving me the correct response.
Any idea how I can fetch that document even if the document is missing a
token?

My expected response is to get the document
"one plus one" even when the user query has an additional term, like "one plus
one two" and so on.


Thanks,
Aman Deep Singh
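The shingle tokens discussed above are easy to reproduce by hand. A sketch of word-shingle generation, assuming the settings from this thread (minShingleSize=2, maxShingleSize=4, outputUnigrams="false"), shows why the document "one plus one" indexes only three terms while the query "one plus one four" produces shingles such as "plus one four" that the document never contains, so a set of required (+) clauses can never all match:

```python
def shingles(text: str, min_size: int = 2, max_size: int = 4) -> list[str]:
    """Word shingles in the style of solr.ShingleFilterFactory with
    outputUnigrams="false": every run of min_size..max_size adjacent
    whitespace tokens, joined by a single space."""
    tokens = text.split()
    out = []
    for start in range(len(tokens)):
        for size in range(min_size, max_size + 1):
            if start + size <= len(tokens):
                out.append(" ".join(tokens[start:start + size]))
    return out

print(shingles("one plus one"))       # the indexed document
print(shingles("one plus one four"))  # the user query
```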


Re: unable to get more throughput with more threads

2017-03-23 Thread Aman Deep Singh
You can play with the merge factor in the index config.
If there are no frequent updates, then make it 2; it will give you high
throughput and lower latency.

On 24-Mar-2017 8:22 AM, "Zheng Lin Edwin Yeo"  wrote:

> I also did find that beyond 10 threads for 8GB heap size , there isn't much
> improvement with the performance. But you can increase your heap size a
> little if your system allows it.
>
> By the way, which Solr version are you using?
>
> Regards,
> Edwin
>
>
> On 24 March 2017 at 09:21, Matt Magnusson  wrote:
>
> > Out of curosity, what is your index size? I'm trying to do something
> > similar with maximizing output, I'm currently looking at streaming
> > expressions which I'm seeing some interesting results for, I'm also
> > finding that the direct mass query route seems to hit a wall for
> > performance. I'm also finding that about 10 threads seems to be an
> > optimum number.
> >
> > On Thu, Mar 23, 2017 at 8:10 PM, Suresh Pendap 
> > wrote:
> > > Hi,
> > > I am new to SOLR search engine technology and I am trying to get some
> > performance numbers to get maximum throughput from the SOLR cluster of a
> > given size.
> > > I am currently doing only query load testing in which I randomly fire a
> > bunch of queries to the SOLR cluster to generate the query load.  I
> > understand that it is not the ideal workload as the
> > > ingestion and commits happening invalidate the Solr Caches, so it is
> > advisable to perform query load along with some documents being ingested.
> > >
> > > The SOLR cluster was made up of 2 shards and 2 replicas. So there were
> > total 4 replicas serving the queries. The SOLR nodes were running on an
> LXD
> > container with 12 cores and 88GB RAM.
> > > The heap size allocated was 8g min and 8g max. All the other SOLR
> > configurations were default.
> > >
> > > The client node was running on an 8 core VM.
> > >
> > > I performed the test with 1 thread, 10 client threads and 50 client
> > threads.  I noticed that as I increased the number of threads, the query
> > latency kept increasing drastically which I was not expecting.
> > >
> > > Since my initial test was randomly picking queries from a file, I
> > decided to keep things constant and ran the program which fired the same
> > query again and again. Since it is the same query, all the documents will
> > > be in the Cache and the query response time should be very fast. I was
> > also expecting that with 10 or 50 client threads, the query latency
> should
> > not be increasing.
> > >
> > > The throughput increased only up to 10 client threads but then it was
> > same for 50 threads, 100 threads and the latency of the query kept
> > increasing as I increased the number of threads.
> > > The query was returning 2 documents only.
> > >
> > > The table below summarizes the numbers that I was saying with a single
> > query.
> > >
> > >
> > >
> > >
> > >
> > > #No of Client Nodes
> > > #No of Threads  99 pct Latency  95 pct latency  throughput
> > CPU Utilization Server Configuration
> > >
> > > 1   1   9 ms7 ms180 reqs/sec8%
> > >
> > > Heap size: ms=8g, mx=8g
> > >
> > > default configuration
> > >
> > >
> > > 1   10  400 ms  360 ms  360 reqs/sec10%
> > >
> > > Heap size: ms=8g, mx=8g
> > >
> > > default configuration
> > >
> > >
> > >
> > >
> > > I also ran the client program on the SOLR server node in order to rule
> > our the network latency factor. On the server node also the response time
> > was higher for 10 threads, but the amplification was smaller.
> > >
> > > I am getting an impression that probably my query requests are getting
> > queued up and limited due to probably some thread pool size on the server
> > side.  However I saw that the default jetty.xml does
> > > have the thread pool of min size of 10 and  max of 1.
> > >
> > > Is there any other internal SOLR thread pool configuration which might
> > be limiting the query response time?
> > >
> > > I wanted to check with the community if what I am seeing is abnormal
> > behavior, what could be the issue?  Is there any configuration that I can
> > tweak to get better query response times for more load?
> > >
> > > Regards
> > > Suresh
> > >
> >
>
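The merge tuning suggested at the top of this thread lives in solrconfig.xml. On Solr 6.x the equivalent of a low merge factor is configured through the merge policy factory; the values below are illustrative assumptions, not recommendations:

```xml
<indexConfig>
  <!-- Fewer segments per tier means fewer segments to search at query
       time, at the cost of more merge work while indexing. -->
  <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
    <int name="maxMergeAtOnce">2</int>
    <int name="segmentsPerTier">2</int>
  </mergePolicyFactory>
</indexConfig>
```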


Re: Solr shingles is not working in solr 6.4.0

2017-03-21 Thread Aman Deep Singh
I found a workaround
after configuring the field type as

  

So after giving the query as *one\ plus\ one*, it started creating the
shingles, but to use that I have to escape the spaces in the query,
which causes some problems in other fields. Any way to overcome that?


On Fri, Mar 17, 2017 at 9:58 AM Aman Deep Singh <amandeep.coo...@gmail.com>
wrote:

> I also tried in 5.2.1
> for the query
>
> http://localhost:8984/solr/test/select?q=TITLE_SH:one\%20plus\%20one=xml=true
> <http://localhost:8984/solr/test/select?q=TITLE_SH:one%5C%20plus%5C%20one=xml=true>
>
> 
> 
> 0
> 1
> 
> TITLE_SH:one\ plus\ one
> xml
> true
> 
> 
> 
> 
> TITLE_SH:one\ plus\ one
> TITLE_SH:one\ plus\ one
> 
> *((TITLE_SH:one plus TITLE_SH:one plus one)/no_coord) TITLE_SH:plus one*
> 
> 
> (TITLE_SH:one plus TITLE_SH:one plus one) TITLE_SH:plus one
> 
> 
> LuceneQParser
>
>
> while in the solr 4.3.1
> query
>
> http://localhost:8983/solr/collection1/select?q=text_sh:one\%20plus\%20one=xml=true
> <http://localhost:8983/solr/collection1/select?q=text_sh:one%5C%20plus%5C%20one=xml=true>
>
> output is like
> 
> 
> 0
> 2
> 
> text_sh:one\ plus\ one
> xml
> true
> 
> 
> 
> 
> text_sh:one\ plus\ one
> text_sh:one\ plus\ one
> 
> (text_sh:one plus text_sh:one plus one text_sh:plus one)/no_coord
> 
> 
> *text_sh:one plus text_sh:one plus one text_sh:plus one*
> 
> 
> LuceneQParser
>
> On Fri, Mar 17, 2017 at 9:50 AM Shawn Heisey <apa...@elyograg.org> wrote:
>
> On 3/16/2017 1:40 PM, Alexandre Rafalovitch wrote:
> > Oh. Try your query with quotes around the phone phrase:
> > q="one plus one"
>
> That query with the fieldType the user supplied produces this, on 6.3.0
> with the lucene parser:
>
> "querystring":"test:\"one plus one\"",
> "parsedquery":"MultiPhraseQuery(test:\"(one plus one plus one) plus
> one\")", Looks a little odd, but maybe it's correct.
> > My hypothesis is:
> > Query parser splits things on whitespace before passing it down into
> > analyzer chain as individual match attempts. The Analysis UI does not
> > take that into account and treats the whole string as phrase sent. You
> > say
> > outputUnigrams="false" outputUnigramsIfNoShingles="false"
> > So, every single token during the query gets ignored because there is
> > nothing for it to shingle with.
>
> Might be that.
>
> If I change both of those unigram options to "true" then this is what I
> see (also on 6.3.0, q.op is AND):
>
> "querystring":"test:(one plus one)", "parsedquery":"+test:one +test:plus
> +test:one",
>
> The really mystifying thing is ... it works on the analysis page.  The
> whitespace tokenizer should (in theory at least) produce the same tokens
> on the analysis page as the query parser does before analysis, so I have
> no idea why analysis and query produce different results.  During query
> analysis, the whitespace tokenizer should basically be a no-op, because
> the input has already been tokenized.
>
> If I change the analysis to this (keyword instead of whitespace):
>
> 
>   
>   
>maxShingleSize="5"
>  outputUnigrams="false"
> outputUnigramsIfNoShingles="false" />
> 
>
> Then the behavior is unchanged:
>
> "querystring":"test:(one plus one)", "parsedquery":"",
>
> > I am not sure why it would have worked in Solr 4.
>
> I just tried it on on 4.9-SNAPSHOT, compiled 2015-05-20 from SVN
> revision 1680667, and it doesn't work.  I don't remember whether this
> was compiled from branch_4x or from the 4.9 branch.  Before that test, I
> had tried back to 5.2.1 with the same results:
>
> "querystring": "test:(one plus one)", "parsedquery": "", Thanks,
> Shawn
>
>


Re: Solr shingles is not working in solr 6.4.0

2017-03-16 Thread Aman Deep Singh
I also tried in 5.2.1
for the query
http://localhost:8984/solr/test/select?q=TITLE_SH:one\%20plus\%20one=xml=true



0
1

TITLE_SH:one\ plus\ one
xml
true




TITLE_SH:one\ plus\ one
TITLE_SH:one\ plus\ one

*((TITLE_SH:one plus TITLE_SH:one plus one)/no_coord) TITLE_SH:plus one*


(TITLE_SH:one plus TITLE_SH:one plus one) TITLE_SH:plus one


LuceneQParser


while in the solr 4.3.1
query
http://localhost:8983/solr/collection1/select?q=text_sh:one\%20plus\%20one=xml=true

output is like


0
2

text_sh:one\ plus\ one
xml
true




text_sh:one\ plus\ one
text_sh:one\ plus\ one

(text_sh:one plus text_sh:one plus one text_sh:plus one)/no_coord


*text_sh:one plus text_sh:one plus one text_sh:plus one*


LuceneQParser

On Fri, Mar 17, 2017 at 9:50 AM Shawn Heisey  wrote:

> On 3/16/2017 1:40 PM, Alexandre Rafalovitch wrote:
> > Oh. Try your query with quotes around the phone phrase:
> > q="one plus one"
>
> That query with the fieldType the user supplied produces this, on 6.3.0
> with the lucene parser:
>
> "querystring":"test:\"one plus one\"",
> "parsedquery":"MultiPhraseQuery(test:\"(one plus one plus one) plus
> one\")", Looks a little odd, but maybe it's correct.
> > My hypothesis is:
> > Query parser splits things on whitespace before passing it down into
> > analyzer chain as individual match attempts. The Analysis UI does not
> > take that into account and treats the whole string as phrase sent. You
> > say
> > outputUnigrams="false" outputUnigramsIfNoShingles="false"
> > So, every single token during the query gets ignored because there is
> > nothing for it to shingle with.
>
> Might be that.
>
> If I change both of those unigram options to "true" then this is what I
> see (also on 6.3.0, q.op is AND):
>
> "querystring":"test:(one plus one)", "parsedquery":"+test:one +test:plus
> +test:one",
>
> The really mystifying thing is ... it works on the analysis page.  The
> whitespace tokenizer should (in theory at least) produce the same tokens
> on the analysis page as the query parser does before analysis, so I have
> no idea why analysis and query produce different results.  During query
> analysis, the whitespace tokenizer should basically be a no-op, because
> the input has already been tokenized.
>
> If I change the analysis to this (keyword instead of whitespace):
>
> 
>   
>   
>maxShingleSize="5"
>  outputUnigrams="false"
> outputUnigramsIfNoShingles="false" />
> 
>
> Then the behavior is unchanged:
>
> "querystring":"test:(one plus one)", "parsedquery":"",
>
> > I am not sure why it would have worked in Solr 4.
>
> I just tried it on on 4.9-SNAPSHOT, compiled 2015-05-20 from SVN
> revision 1680667, and it doesn't work.  I don't remember whether this
> was compiled from branch_4x or from the 4.9 branch.  Before that test, I
> had tried back to 5.2.1 with the same results:
>
> "querystring": "test:(one plus one)", "parsedquery": "", Thanks,
> Shawn
>
>


Re: Solr shingles is not working in solr 6.4.0

2017-03-16 Thread Aman Deep Singh
No it doesn't work

On 17-Mar-2017 8:38 AM, "Alexandre Rafalovitch" <arafa...@gmail.com> wrote:

> Which is what I believe you had as a working example in your Dropbox
> images.
>
> So, does it work now?
>
> Regards,
>Alex.
> 
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
>
> On 16 March 2017 at 22:22, Aman Deep Singh <amandeep.coo...@gmail.com>
> wrote:
> > If I give query in quotes it converted query in to graph query as
> >
> > Graph(cust_sh6:"one plus one" hasBoolean=false hasPhrase=false)
> >
> >
> > On 17-Mar-2017 1:38 AM, "Alexandre Rafalovitch" <arafa...@gmail.com>
> wrote:
> >
> >> Oh. Try your query with quotes around the phone phrase:
> >> q="one plus one"
> >>
> >> My hypothesis is:
> >> Query parser splits things on whitespace before passing it down into
> >> analyzer chain as individual match attempts. The Analysis UI does not
> >> take that into account and treats the whole string as phrase sent. You
> >> say
> >> outputUnigrams="false" outputUnigramsIfNoShingles="false"
> >> So, every single token during the query gets ignored because there is
> >> nothing for it to shingle with.
> >>
> >> I am not sure why it would have worked in Solr 4.
> >>
> >> Regards,
> >>Alex.
> >> 
> >> http://www.solr-start.com/ - Resources for Solr users, new and
> experienced
> >>
> >>
> >> On 16 March 2017 at 13:06, Aman Deep Singh <amandeep.coo...@gmail.com>
> >> wrote:
> >> > For images dropbox url is
> >> > https://www.dropbox.com/sh/6dy6a8ajabjtxrt/
> >> AAAoxhZQe2vp3sTl3Av71_eHa?dl=0
> >> >
> >> >
> >> > On Thu, Mar 16, 2017 at 10:29 PM Aman Deep Singh <
> >> amandeep.coo...@gmail.com>
> >> > wrote:
> >> >
> >> >> Yes I have reloaded the core after config changes
> >> >>
> >> >>
> >> >> On 16-Mar-2017 10:28 PM, "Alexandre Rafalovitch" <arafa...@gmail.com
> >
> >> >> wrote:
> >> >>
> >> >> Images do not come through.
> >> >>
> >> >> But I was wrong too. You use eDismax and pass "cust_shingle" in, so
> >> >> the "df" value is irrelevant.
> >> >>
> >> >> You definitely reloaded the core after changing definitions?
> >> >> 
> >> >> http://www.solr-start.com/ - Resources for Solr users, new and
> >> experienced
> >> >>
> >> >>
> >> >> On 16 March 2017 at 12:37, Aman Deep Singh <
> amandeep.coo...@gmail.com>
> >> >> wrote:
> >> >> > Already check that i am sending sceenshots of various senarios
> >> >> >
> >> >> >
> >> >> > On Thu, Mar 16, 2017 at 7:46 PM Alexandre Rafalovitch <
> >> >> arafa...@gmail.com>
> >> >> > wrote:
> >> >> >>
> >> >> >> Sanity check. Is your 'df' pointing at the field you think it is
> >> >> >> pointing at? It really does look like all tokens were eaten and
> >> >> >> nothing was left. But you should have seen that in the Analysis
> >> screen
> >> >> >> too, if you have the right field.
> >> >> >>
> >> >> >> Try adding echoParams=all to your request to see the full final
> >> >> >> parameter list. Maybe some parameters in initParams sections
> override
> >> >> >> your assumed config.
> >> >> >>
> >> >> >> Regards,
> >> >> >>Alex.
> >> >> >> 
> >> >> >> http://www.solr-start.com/ - Resources for Solr users, new and
> >> >> experienced
> >> >> >>
> >> >> >>
> >> >> >> On 16 March 2017 at 08:30, Aman Deep Singh <
> >> amandeep.coo...@gmail.com>
> >> >> >> wrote:
> >> >> >> > Hi,
> >> >> >> >
> >> >> >> > Recently I migrated from solr 4 to 6
> >> >> >> > IN solr 4 shinglefilterfactory is working correctly
> >> >> >> > my configration  i
> >> >> >> >
> >> >> >>

Re: Solr shingles is not working in solr 6.4.0

2017-03-16 Thread Aman Deep Singh
If I give query in quotes it converted query in to graph query as

Graph(cust_sh6:"one plus one" hasBoolean=false hasPhrase=false)


On 17-Mar-2017 1:38 AM, "Alexandre Rafalovitch" <arafa...@gmail.com> wrote:

> Oh. Try your query with quotes around the phone phrase:
> q="one plus one"
>
> My hypothesis is:
> Query parser splits things on whitespace before passing it down into
> analyzer chain as individual match attempts. The Analysis UI does not
> take that into account and treats the whole string as phrase sent. You
> say
> outputUnigrams="false" outputUnigramsIfNoShingles="false"
> So, every single token during the query gets ignored because there is
> nothing for it to shingle with.
>
> I am not sure why it would have worked in Solr 4.
>
> Regards,
>Alex.
> 
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
>
> On 16 March 2017 at 13:06, Aman Deep Singh <amandeep.coo...@gmail.com>
> wrote:
> > For images dropbox url is
> > https://www.dropbox.com/sh/6dy6a8ajabjtxrt/
> AAAoxhZQe2vp3sTl3Av71_eHa?dl=0
> >
> >
> > On Thu, Mar 16, 2017 at 10:29 PM Aman Deep Singh <
> amandeep.coo...@gmail.com>
> > wrote:
> >
> >> Yes I have reloaded the core after config changes
> >>
> >>
> >> On 16-Mar-2017 10:28 PM, "Alexandre Rafalovitch" <arafa...@gmail.com>
> >> wrote:
> >>
> >> Images do not come through.
> >>
> >> But I was wrong too. You use eDismax and pass "cust_shingle" in, so
> >> the "df" value is irrelevant.
> >>
> >> You definitely reloaded the core after changing definitions?
> >> 
> >> http://www.solr-start.com/ - Resources for Solr users, new and
> experienced
> >>
> >>
> >> On 16 March 2017 at 12:37, Aman Deep Singh <amandeep.coo...@gmail.com>
> >> wrote:
> >> > Already checked that; I am sending screenshots of various scenarios.
> >> >
> >> >
> >> > On Thu, Mar 16, 2017 at 7:46 PM Alexandre Rafalovitch <
> >> arafa...@gmail.com>
> >> > wrote:
> >> >>
> >> >> Sanity check. Is your 'df' pointing at the field you think it is
> >> >> pointing at? It really does look like all tokens were eaten and
> >> >> nothing was left. But you should have seen that in the Analysis
> screen
> >> >> too, if you have the right field.
> >> >>
> >> >> Try adding echoParams=all to your request to see the full final
> >> >> parameter list. Maybe some parameters in initParams sections override
> >> >> your assumed config.
> >> >>
> >> >> Regards,
> >> >>Alex.
> >> >> 
> >> >> http://www.solr-start.com/ - Resources for Solr users, new and
> >> experienced
> >> >>
> >> >>
> >> >> On 16 March 2017 at 08:30, Aman Deep Singh <
> amandeep.coo...@gmail.com>
> >> >> wrote:
> >> >> > Hi,
> >> >> >
> >> >> > Recently I migrated from Solr 4 to 6.
> >> >> > In Solr 4 the ShingleFilterFactory is working correctly; my configuration is:
> >> >> >
> >> >> > <fieldType name="…" class="solr.TextField" positionIncrementGap="100">
> >> >> >   <analyzer type="index">
> >> >> >     <tokenizer class="…"/>
> >> >> >     <filter class="solr.ShingleFilterFactory" minShingleSize="2"
> >> >> >             maxShingleSize="5" outputUnigrams="false"
> >> >> >             outputUnigramsIfNoShingles="false"/>
> >> >> >   </analyzer>
> >> >> >   <analyzer type="query">
> >> >> >     <tokenizer class="…"/>
> >> >> >     <filter class="solr.ShingleFilterFactory" minShingleSize="2"
> >> >> >             maxShingleSize="5" outputUnigrams="false"
> >> >> >             outputUnigramsIfNoShingles="false"/>
> >> >> >   </analyzer>
> >> >> > </fieldType>
> >> >> >
> >> >> > But after updating to Solr 6 shingles are not working; the schema is as below:
> >> >> >
> >> >> > <fieldType name="…" class="solr.TextField" positionIncrementGap="100">
> >> >> >   <analyzer type="index">
> >> >> >     <tokenizer class="…"/>
> >> >> >     <filter class="solr.ShingleFilterFactory" minShingleSize="2"
> >> >> >             maxShingleSize="5" outputUnigrams="false"
> >> >> >             outputUnigramsIfNoShingles="false"/>
> >> >> >   </analyzer>
> >> >> >   <analyzer type="query">
> >> >> >     <tokenizer class="…"/>
> >> >> >     <filter class="solr.ShingleFilterFactory" minShingleSize="2"
> >> >> >             maxShingleSize="5" outputUnigrams="false"
> >> >> >             outputUnigramsIfNoShingles="false"/>
> >> >> >   </analyzer>
> >> >> > </fieldType>
> >> >> >
> >> >> > Although the Analysis tab was showing the proper shingle result, when
> >> >> > used through the query parser it was not giving proper results.
> >> >> >
> >> >> > My sample hit is
> >> >> >
> >> >> > http://localhost:8983/solr/shingel_test/select?q=one%20plus%20one=xml=true=edismax=cust_shingle
> >> >> >
> >> >> > It creates the parsed query as
> >> >> >
> >> >> > rawquerystring: one plus one
> >> >> > querystring: one plus one
> >> >> > parsedquery: (+())/no_coord
> >> >> > parsedquery_toString: +()
> >> >> > QParser: ExtendedDismaxQParser
> >>
> >>
> >>
>
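Alexandre's hypothesis above can be checked with a toy model. The sketch below is plain Python, not Lucene; the function is a simplified stand-in for ShingleFilter, written only to illustrate why outputUnigrams="false" leaves nothing when the query parser splits on whitespace and analyzes one token at a time:

```python
def shingles(tokens, min_size=2, max_size=5, output_unigrams=False):
    """Simplified stand-in for Lucene's ShingleFilter (illustration only)."""
    out = list(tokens) if output_unigrams else []
    for size in range(min_size, max_size + 1):
        for i in range(len(tokens) - size + 1):
            out.append(" ".join(tokens[i:i + size]))
    return out

# Analysis screen: the whole string runs through the chain at once.
print(shingles(["one", "plus", "one"]))
# -> ['one plus', 'plus one', 'one plus one']

# Pre-split query parsing: each whitespace-separated token is analyzed
# alone, and a single token has nothing to shingle with.
print(shingles(["one"]) + shingles(["plus"]) + shingles(["one"]))
# -> []
```

With the phrase quoted, the parser hands the full string to the analyzer chain in one piece, which is consistent with the Graph(cust_sh6:"one plus one" ...) query reported at the top of this message.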


Re: Solr shingles is not working in solr 6.4.0

2017-03-16 Thread Aman Deep Singh
For images, the Dropbox URL is
https://www.dropbox.com/sh/6dy6a8ajabjtxrt/AAAoxhZQe2vp3sTl3Av71_eHa?dl=0


On Thu, Mar 16, 2017 at 10:29 PM Aman Deep Singh <amandeep.coo...@gmail.com>
wrote:

> Yes I have reloaded the core after config changes
>
>
> On 16-Mar-2017 10:28 PM, "Alexandre Rafalovitch" <arafa...@gmail.com>
> wrote:
>
> Images do not come through.
>
> But I was wrong too. You use eDismax and pass "cust_shingle" in, so
> the "df" value is irrelevant.
>
> You definitely reloaded the core after changing definitions?
> 
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
>
> On 16 March 2017 at 12:37, Aman Deep Singh <amandeep.coo...@gmail.com>
> wrote:
> > Already checked that; I am sending screenshots of various scenarios.
> >
> >
> > On Thu, Mar 16, 2017 at 7:46 PM Alexandre Rafalovitch <
> arafa...@gmail.com>
> > wrote:
> >>
> >> Sanity check. Is your 'df' pointing at the field you think it is
> >> pointing at? It really does look like all tokens were eaten and
> >> nothing was left. But you should have seen that in the Analysis screen
> >> too, if you have the right field.
> >>
> >> Try adding echoParams=all to your request to see the full final
> >> parameter list. Maybe some parameters in initParams sections override
> >> your assumed config.
> >>
> >> Regards,
> >>Alex.
> >> 
> >> http://www.solr-start.com/ - Resources for Solr users, new and
> experienced
> >>
> >>
> >> On 16 March 2017 at 08:30, Aman Deep Singh <amandeep.coo...@gmail.com>
> >> wrote:
> >> > Hi,
> >> >
> >> > Recently I migrated from Solr 4 to 6.
> >> > In Solr 4 the ShingleFilterFactory is working correctly; my configuration is:
> >> >
> >> > <fieldType name="…" class="solr.TextField" positionIncrementGap="100">
> >> >   <analyzer type="index">
> >> >     <tokenizer class="…"/>
> >> >     <filter class="solr.ShingleFilterFactory" minShingleSize="2"
> >> >             maxShingleSize="5" outputUnigrams="false"
> >> >             outputUnigramsIfNoShingles="false"/>
> >> >   </analyzer>
> >> >   <analyzer type="query">
> >> >     <tokenizer class="…"/>
> >> >     <filter class="solr.ShingleFilterFactory" minShingleSize="2"
> >> >             maxShingleSize="5" outputUnigrams="false"
> >> >             outputUnigramsIfNoShingles="false"/>
> >> >   </analyzer>
> >> > </fieldType>
> >> >
> >> > But after updating to Solr 6 shingles are not working; the schema is as below:
> >> >
> >> > <fieldType name="…" class="solr.TextField" positionIncrementGap="100">
> >> >   <analyzer type="index">
> >> >     <tokenizer class="…"/>
> >> >     <filter class="solr.ShingleFilterFactory" minShingleSize="2"
> >> >             maxShingleSize="5" outputUnigrams="false"
> >> >             outputUnigramsIfNoShingles="false"/>
> >> >   </analyzer>
> >> >   <analyzer type="query">
> >> >     <tokenizer class="…"/>
> >> >     <filter class="solr.ShingleFilterFactory" minShingleSize="2"
> >> >             maxShingleSize="5" outputUnigrams="false"
> >> >             outputUnigramsIfNoShingles="false"/>
> >> >   </analyzer>
> >> > </fieldType>
> >> >
> >> > Although the Analysis tab was showing the proper shingle result, when
> >> > used through the query parser it was not giving proper results.
> >> >
> >> > My sample hit is
> >> >
> >> > http://localhost:8983/solr/shingel_test/select?q=one%20plus%20one=xml=true=edismax=cust_shingle
> >> >
> >> > It creates the parsed query as
> >> >
> >> > rawquerystring: one plus one
> >> > querystring: one plus one
> >> > parsedquery: (+())/no_coord
> >> > parsedquery_toString: +()
> >> > QParser: ExtendedDismaxQParser
>
>
>


Re: Solr shingles is not working in solr 6.4.0

2017-03-16 Thread Aman Deep Singh
Yes I have reloaded the core after config changes

On 16-Mar-2017 10:28 PM, "Alexandre Rafalovitch" <arafa...@gmail.com> wrote:

Images do not come through.

But I was wrong too. You use eDismax and pass "cust_shingle" in, so
the "df" value is irrelevant.

You definitely reloaded the core after changing definitions?

http://www.solr-start.com/ - Resources for Solr users, new and experienced


On 16 March 2017 at 12:37, Aman Deep Singh <amandeep.coo...@gmail.com>
wrote:
> Already checked that; I am sending screenshots of various scenarios.
>
>
> On Thu, Mar 16, 2017 at 7:46 PM Alexandre Rafalovitch <arafa...@gmail.com>
> wrote:
>>
>> Sanity check. Is your 'df' pointing at the field you think it is
>> pointing at? It really does look like all tokens were eaten and
>> nothing was left. But you should have seen that in the Analysis screen
>> too, if you have the right field.
>>
>> Try adding echoParams=all to your request to see the full final
>> parameter list. Maybe some parameters in initParams sections override
>> your assumed config.
>>
>> Regards,
>>    Alex.
>> 
>> http://www.solr-start.com/ - Resources for Solr users, new and
experienced
>>
>>
>> On 16 March 2017 at 08:30, Aman Deep Singh <amandeep.coo...@gmail.com>
>> wrote:
>> > Hi,
>> >
>> > Recently I migrated from Solr 4 to 6.
>> > In Solr 4 the ShingleFilterFactory is working correctly; my configuration is:
>> >
>> > <fieldType name="…" class="solr.TextField" positionIncrementGap="100">
>> >   <analyzer type="index">
>> >     <tokenizer class="…"/>
>> >     <filter class="solr.ShingleFilterFactory" minShingleSize="2"
>> >             maxShingleSize="5" outputUnigrams="false"
>> >             outputUnigramsIfNoShingles="false"/>
>> >   </analyzer>
>> >   <analyzer type="query">
>> >     <tokenizer class="…"/>
>> >     <filter class="solr.ShingleFilterFactory" minShingleSize="2"
>> >             maxShingleSize="5" outputUnigrams="false"
>> >             outputUnigramsIfNoShingles="false"/>
>> >   </analyzer>
>> > </fieldType>
>> >
>> > But after updating to Solr 6 shingles are not working; the schema is as below:
>> >
>> > <fieldType name="…" class="solr.TextField" positionIncrementGap="100">
>> >   <analyzer type="index">
>> >     <tokenizer class="…"/>
>> >     <filter class="solr.ShingleFilterFactory" minShingleSize="2"
>> >             maxShingleSize="5" outputUnigrams="false"
>> >             outputUnigramsIfNoShingles="false"/>
>> >   </analyzer>
>> >   <analyzer type="query">
>> >     <tokenizer class="…"/>
>> >     <filter class="solr.ShingleFilterFactory" minShingleSize="2"
>> >             maxShingleSize="5" outputUnigrams="false"
>> >             outputUnigramsIfNoShingles="false"/>
>> >   </analyzer>
>> > </fieldType>
>> >
>> > Although the Analysis tab was showing the proper shingle result, when
>> > used through the query parser it was not giving proper results.
>> >
>> > My sample hit is
>> >
>> > http://localhost:8983/solr/shingel_test/select?q=one%20plus%20one=xml=true=edismax=cust_shingle
>> >
>> > It creates the parsed query as
>> >
>> > rawquerystring: one plus one
>> > querystring: one plus one
>> > parsedquery: (+())/no_coord
>> > parsedquery_toString: +()
>> > QParser: ExtendedDismaxQParser


Re: Solr shingles is not working in solr 6.4.0

2017-03-16 Thread Aman Deep Singh
Already checked that; I am sending screenshots of various scenarios.

On Thu, Mar 16, 2017 at 7:46 PM Alexandre Rafalovitch <arafa...@gmail.com>
wrote:

> Sanity check. Is your 'df' pointing at the field you think it is
> pointing at? It really does look like all tokens were eaten and
> nothing was left. But you should have seen that in the Analysis screen
> too, if you have the right field.
>
> Try adding echoParams=all to your request to see the full final
> parameter list. Maybe some parameters in initParams sections override
> your assumed config.
>
> Regards,
>Alex.
> 
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
>
> On 16 March 2017 at 08:30, Aman Deep Singh <amandeep.coo...@gmail.com>
> wrote:
> > Hi,
> >
> > Recently I migrated from Solr 4 to 6.
> > In Solr 4 the ShingleFilterFactory is working correctly; my configuration is:
> >
> > <fieldType name="…" class="solr.TextField" positionIncrementGap="100">
> >   <analyzer type="index">
> >     <tokenizer class="…"/>
> >     <filter class="solr.ShingleFilterFactory" minShingleSize="2"
> >             maxShingleSize="5" outputUnigrams="false"
> >             outputUnigramsIfNoShingles="false"/>
> >   </analyzer>
> >   <analyzer type="query">
> >     <tokenizer class="…"/>
> >     <filter class="solr.ShingleFilterFactory" minShingleSize="2"
> >             maxShingleSize="5" outputUnigrams="false"
> >             outputUnigramsIfNoShingles="false"/>
> >   </analyzer>
> > </fieldType>
> >
> > But after updating to Solr 6 shingles are not working; the schema is as below:
> >
> > <fieldType name="…" class="solr.TextField" positionIncrementGap="100">
> >   <analyzer type="index">
> >     <tokenizer class="…"/>
> >     <filter class="solr.ShingleFilterFactory" minShingleSize="2"
> >             maxShingleSize="5" outputUnigrams="false"
> >             outputUnigramsIfNoShingles="false"/>
> >   </analyzer>
> >   <analyzer type="query">
> >     <tokenizer class="…"/>
> >     <filter class="solr.ShingleFilterFactory" minShingleSize="2"
> >             maxShingleSize="5" outputUnigrams="false"
> >             outputUnigramsIfNoShingles="false"/>
> >   </analyzer>
> > </fieldType>
> >
> > Although the Analysis tab was showing the proper shingle result, when
> > used through the query parser it was not giving proper results.
> >
> > My sample hit is
> >
> > http://localhost:8983/solr/shingel_test/select?q=one%20plus%20one=xml=true=edismax=cust_shingle
> >
> > It creates the parsed query as
> >
> > rawquerystring: one plus one
> > querystring: one plus one
> > parsedquery: (+())/no_coord
> > parsedquery_toString: +()
> > QParser: ExtendedDismaxQParser
>


Solr shingles is not working in solr 6.4.0

2017-03-16 Thread Aman Deep Singh
Hi,

Recently I migrated from Solr 4 to 6.
In Solr 4 the ShingleFilterFactory is working correctly; my configuration is:

<fieldType name="…" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="…"/>
    <filter class="solr.ShingleFilterFactory" minShingleSize="2"
            maxShingleSize="5" outputUnigrams="false"
            outputUnigramsIfNoShingles="false"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="…"/>
    <filter class="solr.ShingleFilterFactory" minShingleSize="2"
            maxShingleSize="5" outputUnigrams="false"
            outputUnigramsIfNoShingles="false"/>
  </analyzer>
</fieldType>

But after updating to Solr 6 shingles are not working; the schema is as below:

<fieldType name="…" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="…"/>
    <filter class="solr.ShingleFilterFactory" minShingleSize="2"
            maxShingleSize="5" outputUnigrams="false"
            outputUnigramsIfNoShingles="false"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="…"/>
    <filter class="solr.ShingleFilterFactory" minShingleSize="2"
            maxShingleSize="5" outputUnigrams="false"
            outputUnigramsIfNoShingles="false"/>
  </analyzer>
</fieldType>

Although the Analysis tab was showing the proper shingle result, when used
through the query parser it was not giving proper results.

My sample hit is

http://localhost:8983/solr/shingel_test/select?q=one%20plus%20one=xml=true=edismax=cust_shingle

It creates the parsed query as

rawquerystring: one plus one
querystring: one plus one
parsedquery: (+())/no_coord
parsedquery_toString: +()
QParser: ExtendedDismaxQParser
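The archive has stripped the parameter names from the URL above (each "&name=" collapsed to "="). The sketch below builds a plausible reconstruction with Python's standard library; the parameter names marked as assumed are guesses for illustration, not recovered from the thread (only q, edismax, and cust_shingle survive in the text):

```python
from urllib.parse import urlencode

# Assumed reconstruction of the stripped query parameters.
params = {
    "q": "one plus one",
    "wt": "xml",           # response writer (assumed)
    "debugQuery": "true",  # needed to get querystring/parsedquery (assumed)
    "defType": "edismax",
    "qf": "cust_shingle",  # the field passed in; per the thread, 'df' is irrelevant here
}

url = "http://localhost:8983/solr/shingel_test/select?" + urlencode(params)
print(url)
```

Requesting with debug output enabled is what produces the rawquerystring/parsedquery block quoted in the message.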