Re: Query Boosting and sort

2017-09-08 Thread Renuka Srishti
Thanks, Rick and Erick, for your responses.
Here is the situation where I want to use both sort and phrase boosting:

   - I have designed a screen where results are shown in tabular form; on
   each column I have applied sorting (using the Solr sort parameter). There
   is one keyword search box, to which I have applied phrase boosting to
   maintain relevancy (the most relevant results show at the top).


   - Now, if I apply a keyword search and then want to sort the results,
   how can I achieve this? (Sorting completely overrides scoring.)

Thanks

Renuka Srishti

On Sat, Sep 9, 2017 at 1:38 AM, Erick Erickson 
wrote:

> Sorting completely overrides scoring. By specifying a sort parameter
> you're effectively telling Solr that you don't care about scoring;
> just order the docs by the sort criteria.
>
> On Fri, Sep 8, 2017 at 3:35 AM, Rick Leir  wrote:
> > Renuka,
> >
> > You have not told us nearly enough about your issue. What query? config?
> >
> > cheers -- Rick
> >
> >
> >
> > On 2017-09-08 05:42 AM, Renuka Srishti wrote:
> >>
> >> Hello All,
> >>
> >> I am trying to use sort parameter and phrase boosting together in
> search.
> >> But, if I use the sort parameter, it seems like Phrase Boosting does not
> >> work with it.
> >>
> >> Thanks
> >> Renuka Srishti
> >>
> >
>


Re: Query Boosting and sort

2017-09-08 Thread Erick Erickson
Sorting completely overrides scoring. By specifying a sort parameter
you're effectively telling Solr that you don't care about scoring;
just order the docs by the sort criteria.
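
That said, the sort parameter accepts multiple comma-separated clauses, and
score is a legal sort term, so you can combine the two; a sketch, with a
made-up field name:

    sort=price asc,score desc    (order by the column, break ties by relevance)
    sort=score desc,price asc    (order by relevance, break ties by the column)

The first form is typically what you want when a user clicks a column header.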

On Fri, Sep 8, 2017 at 3:35 AM, Rick Leir  wrote:
> Renuka,
>
> You have not told us nearly enough about your issue. What query? config?
>
> cheers -- Rick
>
>
>
> On 2017-09-08 05:42 AM, Renuka Srishti wrote:
>>
>> Hello All,
>>
>> I am trying to use sort parameter and phrase boosting together in search.
>> But, if I use the sort parameter, it seems like Phrase Boosting does not
>> work with it.
>>
>> Thanks
>> Renuka Srishti
>>
>


Re: Consecutive calls to a query give different results

2017-09-08 Thread Erick Erickson
Here's Mike McCandless' blog on the topic:

https://www.elastic.co/blog/lucenes-handling-of-deleted-documents

The same options he mentions are available in Solr as both use Lucene
under the covers.

The long and short of it is that you can have a significant amount of
deleted documents in your index, depending on the update pattern.

One thing Mike doesn't mention is at the root of why I'm so negative
about optimize (and forceMerge is just an optimize that only mashes
segments together if they have > X% deleted docs). Let's say your max
segment size is 5G, and you optimize an index down to a single 100G
segment. That segment will _not_ be merged until it has < 2.5G live
docs. That's not a typo: 97.5% deleted docs.

You could ameliorate this somewhat by specifying the number of
segments to be left after optimizing (the default is 1). Say you determine
that you have 100G of live data; specify 20 segments for the optimize. This
would be better, I'd guess, but I haven't tested it personally.
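
If you go that route, the knob is the maxSegments parameter on the
optimize call; a sketch, with host and collection names as placeholders:

    curl 'http://localhost:8983/solr/mycollection/update?optimize=true&maxSegments=20'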

Best,
Erick

On Fri, Sep 8, 2017 at 10:36 AM, Webster Homer  wrote:
> Thank you, Erick Erickson and Shawn Heisey, for your excellent answers.
> For some of our collections, it would seem that an occasional optimize
> would be a good thing. However, we have some collections that are updated
> constantly.
>
> Would using commit with expungeDeletes help mitigate the issue?
>
> I also came across a discussion of Lucene merge policies and the
> TieredMergePolicy.
> Is there documentation about this? I notice that a couple of our replicas
> in some of our collections have ~30% deleted documents, which I would think
> would contribute to the problem.
> I have at least 3 collections that are updated constantly and would not
> lend themselves to being optimized; what is the best approach for these?
>
> Thanks
>
> On Fri, Sep 8, 2017 at 9:47 AM, Shawn Heisey  wrote:
>
>> On 9/7/2017 8:54 AM, Webster Homer wrote:
>> > I am not concerned about deleted documents. I am concerned that the same
>> > search gives different results after each search. The top document seems
>> to
>> > cycle between 3 different documents
>> >
>> > I have an enhanced collections info api call that calls the core admin
>> api
>> > to get the index information for the replica.
>> > When I said the numdocs were the same I meant exactly that. maxdocs and
>> > deleted documents are not the same for the replicas, but the number of
>> > numdocs is.
>> >
>> > Or are you saying that the search is looking at deleted documents
>> wouldn't
>> > that be a very significant bug?
>>
>> Lucene score calculations take a lot of information in the index into
>> account when calculating the score.  That includes deleted documents,
>> because they are part of the index.  When you delete a document, Lucene
>> just makes a note saying "internal document ID number <N> is deleted."
>> The actual information for that document is not removed from the index,
>> because doing so could take a very long time.
>>
>> When you make queries against a replicated SolrCloud, the queries are
>> load balanced across the entire cloud, so different queries will hit
>> different replicas.  With different numbers of deleted documents in
>> different replicas (which is not unusual), the scores are going to come
>> out a little bit different on each query.  If you're sorting by score
>> (which is the default sort), that *can* affect the order.  Your replicas
>> have a fairly high percentage of deleted documents, so there is a lot of
>> extra information affecting the scores.  The relative difference in the
>> deleted document count between the replicas is high as well, so multiple
>> queries could be substantially different.
>>
>> It is not a bug that Lucene and Solr look at deleted documents.
>> Removing deleted document information from things like the score
>> calculation would be VERY computationally intense, bordering on the
>> impossible.  To assure good performance, Lucene doesn't even try.
>> Because the way Lucene tracks deleted documents is with a list of
>> internal Lucene document IDs, those documents are easily removed from
>> *results*, but their contents are an integral part of the index and that
>> information can only be truly removed by completely rewriting (merging)
>> the index segments.
>>
>> You can get rid of all deleted documents with an optimize operation,
>> which is a forced merge of the entire index down to one segment -- but
>> just like it sounds, that is a complete rewrite of the index.  It
>> involves a huge amount of CPU resources and disk I/O, and can severely
>> impact normal indexing and query operations while it's happening.  If
>> the collection is extremely large, an optimize could take hours.  For
>> indexes that change rapidly, optimize is strongly discouraged, except as
>> an occasional "clean things up" operation, run during non-peak times.
>>
>> Thanks,
>> Shawn
>>
>>
>

commit time in solr cloud

2017-09-08 Thread Wei
Hi,

In solr cloud we want to track the last commit time on each node. The
information source is from the luke handler:
admin/luke?numTerms=0&wt=json, e.g.


"userData": {
  "commitTimeMSec": "1504895505447"
},
"lastModified": "2017-09-08T18:31:45.447Z"



I'm assuming the lastModified time is when the latest hard commit happened.
Is that correct?

On all nodes we have autoCommit set to a 15-minute interval. One observation
I don't understand is that quite often the last commit time on shard leaders
lags behind the last commit time on the replicas; sometimes the lag is over
10 minutes. My understanding is that, since update requests go to the leader
first, the timer on the leader would start earlier than on the replicas. Am I
missing something here?
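
(By "autoCommit set to a 15-minute interval" I mean a solrconfig.xml block
along these lines; the openSearcher value is just illustrative:

    <autoCommit>
      <maxTime>900000</maxTime>  <!-- 15 minutes -->
      <openSearcher>false</openSearcher>
    </autoCommit>
)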

Thanks,
Wei


Re: Consecutive calls to a query give different results

2017-09-08 Thread Webster Homer
Thank you, Erick Erickson and Shawn Heisey, for your excellent answers.
For some of our collections, it would seem that an occasional optimize
would be a good thing. However, we have some collections that are updated
constantly.

Would using commit with expungeDeletes help mitigate the issue?
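
(By that I mean a commit along these lines; the collection name is a
placeholder:

    curl 'http://localhost:8983/solr/mycollection/update?commit=true&expungeDeletes=true'
)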

I also came across a discussion of Lucene merge policies and the
TieredMergePolicy.
Is there documentation about this? I notice that a couple of our replicas
in some of our collections have ~30% deleted documents, which I would think
would contribute to the problem.
I have at least 3 collections that are updated constantly and would not
lend themselves to being optimized; what is the best approach for these?
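
(For concreteness, the merge-policy settings I'm asking about are the ones
configured in solrconfig.xml along these lines; the values shown are just the
shape, not recommendations:

    <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
      <int name="maxMergeAtOnce">10</int>
      <int name="segmentsPerTier">10</int>
      <double name="maxMergedSegmentMB">5120</double>
    </mergePolicyFactory>
)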

Thanks

On Fri, Sep 8, 2017 at 9:47 AM, Shawn Heisey  wrote:

> On 9/7/2017 8:54 AM, Webster Homer wrote:
> > I am not concerned about deleted documents. I am concerned that the same
> > search gives different results after each search. The top document seems
> to
> > cycle between 3 different documents
> >
> > I have an enhanced collections info api call that calls the core admin
> api
> > to get the index information for the replica.
> > When I said the numdocs were the same I meant exactly that. maxdocs and
> > deleted documents are not the same for the replicas, but the number of
> > numdocs is.
> >
> > Or are you saying that the search is looking at deleted documents
> wouldn't
> > that be a very significant bug?
>
> Lucene score calculations take a lot of information in the index into
> account when calculating the score.  That includes deleted documents,
> because they are part of the index.  When you delete a document, Lucene
> just makes a note saying "internal document ID number <N> is deleted."
> The actual information for that document is not removed from the index,
> because doing so could take a very long time.
>
> When you make queries against a replicated SolrCloud, the queries are
> load balanced across the entire cloud, so different queries will hit
> different replicas.  With different numbers of deleted documents in
> different replicas (which is not unusual), the scores are going to come
> out a little bit different on each query.  If you're sorting by score
> (which is the default sort), that *can* affect the order.  Your replicas
> have a fairly high percentage of deleted documents, so there is a lot of
> extra information affecting the scores.  The relative difference in the
> deleted document count between the replicas is high as well, so multiple
> queries could be substantially different.
>
> It is not a bug that Lucene and Solr look at deleted documents.
> Removing deleted document information from things like the score
> calculation would be VERY computationally intense, bordering on the
> impossible.  To assure good performance, Lucene doesn't even try.
> Because the way Lucene tracks deleted documents is with a list of
> internal Lucene document IDs, those documents are easily removed from
> *results*, but their contents are an integral part of the index and that
> information can only be truly removed by completely rewriting (merging)
> the index segments.
>
> You can get rid of all deleted documents with an optimize operation,
> which is a forced merge of the entire index down to one segment -- but
> just like it sounds, that is a complete rewrite of the index.  It
> involves a huge amount of CPU resources and disk I/O, and can severely
> impact normal indexing and query operations while it's happening.  If
> the collection is extremely large, an optimize could take hours.  For
> indexes that change rapidly, optimize is strongly discouraged, except as
> an occasional "clean things up" operation, run during non-peak times.
>
> Thanks,
> Shawn
>
>



Facing Issue in SOLR 6.6 Indexing - solr unexpected eof in prolog

2017-09-08 Thread ashish sharma
Hello Everyone,
I am trying the new Solr 6.6 and using SolrPhpClient to create an index from
info held in an array:

$parts = array(
  '0' => array('id' => '0060248025', 'name' => 'Falling Up',
               'author' => 'Shel Silverstein', 'inStock' => true),
  '1' => array('id' => '0679805273', 'name' => 'Oh, The Places You will Go',
               'author' => 'Dr. Seuss', 'inStock' => false),
);

But I am facing the error "solr unexpected eof in prolog". The final XML
formed and sent for indexing is:

<add>
  <doc>
    <field name="id">0060248025</field>
    <field name="name">Falling Up</field>
    <field name="author">Shel Silverstein</field>
    <field name="inStock">1</field>
  </doc>
  <doc>
    <field name="id">0679805273</field>
    <field name="name">Oh, The Places You will Go</field>
    <field name="author">Dr. Seuss</field>
    <field name="inStock"></field>
  </doc>
</add>

I don't understand what the issue is or how to resolve it. Can anyone help?
Also, I have tried picking one <doc> node, adding it to the example XML, and
indexing that from the command line, and that works fine. So it seems there
is no issue with the info I am providing.
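
(The command-line indexing I tried was along these lines; the core name is a
placeholder:

    curl 'http://localhost:8983/solr/mycore/update?commit=true' \
      -H 'Content-Type: text/xml' --data-binary @books.xml
)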
Please help!
Thanks,
Ashish



Re: Consecutive calls to a query give different results

2017-09-08 Thread Shawn Heisey
On 9/7/2017 8:54 AM, Webster Homer wrote:
> I am not concerned about deleted documents. I am concerned that the same
> search gives different results after each search. The top document seems to
> cycle between 3 different documents
>
> I have an enhanced collections info api call that calls the core admin api
> to get the index information for the replica.
> When I said the numdocs were the same I meant exactly that. maxdocs and
> deleted documents are not the same for the replicas, but the number of
> numdocs is.
>
> Or are you saying that the search is looking at deleted documents wouldn't
> that be a very significant bug?

Lucene score calculations take a lot of information in the index into
account when calculating the score.  That includes deleted documents,
because they are part of the index.  When you delete a document, Lucene
just makes a note saying "internal document ID number <N> is deleted."
The actual information for that document is not removed from the index,
because doing so could take a very long time.

When you make queries against a replicated SolrCloud, the queries are
load balanced across the entire cloud, so different queries will hit
different replicas.  With different numbers of deleted documents in
different replicas (which is not unusual), the scores are going to come
out a little bit different on each query.  If you're sorting by score
(which is the default sort), that *can* affect the order.  Your replicas
have a fairly high percentage of deleted documents, so there is a lot of
extra information affecting the scores.  The relative difference in the
deleted document count between the replicas is high as well, so multiple
queries could be substantially different.
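
If you want to see this for yourself, you can query a single replica's core
directly and bypass the load balancing; a sketch, with host and core names
as placeholders:

    http://localhost:8983/solr/mycollection_shard1_replica1/select?q=your+query&distrib=false&debugQuery=true

With distrib=false the query only touches that one core, and debugQuery=true
shows exactly how the score was calculated there.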

It is not a bug that Lucene and Solr look at deleted documents. 
Removing deleted document information from things like the score
calculation would be VERY computationally intense, bordering on the
impossible.  To assure good performance, Lucene doesn't even try. 
Because the way Lucene tracks deleted documents is with a list of
internal Lucene document IDs, those documents are easily removed from
*results*, but their contents are an integral part of the index and that
information can only be truly removed by completely rewriting (merging)
the index segments.

You can get rid of all deleted documents with an optimize operation,
which is a forced merge of the entire index down to one segment -- but
just like it sounds, that is a complete rewrite of the index.  It
involves a huge amount of CPU resources and disk I/O, and can severely
impact normal indexing and query operations while it's happening.  If
the collection is extremely large, an optimize could take hours.  For
indexes that change rapidly, optimize is strongly discouraged, except as
an occasional "clean things up" operation, run during non-peak times.

Thanks,
Shawn



Re: Consecutive calls to a query give different results

2017-09-08 Thread Webster Homer
We have several cloud collections, but this one is updated once a day with
a partial load, and once a week with a full load followed by a delete that
is based upon an index_date field (the timestamp of the Solr record).

For this and related collections, optimizing once per day is probably
acceptable.

We do have other collections that are updated every 15 minutes; from what
you write, I don't think those could be optimized.



On Thu, Sep 7, 2017 at 5:10 PM, Erick Erickson 
wrote:

> bq: So apparently it IS essential to run optimize after a data load
>
> Don't do this if you can avoid it, you run the risk of excessive
> amounts of your index consisting of deleted documents unless you are
> following a process whereby you periodically (and I'm talking at least
> hours, if not once per day) index data then don't change the index for
> a bunch more hours.
>
> You're missing the point when it comes to deleted docs. Different
> replicas of the _same_ shard commit at different wall clock times due
> to network delays. Therefore, which segments are merged will not be
> identical between replicas when a commit happens, since commits are
> local.
>
> So replica1 may merge segments 1, 3, 6 in to segment 7
> replica2 may merge segments 1, 2, 4 into segment 7
>
> Here's the key: now replica1 may have 100 deleted documents (ones
> marked as deleted but still in segments 2, 4 and 5), while replica2
> may have 90 deleted documents (the ones still in segments 3, 5 and 6).
>
> The statistics in the term frequency and document frequency for some
> terms are _not_ the same. Therefore the scoring will be slightly
> different. Therefore, depending on which replica serves the query, the
> order of docs may be somewhat different if the scores are close.
>
> optimizing squeezes all the deleted documents out of all the replicas
> so the scores become identical.
>
> This doesn't happen, of course, if you have only one replica.
>
> Best,
> Erick
>
> On Thu, Sep 7, 2017 at 8:13 AM, Webster Homer 
> wrote:
> > We have several solr clouds, a couple of them have only 1 replica per
> > shard. We have never observed the problem when we have a single replica
> > only when there are multiple replicas per shard.
> >
> > On Thu, Sep 7, 2017 at 10:08 AM, Webster Homer 
> > wrote:
> >
> >> the scores are not the same
> >> Doc
> >> 305340 432.44238
> >> C2646 428.24185
> >> 12837 430.61722
> >>
> >> One other thing. I just ran optimize and now document 305340 is
> >> consistently the top score.
> >> So apparently it IS essential to run optimize after a data load.
> >>
> >> Note we see this behavior fairly commonly on our Solr cloud instances.
> >> This was not the first time. This particular situation was on a
> >> development system.
> >>
> >> On Thu, Sep 7, 2017 at 10:04 AM, Webster Homer 
> >> wrote:
> >>
> >>> the scores are not the same
> >>> Doc
> >>> 305340 432.44238
> >>>
> >>> On Thu, Sep 7, 2017 at 10:02 AM, David Hastings <
> >>> hastings.recurs...@gmail.com> wrote:
> >>>
>  "I am concerned that the same
>  search gives different results after each search. The top document
> seems
>  to
>  cycle between 3 different documents"
> 
> 
>  if you do debug query on the search, are the scores for the top 3
>  documents
>  the same or not?  you can easily have three documents with the same
>  score,
>  so when you have a result set that is ranked 1-1-1-2-3-4 you can
>  expect
>  1-1-1 to rotate based on whatever.  use a second element like id to
> your
>  ranking perhaps.
> 
> 
> 
> 
>  On Thu, Sep 7, 2017 at 10:54 AM, Webster Homer <
> webster.ho...@sial.com>
>  wrote:
> 
>  > I am not concerned about deleted documents. I am concerned that the
>  same
>  > search gives different results after each search. The top document
>  seems to
>  > cycle between 3 different documents
>  >
>  > I have an enhanced collections info api call that calls the core
> admin
>  api
>  > to get the index information for the replica.
>  > When I said the numdocs were the same I meant exactly that. maxdocs
> and
>  > deleted documents are not the same for the replicas, but the number
> of
>  > numdocs is.
>  >
>  > Or are you saying that the search is looking at deleted documents
>  wouldn't
>  > that be a very significant bug?
>  >
>  > The four replicas:
>  > shard1
>  > core_node1
>  > "numDocs": 383817,
>  > "maxDocs": 611592,
>  > "deletedDocs": 227775,
>  > "size": "2.49 GB",
>  > "lastModified": "2017-09-07T08:18:03.639Z",
>  > "current": true,
>  > "version": 35644,
>  > "segmentCount": 28
>  >
>  > core_node3
>  > "numDocs": 383817,
>  > "maxDocs": 571737,
>  > "deletedDocs": 187920,
>  > "size": 

Re: Conditions with multiple boosts in bf exists query

2017-09-08 Thread Eric Kurzenberger
Thanks for the response, Erick.  Unfortunately, no, these scores aren’t known 
at index time: they’re specific to the user doing the search, and they can 
change.

Cheers,

Eric

On 9/7/17, 7:58 PM, "Erick Erickson"  wrote:

I'd sidestep the problem ;)

Are these scores
1> known at index time
2> unchanging (at least until the doc is re-indexed)?

If so, pre-compute your boost and put it in the doc at index time.

The other thing you can do is use payloads to add a float to specific
tokens and incorporate them at scoring time. See the Solr
documentation; if you have a relatively recent version, the payload
support is built into Solr. Otherwise, here's a primer:
https://lucidworks.com/2014/06/13/end-to-end-payload-example-in-solr/
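
As a rough sketch of the payload route (the type and field names here are
made up, not from any shipped schema):

    <fieldType name="payload_floats" class="solr.TextField">
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="float"/>
      </analyzer>
    </fieldType>

Index a value like "user42|30.0 user99|20.0" into such a field, then bring
the payload into scoring with something like
{!payload_score f=scores_pf func=max v=user42}.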

Best,
Erick

On Thu, Sep 7, 2017 at 8:40 AM, Eric Kurzenberger  
wrote:
> I need to do a bf exists query that matches the following conditions:
>
>
> -  IF a_score = 1 AND b_score = 2 THEN boost 30
>
> -  IF a_score = 3 AND b_score = 4 THEN boost 20
>
> So far, the bf portion of my query looks like this:
>
> if(exists(query({!v="a_score_is:1"})),30,0)
>
> But I’m having difficulty finding the correct syntax for the multiple 
conditions and boosts.
>
> I was originally doing a bq query that looked like this:
>
> bq=(a_score_is:1 AND b_score_is:2)^30 OR (a_score_is:3 AND 
b_score_is:4)^20
>
> but I found that idf was skewing my expected results, as I don’t care
> about document frequency.
>
> Can anyone assist?
>
> Cheers,
>
> Eric
>




Re: Query Boosting and sort

2017-09-08 Thread Rick Leir

Renuka,

You have not told us nearly enough about your issue. What query? config?

cheers -- Rick


On 2017-09-08 05:42 AM, Renuka Srishti wrote:

Hello All,

I am trying to use sort parameter and phrase boosting together in search.
But, if I use the sort parameter, it seems like Phrase Boosting does not
work with it.

Thanks
Renuka Srishti





Query Boosting and sort

2017-09-08 Thread Renuka Srishti
Hello All,

I am trying to use sort parameter and phrase boosting together in search.
But, if I use the sort parameter, it seems like Phrase Boosting does not
work with it.

Thanks
Renuka Srishti