As of Solr 6.6, payload support has been added; see
SOLR-1485. Before that, it was much more difficult; see:
https://lucidworks.com/2014/06/13/end-to-end-payload-example-in-solr/
Best,
Erick
On Thu, Feb 8, 2018 at 8:36 AM, Ahmet Arslan wrote:
Hi Roy,
In order to activate payloads during scoring, you need to do two separate
things at the same time:
* use a payload aware query type: org.apache.lucene.queries.payloads.*
* use payload aware similarity
Here is an old post that might inspire you:
https://lucidworks.com/2009/08/05/get
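Putting those two points together, here is a minimal sketch, assuming the Lucene 6.x API (the field name "body" and the term are made up for illustration; if I remember right, in 7.x and later the constructor also takes a PayloadDecoder):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.payloads.MaxPayloadFunction;
import org.apache.lucene.queries.payloads.PayloadScoreQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

public class PayloadQueryExample {
    public static PayloadScoreQuery buildQuery() {
        // Wrap an ordinary span query over the field carrying payloads.
        SpanTermQuery wrapped = new SpanTermQuery(new Term("body", "lucene"));
        // MaxPayloadFunction keeps the highest payload seen for the matching
        // term; the final boolean says whether to multiply in the span
        // query's own score.
        return new PayloadScoreQuery(wrapped, new MaxPayloadFunction(), true);
    }
}
```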
Thanks for your replies. But still, I am not sure about how to do this. Can
you please provide me with an example code snippet, or a link to some page
where I can find one?
Thanks.
On Tue, Jan 16, 2018 at 3:28 PM, Dwaipayan Roy wrote:
> I want to make a scoring function that will score
If you are working with payloads, you will also want to have a look at
PayloadScoreQuery.
On Tue, Jan 16, 2018 at 12:26, Michael Sokolov wrote:
Have a look at Expressions class. It compiles JavaScript that can reference
other values and can be used for ranking.
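A hedged sketch, assuming the lucene-expressions module (the "popularity" field here is made up): compile a JavaScript formula over the raw score and a doc value, then bind it as a Sort.

```java
import org.apache.lucene.expressions.Expression;
import org.apache.lucene.expressions.SimpleBindings;
import org.apache.lucene.expressions.js.JavascriptCompiler;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;

public class ExpressionRankExample {
    public static Sort buildSort() throws java.text.ParseException {
        // The formula can reference the relevance score and bound fields.
        Expression expr = JavascriptCompiler.compile("_score + ln(popularity + 1)");
        SimpleBindings bindings = new SimpleBindings();
        bindings.add(new SortField("_score", SortField.Type.SCORE));
        bindings.add(new SortField("popularity", SortField.Type.DOUBLE));
        // true = sort descending by the computed value.
        return new Sort(expr.getSortField(bindings, true));
    }
}
```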
On Jan 16, 2018 4:58 AM, "Dwaipayan Roy" wrote:
> I want to make a scoring function that will score the documents by the
> following function:
> given Q = {q1, q2, ... }
> score
On Sat, Oct 8, 2011 at 3:37 AM, Joel Halbert wrote:
> Hi,
>
> Does anyone have a modified scoring (Similarity) function they would
> care to share?
>
> I'm searching web page documents and find the default Similarity seems
> to assign too much weight to documents with frequent occurrence of a
> si
That's what PhraseQuery does.
Try PhraseQuery to match the overlap, I think.
On Sat, Oct 8, 2011 at 3:37 PM, Joel Halbert wrote:
> Hi,
>
> Does anyone have a modified scoring (Similarity) function they would
> care to share?
>
> I'm searching web page documents and find the default Similarity seems
So the normalization was done through Hits. That was something I didn't
understand.
On my own, I would have searched in the Scorer and query classes.
Thank you for this.
Finally I used the following:
final HitQueue hq = new HitQueue(results.length());
searcher.search(qr, new HitCollector
To: java-user@lucene.apache.org
Sent: Thursday, January 18, 2007 5:36:21 PM
Subject: Re: custom similarity based on tf but greater than 1.0
I just did the same thing. If you search the list you'll find the thread
where Hoss gave me the info you need. It really comes down to making a
FakeNormsIndexReader.
It is 4 in the morning here in Greece, so I will try it tomorrow... at some
point I must sleep!
I will come up with the results tomorrow.
Thanks!
Vagelis
Ah... I brushed over your example too fast... it looked like normal
counting to me... I see now what you mean. So OMIT_NORMS probably did
work. Are you getting the results through Hits? Hits will normalize. Use
TopDocs or a HitCollector.
- Mark
But I don't want to get the frequency of each term in the doc.
What I want is 1 if the term exists in the doc and 0 if it doesn't. After
this, I want all these 1s and 0s to be summed to give me a number to use as
a score.
If I set the TF value as 1 or 0, as I described above, I get the right
num
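In plain Java, the score described above could be sketched like this (the class and names here are made up for illustration; this is the arithmetic only, not the Lucene wiring):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class BinaryOverlapScore {
    // Add 1 for each query term present in the document, 0 otherwise.
    public static int score(Set<String> docTerms, List<String> queryTerms) {
        int s = 0;
        for (String q : queryTerms) {
            if (docTerms.contains(q)) s++; // 1 if the term exists, else 0
        }
        return s;
    }

    public static void main(String[] args) {
        Set<String> doc = new HashSet<>(Arrays.asList("lucene", "scoring", "payload"));
        // Two of the three query terms occur in the doc.
        System.out.println(score(doc, Arrays.asList("lucene", "norms", "payload"))); // prints 2
    }
}
```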
Don't return 1 for tf... just return the tf straight with no
changes... return freq. For everything else return 1. After that
OMIT_NORMS should work. If you want to try a custom reader:
public class FakeNormsIndexReader extends FilterIndexReader {
byte[] ones = SegmentReader.createFakeNorms(max
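The Similarity side of that suggestion could look something like this sketch against the Lucene 2.x API this thread is using (untested, and the method signatures differ in later versions):

```java
import org.apache.lucene.search.DefaultSimilarity;

public class RawTfSimilarity extends DefaultSimilarity {
    // Return the raw term frequency untouched...
    public float tf(float freq) { return freq; }
    // ...and neutralize every other factor by returning 1.
    public float idf(int docFreq, int numDocs) { return 1f; }
    public float coord(int overlap, int maxOverlap) { return 1f; }
    public float queryNorm(float sumOfSquaredWeights) { return 1f; }
    public float lengthNorm(String fieldName, int numTerms) { return 1f; }
}
```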
I feel kind of stupid... I don't get what hossman says in his post.
I got the thing about OMIT_NORMS and I tried to do it by calling
Field.setOmitNorms(true) before adding a field to the index. After that I
re-indexed my collection, but it still made no difference.
Tell me if I got it rig
Sorry you're having trouble finding it! Allow me... bingo:
http://www.gossamer-threads.com/lists/lucene/java-user/43251?search_string=sorting%20by%20per%20doc%20hit;#43251
It probably doesn't have great keywords for finding it. That should get you
going, though. Let me know if you have any questions.
- Mark
Before I asked this question I had been looking through the list for over 2
hours and I didn't find anything to make me understand how to do what I want.
After you sent the message I made a quick pass through all your messages,
but I didn't find anything. I also searched for FakeNormsIndexReader and
but I didn't find something. I also searched for FakeNormsIndexReader and
s
I just did the same thing. If you search the list you'll find the thread
where Hoss gave me the info you need. It really comes down to making a
FakeNormsIndexReader. The problem you are having is a result of the
field size normalization.
- mark
Vagelis Kotsonis wrote:
Hi all.
I am trying to