Robert,
thanks, this is helpful, but why did this change when it was so convenient to use? SortField is a new concept to me and I am not sure it is available in 6.6.0, but I will check.
This new way seems trickier.
If there are more examples, I will be happier :)
Best regards
On 7/31/18 6:19 PM,
Does this example help?
https://lucene.apache.org/core/7_4_0/expressions/org/apache/lucene/expressions/Expression.html
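For reference, a minimal sketch of the expressions API along the lines of that javadoc (the "popularity" field and the formula are made up, and the SortField-based bindings are the 7.x style, so this may not compile against 6.6.0):

```java
import org.apache.lucene.expressions.Expression;
import org.apache.lucene.expressions.SimpleBindings;
import org.apache.lucene.expressions.js.JavascriptCompiler;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;

// Compile a scoring expression: relevance score times a popularity factor.
Expression expr = JavascriptCompiler.compile("_score * ln(popularity + 1)");

// Bind each variable used in the expression to a concrete source.
SimpleBindings bindings = new SimpleBindings();
bindings.add(new SortField("_score", SortField.Type.SCORE));     // the query score
bindings.add(new SortField("popularity", SortField.Type.LONG)); // a numeric doc values field

// Sort search results by the expression value, descending.
Sort sort = new Sort(expr.getSortField(bindings, true));
```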
On Tue, Jul 31, 2018 at 3:56 PM, wrote:
> The following page says:
>
> http://lucene.apache.org/core/6_6_0/core/org/apache/lucene/document/Field.html#setBoost-float-
>
>
https://stackoverflow.com/questions/50952727/ho-to-use-functionscorequery-with-text-fields
Somebody else was also asking this.
Best regards
On 7/31/18 3:56 PM, baris.ka...@oracle.com wrote:
The following page says:
http://lucene.apache.org/core/6_6_0/core/org/apache/lucene/document/Field.html#setBoost-float-
setBoost
@Deprecated
public void setBoost(float boost)
Deprecated. Index-time boosts are deprecated, please index scoring factors
into a doc value field and combine them with the score at query time using
e.g. FunctionScoreQuery.
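If it helps, here is a rough sketch of that replacement pattern against the 7.x API (the field name "weight" and the base query are made up; FunctionScoreQuery lives in the queries module):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.DoubleDocValuesField;
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.function.FunctionScoreQuery;
import org.apache.lucene.search.DoubleValuesSource;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

// At index time: store the boost as a doc values field instead of calling setBoost().
Document doc = new Document();
doc.add(new DoubleDocValuesField("weight", 2.0));

// At query time: multiply the normal relevance score by the stored factor.
Query base = new TermQuery(new Term("body", "lucene"));
Query boosted = FunctionScoreQuery.boostByValue(
    base, DoubleValuesSource.fromDoubleField("weight"));
```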
: The query parser is confused by these overlapping positions indeed, which
: it interprets as synonyms. I was going to write that you should set the
Sure -- I'm not blaming the QueryParser, what it does with the
Shingles output makes sense (and actually works! .. just not as efficiently as
Hi Uwe,
I am trying to implement regex search in file the same as in editors, in
Notepad++ for example.
Thanks,
Ira
-----Original Message-----
From: Uwe Schindler
Sent: Tuesday, July 31, 2018 6:12 PM
To: java-user@lucene.apache.org
Subject: RE: Search in lines, so need to index lines?
Hi,
you need to create your own tokenizer that splits tokens on \n or \r. Instead
of using WhitespaceTokenizer, you can use:

Tokenizer tok = CharTokenizer.fromSeparatorCharPredicate(ch -> ch == '\r' || ch == '\n');
But I would first think of how to implement the whole thing correctly. Using a
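To flesh that out, a minimal Analyzer around that tokenizer might look like this (a sketch against the 7.4 API, untested; fromSeparatorCharPredicate was only added recently):

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.util.CharTokenizer;

// Each token is a whole line: characters are accumulated until a \r or \n
// separator, so every indexed term corresponds to one line of the file.
Analyzer lineAnalyzer = new Analyzer() {
  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    Tokenizer tok = CharTokenizer.fromSeparatorCharPredicate(
        ch -> ch == '\r' || ch == '\n');
    return new TokenStreamComponents(tok);
  }
};
```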
Hi Bhavin,
you don't tell us what exactly goes wrong with your index. The exception you
get is just telling you that Lucene is not able to get a lock on the index.
This does not corrupt the index.
You say that you are using and running Lucene on a NFS share. Unfortunately,
this is known to
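For what it's worth, one common mitigation on NFS is to avoid OS-level file locks (a sketch for the 3.x API; the path is illustrative, and this does not make concurrent writers from multiple nodes safe):

```java
import java.io.File;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.SimpleFSLockFactory;

// NativeFSLockFactory relies on OS-level file locking, which many NFS
// setups do not implement reliably; SimpleFSLockFactory uses a plain
// lock file in the index directory instead.
File indexDir = new File("/mnt/nfs/index");
Directory dir = FSDirectory.open(indexDir, new SimpleFSLockFactory(indexDir));
```

Note that Lucene still allows only one IndexWriter on an index at a time, whatever the lock factory.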
There is no chance anyone will try to change the code for 3.6, so
raising a JIRA is pointless.
see:
http://lucene.472066.n3.nabble.com/Issues-with-locked-indices-td4339180.html
Uwe is very knowledgeable in this area, so I'd strongly recommend you
follow his advice.
Best,
Erick
On Tue, Jul 31,
Hi all,
I understand that Lucene finds query matches within tokens. For example, if I use
WhitespaceTokenizer and search with the /.*nice day.*/ regular expression,
I'll always find nothing. Am I correct?
In my project I need to find matches inside lines and not inside words, so I am
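To illustrate what I mean (a sketch; the field name and pattern are made up):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.RegexpQuery;

// RegexpQuery is evaluated against individual indexed terms. With
// WhitespaceTokenizer the tokens are "nice" and "day", so no single
// term can ever match a pattern that spans the space between them.
Query spansWords = new RegexpQuery(new Term("line", ".*nice day.*")); // finds nothing
Query oneWord   = new RegexpQuery(new Term("line", "nice.*"));        // can match a single token
```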
Hi,
The Lucene index file gets corrupted during a 15-minute load test: creating
the index with 2 nodes and 60 concurrent users.
I am using Lucene version 3.6. The index is created on NFS.
Please let me know whether Lucene index creation works across multiple nodes
on NFS.
The exception is
The problem is not a performance one, it's a complexity thing. Really I
think only the tokenizer should be messing with the offsets...
They are the ones actually parsing the original content, so it makes
sense that they would produce the pointers back to it.
I know there are some tokenfilters out there
Hi Hoss,
The query parser is confused by these overlapping positions indeed, which
it interprets as synonyms. I was going to write that you should set the
same min and max shingle sizes at query time, but while writing that I
realized that you probably wanted to keep outputting shorter shingles so
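A sketch of what I mean by matching the sizes at query time (constructor arguments assumed; untested):

```java
import org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

// Using the same min and max shingle size at query time avoids emitting
// shingles of different lengths at the same position, which the query
// parser would otherwise interpret as synonyms.
ShingleAnalyzerWrapper queryAnalyzer =
    new ShingleAnalyzerWrapper(new StandardAnalyzer(), 3, 3);
```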