Kumaran,
Below is the code snippet for concurrent writes (i.e. concurrent
updates/deletes, etc.) along with the search operation, using the NRT manager
APIs. Let me know if you need any other details or have any suggestions for me:
public class LuceneEngineInstance implements IndexEngineInstance
{
Hi everyone,
I'm trying to escape special characters and it doesn't seem to be working.
If I do a search like resume_text: (LS\/MS) it searches for LS AND MS
instead of LS/MS. How would I escape the slash so it searches for LS/MS?
Thanks
Take a look at the admin/analysis page for the field in question.
The next bit of critical information is adding debug=query
to the URL. The former will tell you what happens to the input
stream at query and index time, the latter will tell you how the
query got through the query parsing process.
I'm not using Solr. Here's my code:
FSDirectory fsd = FSDirectory.open(new File("C:\\indexes\\Lucene4"));
IndexReader reader = DirectoryReader.open(fsd);
IndexSearcher searcher = new IndexSearcher(reader);
Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_4_9);
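For what it's worth, in the 4.x classic query syntax `/` is itself a special character (it introduces a regex clause), which is why it has to be escaped. QueryParser.escape does this for you; below is a minimal pure-Java sketch of the same idea (the character list is my reading of the 4.x syntax, so verify it against your version):

```java
public class QueryEscaper {
    // Characters that are special in the Lucene 4.x classic query syntax,
    // including '/', which introduces a regex clause as of 4.0.
    private static final String SPECIALS = "\\+-!():^[]\"{}~*?|&/";

    // Prefix each special character with a backslash, as QueryParser.escape does.
    public static String escape(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (SPECIALS.indexOf(c) >= 0) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("LS/MS")); // prints LS\/MS
    }
}
```

Note that whether the analyzer then splits "LS/MS" into two tokens anyway is a separate question: StandardAnalyzer will still tokenize on '/', so escaping only keeps the query parser from treating the slash as syntax.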
It does look like the lowercase is working.
The following code
Document theDoc = theIndexReader.document(0);
System.out.println(theDoc.get(sn));
IndexableField theField = theDoc.getField(sn);
TokenStream theTokenStream = theField.tokenStream(theAnalyzer);
CharTermAttribute term = theTokenStream.addAttribute(CharTermAttribute.class);
theTokenStream.reset(); // the stream must be reset before consuming
while (theTokenStream.incrementToken()) System.out.println(term);
theTokenStream.close();
I tried to create a clone of IndexWriterConfig with
indexWriterConfig.clone() for re-creating a new IndexWriter, but then I
got this very annoying IllegalStateException: "clone this object before it
is used". Why does this exception happen, and how can I get around it?
Thanks!
You need to manually enable automatic generation of phrase queries; it
defaults to disabled, which simply treats the sub-terms as individual terms
subject to the default operator.
See:
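In Lucene terms (if you are on the classic QueryParser rather than Solr), the switch lives on the parser itself. A sketch, assuming 4.x and an already-built analyzer; this is not runnable without the Lucene jars:

```java
// Sketch (Lucene 4.x): opt in to phrase-query generation on the classic QueryParser.
QueryParser parser = new QueryParser(Version.LUCENE_4_9, "resume_text", analyzer);
parser.setAutoGeneratePhraseQueries(true); // disabled by default
// A term the analyzer splits into sub-terms now parses as a phrase query
// instead of independent terms joined by the default operator.
Query q = parser.parse("LS\\/MS");
```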
I found the problem. But it makes no sense to me.
If I set the field type to be tokenized, it works. But if I set it to not
be tokenized the search fails. i.e. I have to pass in true to the method.
theFieldType.setTokenized(storeTokenized);
I want the field to be stored as un-tokenized.
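That behavior is consistent with how un-tokenized fields work: the value is indexed as one exact term that never passes through the analyzer, while QueryParser analyzes (e.g. lowercases) the query text, so the two no longer line up. Querying with an exact TermQuery avoids the mismatch. A sketch, assuming Lucene 4.x (the field name and value are illustrative):

```java
// Sketch (Lucene 4.x): index a value as a single, un-analyzed term.
FieldType ft = new FieldType();
ft.setIndexed(true);
ft.setStored(true);      // keep the original value retrievable
ft.setTokenized(false);  // one exact term; the analyzer is bypassed
doc.add(new Field("sn", "AB-123", ft));

// Query side: match the exact term directly instead of going through
// QueryParser, which would analyze (and e.g. lowercase) the input.
Query q = new TermQuery(new Term("sn", "AB-123"));
```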
Looks like you have to clone it prior to using with any IndexWriter
instances.
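Concretely, that means keeping the original config as a pristine template and handing each IndexWriter its own clone; a sketch, assuming Lucene 4.x:

```java
// Sketch (Lucene 4.x): never pass the template config to a writer directly;
// clone it while it is still unused, once per IndexWriter.
IndexWriterConfig template = new IndexWriterConfig(Version.LUCENE_4_9, analyzer);
IndexWriter writer = new IndexWriter(directory, template.clone());
// ... later, after writer.close(), a replacement writer gets a fresh clone:
IndexWriter writer2 = new IndexWriter(directory, template.clone());
```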
On Mon, Aug 11, 2014 at 2:49 PM, Sheng sheng...@gmail.com wrote:
I tried to create a clone of IndexWriterConfig with
indexWriterConfig.clone() for re-creating a new IndexWriter, but then I
got this very annoying
So the indexWriterConfig.clone() failed at this step:
clone.indexerThreadPool = indexerThreadPool.clone();
(see http://grepcode.com/file/repo1.maven.org/maven2/org.apache.lucene/lucene-core/4.7.0/org/apache/lucene/index/LiveIndexWriterConfig.java#LiveIndexWriterConfig.0indexerThreadPool)
I only have the source to 4.6.1, but if you look at the constructor of
IndexWriter there, it looks like this:
public IndexWriter(Directory d, IndexWriterConfig conf) throws IOException {
    conf.setIndexWriter(this); // prevent reuse by other instances
The setter throws an exception if the config has already been attached to
another IndexWriter.
From the source code of DocumentsWriterPerThreadPool, the variable
numThreadStatesActive seems to only ever increase, which explains why the
assertion numThreadStatesActive == 0 fails before cloning this object.
So what should be the most appropriate way of re-opening an
IndexWriter if what you have are
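Since the config is bound to a writer at construction time, the simplest way around all of this is usually not to clone at all but to build a brand-new IndexWriterConfig for each writer; a sketch, assuming Lucene 4.x:

```java
// Sketch (Lucene 4.x): IndexWriterConfig is cheap to construct, so build
// a fresh one per writer instead of cloning a config that is already in use.
IndexWriter reopenWriter(Directory dir, Analyzer analyzer) throws IOException {
    IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_4_9, analyzer);
    conf.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
    return new IndexWriter(dir, conf);
}
```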