Hi,
The indexing settings of FieldTypes are not available in the index. The
FieldType information is only used during indexing. IndexReader.document() only
returns stored fields, nothing more.
This is one reason why Lucene 5.x (currently trunk) no longer shares the same
Document / Field API.
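The point above can be illustrated with a minimal sketch against the 4.x API (the field names and values here are made up for illustration):

```java
import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.*;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class StoredVsIndexed {
  public static void main(String[] args) throws IOException {
    RAMDirectory dir = new RAMDirectory();
    IndexWriterConfig cfg = new IndexWriterConfig(
        Version.LUCENE_46, new StandardAnalyzer(Version.LUCENE_46));
    try (IndexWriter writer = new IndexWriter(dir, cfg)) {
      Document doc = new Document();
      doc.add(new StringField("id", "42", Field.Store.YES));                  // indexed + stored
      doc.add(new TextField("body", "the quick brown fox", Field.Store.NO));  // indexed only
      writer.addDocument(doc);
    }
    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      Document retrieved = reader.document(0);
      System.out.println(retrieved.get("id"));    // stored field comes back: "42"
      System.out.println(retrieved.get("body"));  // null -- indexed-only content is gone
      // Even for "id", the original FieldType details (tokenized?
      // IndexOptions?) are not recoverable from the retrieved Document.
    }
  }
}
```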
Hi,
indexWriter.deleteDocuments(query); // same for terms arg
if (indexWriter.hasUncommittedChanges()) {
    indexWriter.commit();
}
hasUncommittedChanges will return true if you deleted (by Term or
Query), even if that Term or Query matches no documents.
Mhm, this is
Uwe,
Thanks for the response. This is what I expected, and it's unfortunate (for
us). Our software puts an abstraction layer above Lucene (for historical
reasons), and expects to be able to pull the same type of term (here's the
abstraction) from the index that it puts in. It doesn't appear as
On Fri, Jan 17, 2014 at 4:59 AM, Mindaugas Žakšauskas min...@gmail.com wrote:
Hi,
indexWriter.deleteDocuments(query); // same for terms arg
if (indexWriter.hasUncommittedChanges()) {
    indexWriter.commit();
}
hasUncommittedChanges will return true if you deleted (by Term or
On Fri, Jan 17, 2014 at 12:13 PM, Michael McCandless
Backing up, what is your app doing, that it so strongly relies on
knowing whether commit() would do anything? Usually, commit is
something you call rarely, for safety purposes to ensure if the
world comes crashing down, you'll have a known
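The usual way to follow Mike's advice, keeping commit() rare while still making changes searchable, is a near-real-time reader. A sketch against the 4.x API (the helper name is made up; SearcherManager wraps this same pattern at a higher level):

```java
import java.io.IOException;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;

public class NrtRefresh {
  // Open the initial NRT reader with DirectoryReader.open(writer, true);
  // after that, refresh it cheaply instead of committing.
  static DirectoryReader refresh(DirectoryReader reader, IndexWriter writer)
      throws IOException {
    DirectoryReader newReader = DirectoryReader.openIfChanged(reader, writer, true);
    if (newReader == null) {
      return reader;      // nothing changed since the last refresh
    }
    reader.close();
    return newReader;     // sees uncommitted adds and deletes
  }
}
```

commit() is then called on a slow schedule (or at shutdown) purely for crash safety, not for search visibility.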
On Fri, Jan 17, 2014 at 7:42 AM, Mindaugas Žakšauskas min...@gmail.com wrote:
On Fri, Jan 17, 2014 at 12:13 PM, Michael McCandless
Backing up, what is your app doing, that it so strongly relies on
knowing whether commit() would do anything? Usually, commit is
something you call rarely, for
Are you sure you're using 4.4?
Because ... this looks like
https://issues.apache.org/jira/browse/LUCENE-5048 but that was
supposedly fixed in 4.4.
Mike McCandless
http://blog.mikemccandless.com
On Thu, Jan 16, 2014 at 5:33 PM, Matthew Petersen mdpe...@gmail.com wrote:
I’m having an issue
You might want to look at the soft/hard commit options for ensuring
data integrity vs. latency.
Here's a blog on this topic at the Solr level, but all the Solr stuff
is realized at the Lucene level
eventually, so
Hey!
Lucene's API has the ability to change a document by removing and re-adding
it. I have the need to add/remove a term by docId/field.
Is there a possibility to link a term, through a field, to its
existing document? (field - terms - term - docIds)
writer.removeTerm
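For what it's worth, there is no writer.removeTerm in Lucene; once indexed, a document's postings are immutable. The supported way to change a document's terms is to re-index the whole document, typically via updateDocument. A sketch (the "id" and "body" field names are assumptions):

```java
import java.io.IOException;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

public class ReplaceDoc {
  // There is no per-term update: to "remove a term" from a document,
  // rebuild the document without it and atomically replace the old one.
  static void replace(IndexWriter writer, String id, String newBody)
      throws IOException {
    Document doc = new Document();
    doc.add(new StringField("id", id, Field.Store.YES));
    doc.add(new TextField("body", newBody, Field.Store.NO));
    // Deletes every document whose "id" term matches, then adds the new one.
    writer.updateDocument(new Term("id", id), doc);
  }
}
```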
I'm sure. I had seen that issue and it looked similar but the stack trace
is slightly different. I've found that if I replace the
Cl2oTaxonomyWriterCache with the LruTaxonomyWriterCache the problem seems
to go away. I'm working right now on running a test that will prove this
but it takes a
FYI for those with spatial interests…
From: Smiley, David W. dsmi...@mitre.org
Date: Friday, January 17, 2014 at 11:53 AM
To: Demeter Sztanko szta...@gmail.com
Cc:
Hi,
yes, Lucene is not for OCR. We are using another library for OCR. But we
need to get the source text for Lucene. Thanks for the links, I'll take a
look at them.
Bye,
Deniz
On Thu, Jan 16, 2014 at 10:05 PM, Allison, Timothy B. talli...@mitre.org wrote:
To confirm, Lucene does not perform
I've confirmed that using the LruTaxonomyWriterCache solves the issue for
me. It would appear there is in fact a bug in the Cl2oTaxonomyWriterCache,
or I am using it incorrectly (I use it as the default, with no customization).
On Fri, Jan 17, 2014 at 9:29 AM, Matthew Petersen mdpe...@gmail.com wrote:
Hi all,
In Lucene 3.x the RAMDirectory was Serializable.
In 4.x it isn't any more...
What's the best/most performant/easiest way to serialize the RAMDir in 4.6.0?
TIA
--
View this message in context:
http://lucene.472066.n3.nabble.com/Serializing-RAMDirectory-in-4-6-0-tp4111999.html
Sent from
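One approach (a sketch, not an official API; the helper names are made up) is to copy each file's bytes yourself through the Directory API, which works for any Directory implementation in 4.6:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;
import org.apache.lucene.store.IndexOutput;
import org.apache.lucene.store.RAMDirectory;

public class RAMDirBytes {
  // Dump every file in the directory to a map of byte arrays, which you
  // can then serialize however you like (Java serialization, protobuf, ...).
  static Map<String, byte[]> toBytes(Directory dir) throws IOException {
    Map<String, byte[]> files = new HashMap<>();
    for (String name : dir.listAll()) {
      try (IndexInput in = dir.openInput(name, IOContext.READONCE)) {
        byte[] data = new byte[(int) in.length()];
        in.readBytes(data, 0, data.length);
        files.put(name, data);
      }
    }
    return files;
  }

  // Rebuild a RAMDirectory from the map.
  static RAMDirectory fromBytes(Map<String, byte[]> files) throws IOException {
    RAMDirectory dir = new RAMDirectory();
    for (Map.Entry<String, byte[]> e : files.entrySet()) {
      try (IndexOutput out = dir.createOutput(e.getKey(), IOContext.DEFAULT)) {
        out.writeBytes(e.getValue(), e.getValue().length);
      }
    }
    return dir;
  }
}
```

If the source is already an on-disk or in-memory Directory, the RAMDirectory(Directory, IOContext) copy constructor does the loading half of this for you.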
Do you have a test which reproduces the error? Are you adding categories
with very deep hierarchies?
Shai
On Fri, Jan 17, 2014 at 11:59 PM, Matthew Petersen mdpe...@gmail.com wrote:
I've confirmed that using the LruTaxonomyWriterCache solves the issue for
me. It would appear there is in fact
I do have a test that will reproduce. I'm not adding categories with very
deep hierarchies; I'm adding 129 category paths per document (all docs have
paths with the same label), with each path having one value. All of the values
are completely random and likely unique. It's basically a worst case
Can you open an issue and attach the test there?
On Jan 18, 2014 12:41 AM, Matthew Petersen mdpe...@gmail.com wrote:
I do have a test that will reproduce. I'm not adding categories with very
deep hierarchies, I'm adding 129 category paths per document (all docs have
paths with same label)