On Thu, Aug 22, 2013 at 1:48 AM, Sean Bridges wrote:
> Is there a supported DocValuesFormat that doesn't load all the values into
> ram?
Not with any current release, but in Lucene 4.5, if all goes well, the
official implementation will work that way (I spent essentially the
entire last week on th
Is there a supported DocValuesFormat that doesn't load all the values into
ram?
Our use case is that we have 16-byte ids for all our documents. We used
to store the ids in stored fields, and look up the stored field for each
search hit. We got much better performance when we switched to stori
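A minimal sketch of the doc-values approach being described, assuming Lucene
4.2+ and a BinaryDocValuesField holding the 16-byte id (the field name "id_dv"
is made up):

import java.io.IOException;

import org.apache.lucene.document.BinaryDocValuesField;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.AtomicReader;
import org.apache.lucene.index.BinaryDocValues;
import org.apache.lucene.util.BytesRef;

class IdLookupSketch {
    // indexing: attach the 16-byte id as doc values instead of a stored field
    static Document buildDoc(byte[] sixteenByteId) {
        Document doc = new Document();
        doc.add(new BinaryDocValuesField("id_dv", new BytesRef(sixteenByteId)));
        return doc;
    }

    // searching: resolve the id for a hit without a stored-field lookup
    static BytesRef lookupId(AtomicReader reader, int hitDocId) throws IOException {
        BinaryDocValues ids = reader.getBinaryDocValues("id_dv");
        BytesRef result = new BytesRef();
        ids.get(hitDocId, result);
        return result;
    }
}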
Got it, thanks.
On 2013-8-21 at 10:03 PM, "Uwe Schindler" wrote:
> Hi,
>
> It is numeric order according to the rules of Type#compareTo(Type) [Type =
> Integer, Long, Float, Double]. But be careful: unless the precision step of
> the numeric field is infinite, there are additional helper terms after the
> la
On 08/20/2013 07:53 PM, Mirko Sertic wrote:
> I am using Lucene 4.4, and I am hitting CPU usage limitations on my
> Core i7 Windows 7 64-bit box. It seems like the I/O system (SSD) still has
> capacity, but when running 8 threads searching on the index in
> parallel, all logical CPU cores are at 100% usa
Hi,
Could you send the content of org.apache.lucene.codecs.Codec and pom.xml?
2013/8/21 Gayo Diallo
> Hi guys,
>
> Thank you both for the help.
>
> @Adriano: we have just tried the solution that worked for you without
> success.
>
> @Duke: we could see in the fat jar in the META-INF/services f
Hi,
Just to check, otherwise we will never find out your problem: in your final
merged JAR file, what are the contents of the file
META-INF/services/org.apache.lucene.codecs.Codec?
(Please also list the contents of the PostingsFormat file, because those are
also loaded using SPI.)
This file lists al
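As a side note, one quick way to see what the merged JAR actually registers
via SPI is to ask Lucene at runtime; a minimal sketch:

import org.apache.lucene.codecs.Codec;
import org.apache.lucene.codecs.PostingsFormat;

public class SpiCheck {
    public static void main(String[] args) {
        // Prints whatever codecs/postings formats the SPI descriptors in the
        // (merged) JAR register; a missing entry here usually means the
        // META-INF/services files were overwritten during the merge.
        System.out.println("Codecs: " + Codec.availableCodecs());
        System.out.println("PostingsFormats: " + PostingsFormat.availablePostingsFormats());
    }
}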
Hi guys,
Thank you both for the help.
@Adriano: we have just tried the solution that worked for you
without success.
@Duke: we could see in the fat jar in the META-INF/services folder
that we have org.apache.lucene.codecs.Codec and in
org/apache/luc
On Wed, Aug 21, 2013 at 11:30 AM, Sean Bridges wrote:
> What is the recommended way to use DiskDocValuesFormat in production if we
> can't reindex when we upgrade?
I'm not going to recommend using any experimental codecs in production, but...
1. with the 4.3 jar file: IWC.setCodec(Codec.getDefault()
What is the recommended way to use DiskDocValuesFormat in production if we
can't reindex when we upgrade?
Will the 4.4 version of DDVF be backwards compatible, or should we make our
own copy of DDVF and give it a different codec name to protect ourselves
against incompatible changes?
Thanks,
Sean
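For reference, the usual way to plug a non-default DocValuesFormat into a 4.x
index is a per-field override on the codec passed to IndexWriterConfig; a
sketch only (the 4.3 Lucene42Codec, the experimental DiskDocValuesFormat from
the lucene-codecs module, and the field name "id_dv" are assumptions):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.codecs.DocValuesFormat;
import org.apache.lucene.codecs.diskdv.DiskDocValuesFormat;
import org.apache.lucene.codecs.lucene42.Lucene42Codec;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.util.Version;

class DiskDvConfigSketch {
    static IndexWriterConfig build() {
        IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_43,
                new StandardAnalyzer(Version.LUCENE_43));
        // route one field's doc values to the on-disk format,
        // everything else stays on the default
        iwc.setCodec(new Lucene42Codec() {
            @Override
            public DocValuesFormat getDocValuesFormatForField(String field) {
                if ("id_dv".equals(field)) {
                    return new DiskDocValuesFormat();
                }
                return super.getDocValuesFormatForField(field);
            }
        });
        return iwc;
    }
}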
I had this same problem using a library that used a new codec for Lucene.
When creating the jar, Maven was replacing the META-INF contents. I solved the
problem with a configuration in pom.xml for
META-INF/services/org.apache.lucene.codecs.Codec
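A hedged sketch of the kind of maven-shade-plugin configuration being
described, which appends that SPI descriptor instead of letting one JAR's copy
replace another's (the exact original configuration is an assumption):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <transformers>
      <!-- concatenate the SPI files from all JARs instead of overwriting them -->
      <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
        <resource>META-INF/services/org.apache.lucene.codecs.Codec</resource>
      </transformer>
      <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
        <resource>META-INF/services/org.apache.lucene.codecs.PostingsFormat</resource>
      </transformer>
    </transformers>
  </configuration>
</plugin>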
Hi,
It is numeric order according to the rules of Type#compareTo(Type) [Type =
Integer, Long, Float, Double]. But be careful: unless the precision step of the
numeric field is infinite, there are additional helper terms after the largest
numeric value. You can differentiate them on the term pre
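A sketch of how that plays out when walking the terms of an int field:
NumericUtils can skip the lower-precision helper terms and decode the
full-precision ones in numeric order (the field name "intField" is made up):

import java.io.IOException;

import org.apache.lucene.index.AtomicReader;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.NumericUtils;

class NumericTermsSketch {
    static void dump(AtomicReader reader) throws IOException {
        Terms terms = reader.terms("intField");
        if (terms == null) {
            return;
        }
        // keep only the full-precision terms, skipping the helper terms
        // that a finite precision step adds after the largest value
        TermsEnum termsEnum = NumericUtils.filterPrefixCodedInts(terms.iterator(null));
        BytesRef term;
        while ((term = termsEnum.next()) != null) {
            System.out.println(NumericUtils.prefixCodedToInt(term));
        }
    }
}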
Hi GD,
No idea why Maven doesn't work; maybe the project structure is a little
complex? Also, Lucene 2.9 certainly has no problem with this, because it does
not depend on SPI.
First you need to check the fat jar file and make sure the file
META-INF/services/o.a.l.codecs.Codec and its counterparts are there.
If n
Hello.
I am looking for a way to compare web pages of products using various
fields in the query. Does anyone know how I should set up a query for this?
I'm new to Lucene. Do I have to use MultiTermQuery?
Adriano
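A minimal sketch of one common approach, a BooleanQuery with one clause per
product field (the field names are made up; MultiFieldQueryParser is another
option when the input is free text across several fields):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

class ProductQuerySketch {
    // SHOULD clauses mean "match any of these"; use MUST to require all of them
    static Query build(String brand, String name) {
        BooleanQuery query = new BooleanQuery();
        query.add(new TermQuery(new Term("brand", brand)), BooleanClause.Occur.SHOULD);
        query.add(new TermQuery(new Term("name", name)), BooleanClause.Occur.SHOULD);
        return query;
    }
}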
Sharing some code on this topic.
QueryParser queryParser = new QueryParser(Version.LUCENE_30, "my_field",
new StandardAnalyzer(Version.LUCENE_30));
Query prefixQuery = queryParser.parse("t*");
indexSearcher.search(prefixQuery, collector);
The default MultiTermQuery rewrite method
(MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT) will be used, if
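If a different rewrite is wanted, for example scored boolean expansion instead
of the constant-score default, it can be set on the parser; a sketch assuming
the Lucene 4.x classic QueryParser:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.MultiTermQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

class RewriteSketch {
    static Query parsePrefix() throws Exception {
        QueryParser queryParser = new QueryParser(Version.LUCENE_44, "my_field",
                new StandardAnalyzer(Version.LUCENE_44));
        // rewrite parsed prefix/wildcard queries into scoring BooleanQueries
        queryParser.setMultiTermRewriteMethod(MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE);
        return queryParser.parse("t*");
    }
}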
If we traverse a string field with the code below, the values come back in
string (byte) order:
Terms terms = reader.terms("strField");
if (terms != null) {
    TermsEnum termsEnum = terms.iterator(null);
    BytesRef text;
    while ((text = termsEnum.next()) != null) {
        // terms arrive here in string (byte) order
    }
}
How about a numeric field? Int
Hello.
My main aim is the following:
a. Index both on a line and a doc basis (line basis for providing phrase
suggestions/infix suggestions; doc basis for firing
BooleanQuery/wildcard queries etc.)
b. Yeah, for boolean/wildcard etc. the user input will be "xxx" and "yyy" and
I will show the document name.
c. W
On 08/21/2013 09:51 AM, Ankit Murarka wrote:
> Yeah..I eventually DID THIS
>
> Just a small question : Knowing that BooleanQuery/PrefixQuery/WildCardQuery
> might also run fine even if I index the complete document as opposed to doing
> it Line by Line. Shouldn't I do it this way rather tha
Yeah..I eventually DID THIS
Just a small question: knowing that
BooleanQuery/PrefixQuery/WildcardQuery might also run fine even if I
index the complete document as opposed to doing it line by line,
shouldn't I do it this way rather than indexing each line for
Boolean/Prefix/Wildcard als
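One possible arrangement of the "index both on line and doc basis" aim, kept
as a sketch with made-up field names (StringField/TextField assume Lucene 4.x;
the older Field constructors would be the 3.x equivalent):

import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;

class DualGranularitySketch {
    // one "doc" document for boolean/prefix/wildcard queries,
    // plus one "line" document per line for the suggesters
    static List<Document> build(String fileName, List<String> lines) {
        List<Document> docs = new ArrayList<Document>();

        Document whole = new Document();
        whole.add(new StringField("type", "doc", Field.Store.YES));
        whole.add(new StringField("name", fileName, Field.Store.YES));
        whole.add(new TextField("contents", join(lines), Field.Store.NO));
        docs.add(whole);

        for (String line : lines) {
            Document perLine = new Document();
            perLine.add(new StringField("type", "line", Field.Store.YES));
            perLine.add(new StringField("name", fileName, Field.Store.YES));
            perLine.add(new TextField("line", line, Field.Store.YES));
            docs.add(perLine);
        }
        return docs;
    }

    private static String join(List<String> lines) {
        StringBuilder sb = new StringBuilder();
        for (String line : lines) {
            sb.append(line).append('\n');
        }
        return sb.toString();
    }
}

At search time the "type" field can restrict a query to whole documents or to
lines, so the two granularities do not pollute each other's results.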
On 08/21/2013 08:38 AM, Ankit Murarka wrote:
> Hello.
> I tried with
>
> doc.add(new Field("contents",line,Field.Store.YES,Field.Index.ANALYZED));
>
> The BooleanQuery/PrefixMatch/WildCard all started running fine.
>
> But it broke the existing code for Phrase Suggestion/InfixSuggester. Now
>