[ http://issues.apache.org/jira/browse/LUCENE-505?page=comments#action_12368447 ]
Yonik Seeley commented on LUCENE-505:
-
>We're still using TermScorer, which generates the fakeNorms() regardless of
>omitNorms on or off.
Let me focus on that point for t
[ http://issues.apache.org/jira/browse/LUCENE-505?page=comments#action_12368443 ]
Steven Tamm commented on LUCENE-505:
We're still using TermScorer, which generates the fakeNorms() regardless of
omitNorms on or off. ConstantTermScorer is a step in the
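A rough sketch of the distinction under discussion, with illustrative names
(this is an analog, not the attached patch): TermScorer multiplies each raw
score by a decoded norm byte, so the reader has to hand it a norms array even
when norms were omitted, while a constant variant can skip the lookup
entirely.

// Illustrative analog, not Lucene source; decodeNorm stands in for
// Similarity.decodeNorm and all values are made up.
public class ConstantNormSketch {
  static float decodeNorm(byte b) { return b == 0 ? 0.0f : 1.0f; }
  public static void main(String[] args) {
    float weightValue = 2.0f;       // query weight (assumed)
    float tf = 1.5f;                // term-frequency factor (assumed)
    byte[] fakeNorms = new byte[8]; // one byte per doc, each encoding 1.0f
    java.util.Arrays.fill(fakeNorms, (byte) 1);
    // TermScorer-style path: fakeNorms must exist just to multiply by 1.0f
    float withNorms = tf * weightValue * decodeNorm(fakeNorms[3]);
    // ConstantTermScorer-style path: no norms array needed at all
    float constant = tf * weightValue;
    System.out.println(withNorms == constant); // true
  }
}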
[ http://issues.apache.org/jira/browse/LUCENE-505?page=comments#action_12368440 ]
Yonik Seeley commented on LUCENE-505:
-
> One can now omit norms when indexing, and, if such a field is searched with a
> normal query then fakeNorms will be used.
> But a
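For reference, omitting norms at index time with the 1.9 API looks like this
(the path and field names are illustrative):

import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class OmitNormsExample {
  public static void main(String[] args) throws Exception {
    IndexWriter writer = new IndexWriter("/tmp/index", new SimpleAnalyzer(), true);
    Document doc = new Document();
    // NO_NORMS indexes the value untokenized and writes no norms for this
    // field; searching it with a normal query is where fakeNorms comes in.
    doc.add(new Field("id", "doc-42", Field.Store.YES, Field.Index.NO_NORMS));
    writer.addDocument(doc);
    writer.close();
  }
}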
[ http://issues.apache.org/jira/browse/LUCENE-505?page=comments#action_12368437 ]
Yonik Seeley commented on LUCENE-505:
-
> I made the change less for MultiReader, but to prevent the instantiation of
> the fakeNorms array (which is an extra 1MB of useles
[ http://issues.apache.org/jira/browse/LUCENE-505?page=all ]
Steven Tamm updated LUCENE-505:
---
Attachment: LazyNorms.patch
Here's a patch where, if you set LOAD_NORMS_INTO_MEM to false, the norms are
instead read from disk every time. When combi
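Since norms are one byte per document, the lazy path can seek into the norms
file per lookup instead of holding a byte[maxDoc] in memory. A minimal sketch
of that shape against the 1.9 store API (illustrative, not the attached patch;
the file name argument stands in for a segment's ".f<n>" norms file):

import java.io.IOException;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IndexInput;

class LazyNormReader {
  private final IndexInput in;
  LazyNormReader(Directory dir, String normFileName) throws IOException {
    in = dir.openInput(normFileName);
  }
  byte norm(int doc) throws IOException {
    in.seek(doc);          // one byte per document: offset == doc number
    return in.readByte();  // a disk read (or buffer refill) per lookup
  }
}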
SegmentTermEnum.next() doesn't maintain prevBuffer at end
-
Key: LUCENE-508
URL: http://issues.apache.org/jira/browse/LUCENE-508
Project: Lucene - Java
Type: Bug
Components: Index
Versions: 1.9, 2.0
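Going by the summary alone, the hazard is the usual one for previous-term
tracking: if next() returns false at the end without updating the
previous-term buffer, a caller that consults it afterwards sees a stale value.
A minimal standalone analog (not SegmentTermEnum source):

class PrevTrackingEnum {
  private final String[] terms;
  private int position = -1;
  String prev, current;
  PrevTrackingEnum(String[] terms) { this.terms = terms; }
  boolean next() {
    if (position++ >= terms.length - 1) {
      prev = current;   // the update the summary says is missing at the end
      current = null;
      return false;
    }
    prev = current;
    current = terms[position];
    return true;
  }
}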
[ http://issues.apache.org/jira/browse/LUCENE-507?page=comments#action_12368403 ]
Steven Tamm commented on LUCENE-507:
In Lucene 1.9, there are a lot of unused-local-variable and unused-import warnings.
> CLONE -[PATCH] remove unused variables
> --
CLONE -[PATCH] remove unused variables
--
Key: LUCENE-507
URL: http://issues.apache.org/jira/browse/LUCENE-507
Project: Lucene - Java
Type: Improvement
Components: Search
Versions: unspecified
Environment: Operating Sy
[ http://issues.apache.org/jira/browse/LUCENE-506?page=all ]
Steven Tamm updated LUCENE-506:
---
Attachment: Prefetching.patch
This also includes two additional test cases. The public exposure to the
prefetching is controlled solely by IndexReader.open(Dire
Optimize Memory Use for Short-Lived Indexes (Do not load TermInfoIndex if you
know the queries ahead of time)
-
Key: LUCENE-506
URL: http://issues.apache.org/jira/browse/L
[ http://issues.apache.org/jira/browse/LUCENE-505?page=comments#action_12368389 ]
Steven Tamm commented on LUCENE-505:
> I also worry about performance with this change. Have you benchmarked this
> while searching large indexes?
Yes, see below.
> Fo
[ http://issues.apache.org/jira/browse/LUCENE-505?page=all ]
Doug Cutting updated LUCENE-505:
Version: 2.0
(was: 1.9)
I don't see how the memory requirements of MultiReader are twice those of
SegmentReader. MultiReader does not call nor
[ http://issues.apache.org/jira/browse/LUCENE-505?page=comments#action_12368381 ]
Steven Tamm commented on LUCENE-505:
I made the change less for MultiReader, but to prevent the instantiation of the
fakeNorms array (which is an extra 1MB of useless memo
[ http://issues.apache.org/jira/browse/LUCENE-505?page=comments#action_12368378 ]
Yonik Seeley commented on LUCENE-505:
-
> MultiReader.norms() is very inefficient: it has to construct a byte array
> that's as long as all the documents in every
> segment
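As a standalone analog of the complaint (paraphrased; details may differ from
the 1.9 source), MultiReader.norms(field) effectively concatenates every
segment's norms into one array spanning all documents, roughly 1 MB per
million documents per field, on top of the per-segment arrays already held:

// Illustrative analog: starts[i] is sub-reader i's document offset.
class NormsConcat {
  static byte[] concatenatedNorms(byte[][] segmentNorms, int[] starts, int maxDoc) {
    byte[] bytes = new byte[maxDoc];   // one big array covering every segment
    for (int i = 0; i < segmentNorms.length; i++)
      System.arraycopy(segmentNorms[i], 0, bytes, starts[i], segmentNorms[i].length);
    return bytes;                      // cached per field alongside per-segment norms
  }
}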
[ http://issues.apache.org/jira/browse/LUCENE-505?page=all ]
Steven Tamm updated LUCENE-505:
---
Attachment: NormFactors.patch
Sorry, I didn't remove whitespace in the previous patch. This one's easier to
read.
svn diff --diff-cmd diff -x "-b -u" works bet
[ http://issues.apache.org/jira/browse/LUCENE-505?page=all ]
Steven Tamm updated LUCENE-505:
---
Attachment: NormFactors.patch
This patch doesn't include my previous change to TermScorer. It passes all of
the Lucene unit tests in addition to our set of tes
MultiReader.norm() takes up too much memory: norms byte[] should be made into
an Object
---
Key: LUCENE-505
URL: http://issues.apache.org/jira/browse/LUCENE-505
Project: Lucene - Java
: distribution, we should start documenting their changes. I suggest that we
: add a file contrib/CHANGES.txt. This way we don't pollute the top-level
: changes file. Having one changes file per contrib project on the other
: hand makes it more difficult to get an overview, so one in contrib seems
[ http://issues.apache.org/jira/browse/LUCENE-500?page=comments#action_12368358 ]
Doug Cutting commented on LUCENE-500:
-
+1
This looks like a good start towards 2.0.
> Lucene 2.0 requirements - Remove all deprecated code
> -
Hi,
as most parts of the contrib area are now part of the official Lucene
distribution, we should start documenting their changes. I suggest that we
add a file contrib/CHANGES.txt. This way we don't pollute the top-level
changes file. Having one changes file per contrib project on the other
ha
On Wednesday 01 March 2006 16:21, DM Smith wrote:
> I find that 1.9 reads my 1.4.3 built indexes just fine. But not the
> other way around.
That's exactly how it is supposed to be.
Regards
Daniel
--
http://www.danielnaber.de
-
On Wed, 1 Mar 2006, Doug Cutting wrote:
It does correspond to a precise SVN version, but what we prefer is a tag.
The tag for 1.9-final is:
http://svn.apache.org/repos/asf/lucene/java/tags/lucene_1_9_final/
Tags should never be revised. If you're paranoid, then you could also note
the revi
Andi Vajda wrote:
It would seem to me that the source code snapshot that is made to
release 'official' source and binary tarballs on the Lucene website
should correspond to a precise svn version
It does correspond to a precise SVN version, but what we prefer is a
tag. The tag for 1.9-final i
[ http://issues.apache.org/jira/browse/LUCENE-504?page=all ]
Joerg Henss updated LUCENE-504:
---
Attachment: TestFuzzyQueryError.java
Simple test showing the error
> FuzzyQuery produces a "java.lang.NegativeArraySizeException" in
> PriorityQueue.initialize
I have just upgraded to 1.9-final and am now testing my use of it.
One question regarding compatibility.
Does 1.4.3 search 1.9-final built indexes?
I find that 1.9 reads my 1.4.3 built indexes just fine. But not the
other way around.
-
FuzzyQuery produces a "java.lang.NegativeArraySizeException" in
PriorityQueue.initialize if I use Integer.MAX_VALUE as
BooleanQuery.MaxClauseCount
--
Jörg,
could you please add this to JIRA so that things don't get lost. If you
have a patch and/or a testcase showing the problem, it would be great if
you could append it to JIRA as well.
thanks,
Bernhard
Jörg Henß wrote:
Hi,
FuzzyQuery produces a "java.lang.NegativeArraySizeException" in
PriorityQ
Hi,
FuzzyQuery produces a "java.lang.NegativeArraySizeException" in
PriorityQueue.initialize if I use Integer.MAX_VALUE as
BooleanQuery.MaxClauseCount. This is because it adds 1 to MaxClauseCount and
tries to allocate an array of that size (I think it overflows to MIN_VALUE).
Usually nobody needs s
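The overflow itself is easy to confirm in isolation (illustrative class, not
FuzzyQuery code):

public class MaxClauseOverflow {
  public static void main(String[] args) {
    int maxClauseCount = Integer.MAX_VALUE;
    int size = maxClauseCount + 1;   // wraps around to Integer.MIN_VALUE
    System.out.println(size);        // prints -2147483648
    int[] heap = new int[size];      // throws java.lang.NegativeArraySizeException
  }
}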
Contrib: ThaiAnalyzer to enable Thai full-text search in Lucene
---
Key: LUCENE-503
URL: http://issues.apache.org/jira/browse/LUCENE-503
Project: Lucene - Java
Type: New Feature
Components: Analysis