[jira] Commented: (LUCENE-1815) Geohash encode/decode floating point problems

2009-12-09 Thread Wouter Heijke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-1815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12787973#action_12787973
 ] 

Wouter Heijke commented on LUCENE-1815:
---

I have been happily using this for some time now:

http://code.google.com/p/geospatialweb/source/browse/trunk/geohash/src/Geohash.java


> Geohash encode/decode floating point problems
> -
>
> Key: LUCENE-1815
> URL: https://issues.apache.org/jira/browse/LUCENE-1815
> Project: Lucene - Java
>  Issue Type: Bug
>  Components: contrib/spatial
>Affects Versions: 2.9
>Reporter: Wouter Heijke
>Priority: Minor
>
> I'm finding the Geohash support in the spatial package to be rather 
> unreliable.
> Here is the outcome of a test that encodes/decodes the same lat/lon and 
> geohash a few times.
> The format:
> action geohash=(latitude, longitude)
> The result:
> encode u173zq37x014=(52.3738007,4.8909347)
> decode u173zq37x014=(52.3737996,4.890934)
> encode u173zq37rpbw=(52.3737996,4.890934)
> decode u173zq37rpbw=(52.3737996,4.89093295)
> encode u173zq37qzzy=(52.3737996,4.89093295)
> If I now change to the Google Code implementation:
> encode u173zq37x014=(52.3738007,4.8909347)
> decode u173zq37x014=(52.37380061298609,4.890934377908707)
> encode u173zq37x014=(52.37380061298609,4.890934377908707)
> decode u173zq37x014=(52.37380061298609,4.890934377908707)
> encode u173zq37x014=(52.37380061298609,4.890934377908707)
> Note the differences between the geohashes in both situations and the 
> lat/lons!
> Now things get worse if you work on low-precision geohashes:
> decode u173=(52.0,4.0)
> encode u14zg429yy84=(52.0,4.0)
> decode u14zg429yy84=(52.0,3.99)
> encode u14zg429ywx6=(52.0,3.99)
> and with the Google Code implementation:
> decode u173=(52.20703125,4.5703125)
> encode u173=(52.20703125,4.5703125)
> decode u173=(52.20703125,4.5703125)
> encode u173=(52.20703125,4.5703125)
> We are using geohashes extensively and will now, unfortunately, use the 
> Google Code version.
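
For reference, a minimal round-trip check along the lines of the report above, 
as a sketch only: it assumes the contrib GeoHashUtils encode(lat, lon) / 
decode(hash) signatures, and the {lat, lon} ordering of the decoded array is an 
assumption, so adjust to the actual class.

{code}
import org.apache.lucene.spatial.geohash.GeoHashUtils;

// Re-encode the decoded point a few times; a stable implementation should
// converge to a fixed geohash rather than drifting as in the report above.
public class GeoHashRoundTrip {
  public static void main(String[] args) {
    double lat = 52.3738007, lon = 4.8909347;
    String hash = GeoHashUtils.encode(lat, lon);
    System.out.println("encode " + hash + "=(" + lat + "," + lon + ")");
    for (int i = 0; i < 3; i++) {
      double[] point = GeoHashUtils.decode(hash);   // assumed to return {lat, lon}
      System.out.println("decode " + hash + "=(" + point[0] + "," + point[1] + ")");
      hash = GeoHashUtils.encode(point[0], point[1]);
      System.out.println("encode " + hash + "=(" + point[0] + "," + point[1] + ")");
    }
  }
}
{code}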




[jira] Commented: (LUCENE-2089) explore using automaton for fuzzyquery

2009-12-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12787989#action_12787989
 ] 

Uwe Schindler commented on LUCENE-2089:
---

Cool! The code looks quite simple (but maybe that is because of n=1). But 
FuzzyQuery with n>1 is seldom used, isn't it? And how slow is it?

> explore using automaton for fuzzyquery
> --
>
> Key: LUCENE-2089
> URL: https://issues.apache.org/jira/browse/LUCENE-2089
> Project: Lucene - Java
>  Issue Type: Wish
>  Components: Search
>Reporter: Robert Muir
>Assignee: Mark Miller
>Priority: Minor
> Attachments: Moman-0.2.1.tar.gz, TestFuzzy.java
>
>
> Mark brought this up on LUCENE-1606 (I will assign this to him, I know he is 
> itching to write that nasty algorithm).
> We can optimize FuzzyQuery by using AutomatonTermEnum; here is my idea:
> * Up front, calculate the maximum required K edits needed to match the user's 
> supplied float threshold.
> * For at least the common K (1, 2, 3, etc.) we should use AutomatonTermEnum. If 
> it's outside of that, maybe use the existing slow logic. At high K, it will seek 
> too much to be helpful anyway.
> I modified my wildcard benchmark to generate random fuzzy queries.
> * Pattern: 7N stands for a string of seven Ns, etc.
> * AvgMS_DFA: this is the time spent creating the automaton (constructor)
> ||Pattern||Iter||AvgHits||AvgMS(old)||AvgMS (new,total)||AvgMS_DFA||
> |7N|10|64.0|4155.9|38.6|20.3|
> |14N|10|0.0|2511.6|46.0|37.9| 
> |28N|10|0.0|2506.3|93.0|86.6|
> |56N|10|0.0|2524.5|304.4|298.5|
> As you can see, this prototype is no good yet, because it creates the DFA in 
> a slow way. Right now it creates an NFA, and all this wasted time is in the 
> NFA->DFA conversion.
> So, for a very long string, it just gets worse and worse. This has nothing to 
> do with Lucene, and here you can see the TermEnum is fast (AvgMS - 
> AvgMS_DFA); there is no problem there.
> Instead we should just build a DFA to begin with, maybe with this paper: 
> http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.652
> We can precompute the tables with that algorithm up to some reasonable K, and 
> then I think we are ok.
> The paper references using http://portal.acm.org/citation.cfm?id=135907 for 
> linear minimization; if someone wants to implement this, they should not worry 
> about minimization.
> In fact, we need to determine at some point whether AutomatonQuery should even 
> minimize FSMs at all, or if it is simply enough for them to be deterministic 
> with no transitions to dead states. (The only code that actually assumes a 
> minimal DFA is the "Dumb" vs "Smart" heuristic, and this can be rewritten as a 
> summation easily.) We need to benchmark really complex DFAs (i.e. write a 
> regex benchmark) to figure out whether minimization is even helping right now.
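
For illustration, here is a sketch of the "up front" K computation mentioned 
above, assuming FuzzyQuery's usual scaling of similarity = 1 - editDistance / 
termLength (the class and helper names and the exact rounding are illustrative, 
not the committed code):

{code}
// Sketch only: the relationship between FuzzyQuery's similarity threshold and
// the maximum edit distance K, under the assumed scaling
// similarity = 1 - editDistance / termLength.
public class MaxEditsSketch {
  public static int maxEditsFor(String term, float minSimilarity) {
    if (minSimilarity >= 1.0f) {
      return 0; // only exact matches can satisfy the threshold
    }
    return (int) ((1.0f - minSimilarity) * term.length());
  }

  public static void main(String[] args) {
    // e.g. a 7-char term with the default 0.5 threshold allows K = 3,
    // which would fall inside the precomputed-automaton range (1, 2, 3).
    System.out.println(maxEditsFor("NNNNNNN", 0.5f));
  }
}
{code}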




RE: [VOTE] Push fast-vector-highlighter mvn artifacts for 3.0.0 and 2.9.1

2009-12-09 Thread Uwe Schindler
Hi all,

The missing maven artifacts for the fast-vector-highlighter contrib of
Lucene Java in version 2.9.1 and 3.0.0 are now available at:

http://repo1.maven.org/maven2/org/apache/lucene/
http://repo2.maven.org/maven2/org/apache/lucene/

Uwe

-
Uwe Schindler
uschind...@apache.org 
Apache Lucene Java Committer
Bremen, Germany
http://lucene.apache.org/java/docs/

> From: Uwe Schindler [mailto:u...@thetaphi.de]
> Sent: Tuesday, December 08, 2009 10:41 PM
> To: java-dev@lucene.apache.org; gene...@lucene.apache.org
> Subject: RE: [VOTE] Push fast-vector-highlighter mvn artifacts for 3.0.0
> and 2.9.1
> 
> I got 3 binding votes from Grant, Mike, and Ted (and one from Simon, who was
> a big help on Sunday evening when I created the artifacts), so I will push
> the maven artifacts onto the rsync repo in a few minutes.
> 
> Thanks!
> 
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
> 
> > -Original Message-
> > From: Uwe Schindler [mailto:u...@thetaphi.de]
> > Sent: Tuesday, December 08, 2009 7:03 PM
> > To: java-dev@lucene.apache.org
> > Subject: [VOTE] Push fast-vector-highlighter mvn artifacts for 3.0.0 and
> > 2.9.1
> >
> > Sorry,
> >
> > I initially didn't want to start a vote, as Grant only proposed to "maybe
> > start one". But as nobody responded (esp. to the questions in this mail),
> > I ask again, and I will start the vote now.
> >
> >
> ==
> > ==
> > Please vote that the missing artifacts of fast-vector-highlighter for
> > Lucene Java 2.9.1 and 3.0.0 should be pushed to repoX.maven.org.
> >
> > You can find the artifacts here:
> > http://people.apache.org/~uschindler/staging-area/
> >
> > This dir contains only the maven folder to be copied to maven-rsync dir
> on
> > p.a.o. The top-level version in the maven metadata is 3.0.0, which
> > conforms
> > to the current state on maven (so during merging both folders during
> > build,
> > I set preference to metadata.xml of 3.0.0).
> >
> > All files are signed by my PGP key (even the 2.9.1 ones; that release
> was
> > originally built by Mike McCandless).
> >
> ==
> > ==
> >
> > What I additionally found out until now (because Simon nagged me):
> >
> > If you compare the JAR files inside the binary ZIP file from the apache
> > archive and the JAR files directly published on maven (for the other
> > contribs), the MD5s/SHA1s are different even though they are created from
> > the same source code (because the timestamps inside the JARs are different,
> > and for 2.9.1 another JDK compiler/platform was used). This interestingly
> > does not apply to lucene-core.jar in 3.0. Because of that I see no problem
> > with this maven release, even though they are not the original JAR files
> > from the binary distrib.
> >
> > What is not nice is that the svn revision number in the manifest is
> > different, but everything else is exactly the same; see my comments below
> > in earlier mails about changing the ant script to show the SVN rev of the
> > last changed file.
> >
> > So if nobody objects to releasing these rebuilt jar files, all signed by
> > my key, I would like to simply put them on the maven-rsync folder.
> >
> > Uwe
> >
> > -
> > Uwe Schindler
> > H.-H.-Meier-Allee 63, D-28213 Bremen
> > http://www.thetaphi.de
> > eMail: u...@thetaphi.de
> >
> >
> > > -Original Message-
> > > From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
> > > Sent: Tuesday, December 08, 2009 6:48 PM
> > > To: java-dev@lucene.apache.org
> > > Subject: Re: (NAG) Push fast-vector-highlighter mvn artifacts for 3.0
> > and
> > > 2.9
> > >
> > >
> > > : What to do now, any votes on adding the missing maven artifacts for
> > > : fast-vector-highlighter to 2.9.1 and 3.0.0 on the apache maven
> > > : repository?
> > >
> > > It's not even clear to me that anything special needs to be done before
> > > publishing those jars to maven.  2.9.1 and 3.0.0 were already voted on
> > > and released -- including all of the source code in them.
> > >
> > > The safest bet least likely to anger the process gods is just to call a
> > > vote (new thread with VOTE in the subject) and cast a vote ...
> > > considering the sources have already been reviewed it should go pretty
> > > quick.
> > >
> > > :
> > > : > I rebuilt the maven-dir for 2.9.1 and 3.0.0, merged them (3.0.0 is
> > > top-
> > > : > level
> > > : > version) and extracted only fast-vector-highlighter:
> > > : >
> > > : > http://people.apache.org/~uschindler/staging-area/
> > > : >
> > > : > I will copy this dir to the maven folder on people.a.o, when I got
> > > votes
> > > : > (how many)? At least someone should check the signatures.
> > > : >
> > > : > By the way, we have a small error in our ant build.xml that
> inserts
> > > : > svnversion into the manifest file. This version is not the version

[jira] Commented: (LUCENE-2126) Split up IndexInput and IndexOutput into DataInput and DataOutput

2009-12-09 Thread Michael Busch (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788001#action_12788001
 ] 

Michael Busch commented on LUCENE-2126:
---

The main reason why I'd like to separate DataInput/Output from 
IndexInput/Output now is LUCENE-2125. Users should be able to implement methods 
that serialize/deserialize attributes into/from a postinglist. These methods 
should only be able to call the read/write methods (which this issue moves to 
DataInput/Output), but not methods like close(), seek(), etc.

Thanks for spending time reviewing this and giving feedback from Lucy land, 
Marvin!
I think I will go ahead and commit this, and once we see a need to allow users 
to extend DataInput/Output outside of Lucene we can go ahead and make the 
additional changes that are mentioned in your and my comments here.

So I will commit this tomorrow if nobody objects.

> Split up IndexInput and IndexOutput into DataInput and DataOutput
> -
>
> Key: LUCENE-2126
> URL: https://issues.apache.org/jira/browse/LUCENE-2126
> Project: Lucene - Java
>  Issue Type: Improvement
>Affects Versions: Flex Branch
>Reporter: Michael Busch
>Assignee: Michael Busch
>Priority: Minor
> Fix For: Flex Branch
>
> Attachments: lucene-2126.patch
>
>
> I'd like to introduce the two new classes DataInput and DataOutput
> that contain all methods from IndexInput and IndexOutput that actually
> decode or encode data, such as readByte()/writeByte(),
> readVInt()/writeVInt().
> Methods like getFilePointer(), seek(), close(), etc., which are not
> related to data encoding, but to files as input/output source stay in
> IndexInput/IndexOutput.
> This patch also changes ByteSliceReader/ByteSliceWriter to extend
> DataInput/DataOutput. Previously ByteSliceReader implemented the
> methods that stay in IndexInput by throwing RuntimeExceptions.
> See also LUCENE-2125.
> All tests pass.
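
For illustration, a rough sketch of the shape of the split described above 
(class names from the issue, method lists abridged; this is not the actual 
patch):

{code}
import java.io.Closeable;
import java.io.IOException;

// Rough shape of the proposed split; not the patch itself.
abstract class DataInput {
  public abstract byte readByte() throws IOException;   // primitive decoding only

  public int readVInt() throws IOException {
    // Standard Lucene variable-length int: 7 data bits per byte, high bit = "more".
    byte b = readByte();
    int i = b & 0x7F;
    for (int shift = 7; (b & 0x80) != 0; shift += 7) {
      b = readByte();
      i |= (b & 0x7F) << shift;
    }
    return i;
  }
}

abstract class IndexInput extends DataInput implements Closeable {
  // File-oriented methods stay here, out of reach of code that only sees a DataInput.
  public abstract long getFilePointer();
  public abstract void seek(long pos) throws IOException;
  public abstract void close() throws IOException;
  public abstract long length();
}
{code}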




[jira] Commented: (LUCENE-2138) Allow custom index readers when using IndexWriter.getReader

2009-12-09 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788018#action_12788018
 ] 

Michael McCandless commented on LUCENE-2138:


Could we maybe instead factor ReaderPool out of IW, and somehow enable this 
extensibility there?

This would be the first step in LUCENE-2026, I guess.

The mergedSegmentWarmer should then also go into ReaderPool.


> Allow custom index readers when using IndexWriter.getReader
> ---
>
> Key: LUCENE-2138
> URL: https://issues.apache.org/jira/browse/LUCENE-2138
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: Index
>Affects Versions: 3.0
>Reporter: Jason Rutherglen
>Priority: Minor
> Fix For: 3.1
>
> Attachments: LUCENE-2138.patch
>
>
> This is needed for backwards compatible support with Solr, and is a spin-off 
> from SOLR-1606.




[jira] Resolved: (LUCENE-2107) Add contrib/fast-vector-highlighter to Maven central repo

2009-12-09 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-2107.
-

   Resolution: Fixed
Fix Version/s: 2.9.1
   3.0

The missing maven artifacts for the fast-vector-highlighter contrib of
Lucene Java in version 2.9.1 and 3.0.0 are now available at:

http://repo1.maven.org/maven2/org/apache/lucene/
http://repo2.maven.org/maven2/org/apache/lucene/


> Add contrib/fast-vector-highlighter to Maven central repo
> -
>
> Key: LUCENE-2107
> URL: https://issues.apache.org/jira/browse/LUCENE-2107
> Project: Lucene - Java
>  Issue Type: Task
>  Components: contrib/*
>Affects Versions: 2.9.1, 3.0
>Reporter: Chas Emerick
>Assignee: Simon Willnauer
> Fix For: 3.0, 2.9.1
>
> Attachments: LUCENE-2107.patch
>
>
> I'm not at all familiar with the Lucene build/deployment process, but it 
> would be very nice if releases of the fast vector highlighter were pushed to 
> the maven central repository, as is done with other contrib modules.
> (Issue filed at the request of Grant Ingersoll.)




Re: [VOTE] Push fast-vector-highlighter mvn artifacts for 3.0.0 and 2.9.1

2009-12-09 Thread Simon Willnauer
nice - I closed the issue.
thanks uwe


[jira] Created: (LUCENE-2139) Cleanup and Improvement of Spatial Contrib

2009-12-09 Thread Chris Male (JIRA)
Cleanup and Improvement of Spatial Contrib
--

 Key: LUCENE-2139
 URL: https://issues.apache.org/jira/browse/LUCENE-2139
 Project: Lucene - Java
  Issue Type: Improvement
  Components: contrib/spatial
Affects Versions: 3.1
Reporter: Chris Male


The current spatial contrib can be improved by adding documentation, tests, 
removing unused classes and code, repackaging the classes and improving the 
performance of the distance filtering.  The latter will incorporate the 
multi-threaded functionality introduced in LUCENE-1732.  

Other improvements involve adding better support for different distance units, 
different distance calculators and different data formats (whether it be 
lat/long fields, geohashes, or something else in the future).

Patch to be added soon.




[jira] Assigned: (LUCENE-2139) Cleanup and Improvement of Spatial Contrib

2009-12-09 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer reassigned LUCENE-2139:
---

Assignee: Simon Willnauer

> Cleanup and Improvement of Spatial Contrib
> --
>
> Key: LUCENE-2139
> URL: https://issues.apache.org/jira/browse/LUCENE-2139
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: contrib/spatial
>Affects Versions: 3.1
>Reporter: Chris Male
>Assignee: Simon Willnauer
>
> The current spatial contrib can be improved by adding documentation, tests, 
> removing unused classes and code, repackaging the classes and improving the 
> performance of the distance filtering.  The latter will incorporate the 
> multi-threaded functionality introduced in LUCENE-1732.  
> Other improvements involve adding better support for different distance 
> units, different distance calculators and different data formats (whether it 
> be lat/long fields, geohashes, or something else in the future).
> Patch to be added soon.




[jira] Commented: (LUCENE-2139) Cleanup and Improvement of Spatial Contrib

2009-12-09 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788034#action_12788034
 ] 

Simon Willnauer commented on LUCENE-2139:
-

Can't wait to see your patch - it's gonna be huge I guess :) 

I will be here to help you split it apart and get your good work into 
contrib/spatial.

> Cleanup and Improvement of Spatial Contrib
> --
>
> Key: LUCENE-2139
> URL: https://issues.apache.org/jira/browse/LUCENE-2139
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: contrib/spatial
>Affects Versions: 3.1
>Reporter: Chris Male
>Assignee: Simon Willnauer
>
> The current spatial contrib can be improved by adding documentation, tests, 
> removing unused classes and code, repackaging the classes and improving the 
> performance of the distance filtering.  The latter will incorporate the 
> multi-threaded functionality introduced in LUCENE-1732.  
> Other improvements involve adding better support for different distance 
> units, different distance calculators and different data formats (whether it 
> be lat/long fields, geohashes, or something else in the future).
> Patch to be added soon.




[jira] Commented: (LUCENE-1512) Incorporate GeoHash in contrib/spatial

2009-12-09 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788037#action_12788037
 ] 

Simon Willnauer commented on LUCENE-1512:
-

Is this issue still relevant? It seems like it has already been committed.

> Incorporate GeoHash in contrib/spatial
> --
>
> Key: LUCENE-1512
> URL: https://issues.apache.org/jira/browse/LUCENE-1512
> Project: Lucene - Java
>  Issue Type: New Feature
>  Components: contrib/spatial
>Reporter: patrick o'leary
>Assignee: Ryan McKinley
>Priority: Minor
> Attachments: LUCENE-1512.patch, LUCENE-1512.patch
>
>
> Based on comments from Yonik and Ryan in SOLR-773 
> GeoHash provides the ability to store latitude / longitude values in a single 
> consistent hash field.
> This eliminates the need to maintain 2 field caches for latitude / longitude 
> fields, reducing the size of an index
> and the amount of memory needed for a spatial search.




[jira] Commented: (LUCENE-2124) move JDK collation to core, ICU collation to ICU contrib

2009-12-09 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788040#action_12788040
 ] 

Simon Willnauer commented on LUCENE-2124:
-

Robert, patch looks good to me!
Go for it!

> move JDK collation to core, ICU collation to ICU contrib
> 
>
> Key: LUCENE-2124
> URL: https://issues.apache.org/jira/browse/LUCENE-2124
> Project: Lucene - Java
>  Issue Type: Task
>  Components: contrib/*, Search
>Reporter: Robert Muir
>Assignee: Robert Muir
>Priority: Minor
> Fix For: 3.1
>
> Attachments: LUCENE-2124.patch, LUCENE-2124.patch
>
>
> As mentioned on the list, I propose we move the JDK-based 
> CollationKeyFilter/CollationKeyAnalyzer, currently located in 
> contrib/collation into core for collation support (language-sensitive sorting)
> These are not much code (the heavy duty stuff is already in core, 
> IndexableBinaryString). 
> And I would also like to move the 
> ICUCollationKeyFilter/ICUCollationKeyAnalyzer (along with the jar file they 
> depend on) also currently located in contrib/collation into a contrib/icu.
> This way, we can start looking at integrating other functionality from ICU 
> into a fully-fleshed out icu contrib.




[jira] Resolved: (LUCENE-1512) Incorporate GeoHash in contrib/spatial

2009-12-09 Thread Grant Ingersoll (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Ingersoll resolved LUCENE-1512.
-

   Resolution: Fixed
Fix Version/s: 2.9
Lucene Fields:   (was: [New])

> Incorporate GeoHash in contrib/spatial
> --
>
> Key: LUCENE-1512
> URL: https://issues.apache.org/jira/browse/LUCENE-1512
> Project: Lucene - Java
>  Issue Type: New Feature
>  Components: contrib/spatial
>Reporter: patrick o'leary
>Assignee: Ryan McKinley
>Priority: Minor
> Fix For: 2.9
>
> Attachments: LUCENE-1512.patch, LUCENE-1512.patch
>
>
> Based on comments from Yonik and Ryan in SOLR-773 
> GeoHash provides the ability to store latitude / longitude values in a single 
> consistent hash field.
> This eliminates the need to maintain 2 field caches for latitude / longitude 
> fields, reducing the size of an index
> and the amount of memory needed for a spatial search.




[jira] Updated: (LUCENE-2123) Highlighter fails to highlight FuzzyQuery

2009-12-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-2123:
--

Attachment: LUCENE-2123-flex.patch

Here is a refactoring of the rewrite modes in Flex. I'll port it to trunk, too.

FuzzyQuery now uses TOP_TERMS_SCORING_BOOLEAN_REWRITE per default, which is part 
of MTQ and can now also be used by e.g. MoreLikeThis.
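
For illustration, a sketch of how a caller might use such a rewrite once 
FuzzyQuery no longer forbids changing the rewrite method (field name, term and 
similarity are arbitrary example values; SCORING_BOOLEAN_QUERY_REWRITE is the 
existing MTQ constant, while the exact constant and API for this issue are 
whatever the patch commits):

{code}
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.MultiTermQuery;
import org.apache.lucene.search.Query;

public class FuzzyRewriteExample {
  // Sketch only: assumes FuzzyQuery allows setRewriteMethod after this change.
  public static Query highlightableFuzzy(IndexReader reader) throws IOException {
    FuzzyQuery fq = new FuzzyQuery(new Term("body", "lucene"), 0.6f);
    // Switch to a BooleanQuery-producing rewrite so the highlighter can
    // extract terms from the rewritten query.
    fq.setRewriteMethod(MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE);
    return fq.rewrite(reader);
  }
}
{code}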

> Highlighter fails to highlight FuzzyQuery
> -
>
> Key: LUCENE-2123
> URL: https://issues.apache.org/jira/browse/LUCENE-2123
> Project: Lucene - Java
>  Issue Type: Bug
>  Components: contrib/highlighter
>Affects Versions: Flex Branch
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: Flex Branch
>
> Attachments: LUCENE-2123-flex.patch
>
>
> As FuzzyQuery does not allow to change the rewrite mode, highlighter fails 
> with UOE in flex since LUCENE-2110, because it changes the rewrite mode to 
> Boolean query. The fix is: Allow MTQ to change rewrite method and make 
> FUZZY_REWRITE public for that.




[jira] Updated: (LUCENE-2123) Highlighter fails to highlight FuzzyQuery

2009-12-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-2123:
--

Attachment: LUCENE-2123-flex.patch

More refactoring. Now AUTO_REWRITE also uses the new TermCollector. It becomes 
less and less code.

> Highlighter fails to highlight FuzzyQuery
> -
>
> Key: LUCENE-2123
> URL: https://issues.apache.org/jira/browse/LUCENE-2123
> Project: Lucene - Java
>  Issue Type: Bug
>  Components: contrib/highlighter
>Affects Versions: Flex Branch
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: Flex Branch
>
> Attachments: LUCENE-2123-flex.patch, LUCENE-2123-flex.patch
>
>
> As FuzzyQuery does not allow to change the rewrite mode, highlighter fails 
> with UOE in flex since LUCENE-2110, because it changes the rewrite mode to 
> Boolean query. The fix is: Allow MTQ to change rewrite method and make 
> FUZZY_REWRITE public for that.




[jira] Updated: (LUCENE-2123) Highlighter fails to highlight FuzzyQuery

2009-12-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-2123:
--

Attachment: LUCENE-2123-flex.patch

Now I also made the strange anonymous inner class a named inner class, to get 
rid of the strange boolean holder implemented by an array.

> Highlighter fails to highlight FuzzyQuery
> -
>
> Key: LUCENE-2123
> URL: https://issues.apache.org/jira/browse/LUCENE-2123
> Project: Lucene - Java
>  Issue Type: Bug
>  Components: contrib/highlighter
>Affects Versions: Flex Branch
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: Flex Branch
>
> Attachments: LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, 
> LUCENE-2123-flex.patch
>
>
> As FuzzyQuery does not allow to change the rewrite mode, highlighter fails 
> with UOE in flex since LUCENE-2110, because it changes the rewrite mode to 
> Boolean query. The fix is: Allow MTQ to change rewrite method and make 
> FUZZY_REWRITE public for that.




[jira] Commented: (LUCENE-2124) move JDK collation to core, ICU collation to ICU contrib

2009-12-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788054#action_12788054
 ] 

Robert Muir commented on LUCENE-2124:
-

Committed revision 888780.

I will keep this open until i regen the website and commit the changes.

> move JDK collation to core, ICU collation to ICU contrib
> 
>
> Key: LUCENE-2124
> URL: https://issues.apache.org/jira/browse/LUCENE-2124
> Project: Lucene - Java
>  Issue Type: Task
>  Components: contrib/*, Search
>Reporter: Robert Muir
>Assignee: Robert Muir
>Priority: Minor
> Fix For: 3.1
>
> Attachments: LUCENE-2124.patch, LUCENE-2124.patch
>
>
> As mentioned on the list, I propose we move the JDK-based 
> CollationKeyFilter/CollationKeyAnalyzer, currently located in 
> contrib/collation into core for collation support (language-sensitive sorting)
> These are not much code (the heavy duty stuff is already in core, 
> IndexableBinaryString). 
> And I would also like to move the 
> ICUCollationKeyFilter/ICUCollationKeyAnalyzer (along with the jar file they 
> depend on) also currently located in contrib/collation into a contrib/icu.
> This way, we can start looking at integrating other functionality from ICU 
> into a fully-fleshed out icu contrib.




[jira] Resolved: (LUCENE-2117) Fix SnowballAnalyzer casing behavior for Turkish Language

2009-12-09 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-2117.
-

Resolution: Fixed

committed in revision 888787

thanks robert

> Fix SnowballAnalyzer casing behavior for Turkish Language
> -
>
> Key: LUCENE-2117
> URL: https://issues.apache.org/jira/browse/LUCENE-2117
> Project: Lucene - Java
>  Issue Type: Bug
>  Components: contrib/*
>Affects Versions: 3.0
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 3.1
>
> Attachments: LUCENE-2117.patch, LUCENE-2117.patch
>
>
> LUCENE-2102 added a new TokenFilter to handle Turkish unique casing behavior 
> correctly. We should fix the casing behavior in SnowballAnalyzer too as it 
> supports a TurkishStemmer.




[jira] Commented: (LUCENE-2104) IndexWriter.unlock does nothing if NativeFSLockFactory is used

2009-12-09 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788077#action_12788077
 ] 

Shai Erera commented on LUCENE-2104:


I think that if I move those lines (in NativeFSLock.release()):
{code}
  if (!path.delete())
    throw new LockReleaseFailedException("failed to delete " + path);
{code}
to outside the if(lockExists()) section, this should work? Because then the new 
NativeFSLock will attempt to release a lock that's held by someone else, and 
fail. If the lock exists for some reason, but nobody is holding it, that line 
should succeed.

In order to test it, I think I'll need to spawn two processes, which is 
trickier. Let me know what you think about the fix in the meantime.
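
For illustration, a sketch of the proposed shape of release() (field and method 
names such as lock, path and lockExists() are assumed to match the existing 
NativeFSLock; this is not the committed fix):

{code}
// Sketch of the proposed change to NativeFSLock.release(); names assumed.
public synchronized void release() throws IOException {
  if (lockExists()) {
    lock.release();   // release the OS-level FileLock we actually hold
    lock = null;
    // ... close the channel and RandomAccessFile as the existing code does ...
  }
  // Moved out of the lockExists() check: always try to delete the lock file,
  // so that a fresh NativeFSLock (as created by IndexWriter.unlock) either
  // removes a stale lock file or fails loudly if another process holds it.
  if (!path.delete()) {
    throw new LockReleaseFailedException("failed to delete " + path);
  }
}
{code}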

> IndexWriter.unlock does nothing if NativeFSLockFactory is used
> ---
>
> Key: LUCENE-2104
> URL: https://issues.apache.org/jira/browse/LUCENE-2104
> Project: Lucene - Java
>  Issue Type: Bug
>Reporter: Shai Erera
> Fix For: 3.1
>
>
> If NativeFSLockFactory is used, IndexWriter.unlock will return, silently 
> doing nothing. The reason is that NativeFSLockFactory's makeLock always 
> creates a new NativeFSLock. NativeFSLock's release first checks if its lock 
> is not null. However, only if obtain() is called, that lock is not null. So 
> release actually does nothing, and so IndexWriter.unlock does not delete the 
> lock, or fail w/ exception.
> This is only a problem in NativeFSLock, and not in other Lock 
> implementations, at least as I was able to see.
> Need to think first how to reproduce in a test, and then fix it. I'll work on 
> it.




Patch for LUCENE-2122 ready to go

2009-12-09 Thread Erick Erickson
Does someone with commit rights want to pick this up? I've incorporated the
changes suggested by Robert (Thanks!) and think it's ready to go.

Erick


[jira] Updated: (LUCENE-2100) Make contrib analyzers final

2009-12-09 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-2100:


Attachment: LUCENE-2100.patch

Updated to latest trunk

> Make contrib analyzers final
> 
>
> Key: LUCENE-2100
> URL: https://issues.apache.org/jira/browse/LUCENE-2100
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: contrib/analyzers
>Affects Versions: 1.9, 2.0.0, 2.1, 2.2, 2.3, 2.3.1, 2.3.2, 2.4, 2.4.1, 
> 2.9, 2.9.1, 3.0
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 3.1
>
> Attachments: LUCENE-2100.patch, LUCENE-2100.patch
>
>
> The analyzers in contrib/analyzers should all be marked final. None of the 
> Analyzers should ever be subclassed - users should build their own analyzers 
> if a different combination of filters and Tokenizers is desired.
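
For illustration, a minimal example of the recommended approach: composing an 
Analyzer directly from a Tokenizer and TokenFilters instead of subclassing a 
contrib analyzer (the chosen filters are arbitrary, and constructor signatures 
shifted slightly between 2.9, 3.0 and later releases):

{code}
import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.PorterStemFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

// An example combination only; pick whatever tokenizer/filters you need.
public final class MyAnalyzer extends Analyzer {
  @Override
  public TokenStream tokenStream(String fieldName, Reader reader) {
    TokenStream stream = new WhitespaceTokenizer(reader);
    stream = new LowerCaseFilter(stream);
    stream = new PorterStemFilter(stream);
    return stream;
  }
}
{code}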




[jira] Commented: (LUCENE-2100) Make contrib analyzers final

2009-12-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788084#action_12788084
 ] 

Robert Muir commented on LUCENE-2100:
-

patch looks good to me!

> Make contrib analyzers final
> 
>
> Key: LUCENE-2100
> URL: https://issues.apache.org/jira/browse/LUCENE-2100
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: contrib/analyzers
>Affects Versions: 1.9, 2.0.0, 2.1, 2.2, 2.3, 2.3.1, 2.3.2, 2.4, 2.4.1, 
> 2.9, 2.9.1, 3.0
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 3.1
>
> Attachments: LUCENE-2100.patch, LUCENE-2100.patch
>
>
> The analyzers in contrib/analyzers should all be marked final. None of the 
> Analyzers should ever be subclassed - users should build their own analyzers 
> if a different combination of filters and Tokenizers is desired.




[jira] Resolved: (LUCENE-2100) Make contrib analyzers final

2009-12-09 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-2100.
-

Resolution: Fixed

committed in revision 888799

thanks robert for review

> Make contrib analyzers final
> 
>
> Key: LUCENE-2100
> URL: https://issues.apache.org/jira/browse/LUCENE-2100
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: contrib/analyzers
>Affects Versions: 1.9, 2.0.0, 2.1, 2.2, 2.3, 2.3.1, 2.3.2, 2.4, 2.4.1, 
> 2.9, 2.9.1, 3.0
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 3.1
>
> Attachments: LUCENE-2100.patch, LUCENE-2100.patch
>
>
> The analyzers in contrib/analyzers should all be marked final. None of the 
> Analyzers should ever be subclassed - users should build their own analyzers 
> if a different combination of filters and Tokenizers is desired.




[jira] Resolved: (LUCENE-2124) move JDK collation to core, ICU collation to ICU contrib

2009-12-09 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-2124.
-

Resolution: Fixed

website updated in revision 03

> move JDK collation to core, ICU collation to ICU contrib
> 
>
> Key: LUCENE-2124
> URL: https://issues.apache.org/jira/browse/LUCENE-2124
> Project: Lucene - Java
>  Issue Type: Task
>  Components: contrib/*, Search
>Reporter: Robert Muir
>Assignee: Robert Muir
>Priority: Minor
> Fix For: 3.1
>
> Attachments: LUCENE-2124.patch, LUCENE-2124.patch
>
>
> As mentioned on the list, I propose we move the JDK-based 
> CollationKeyFilter/CollationKeyAnalyzer, currently located in 
> contrib/collation into core for collation support (language-sensitive sorting)
> These are not much code (the heavy duty stuff is already in core, 
> IndexableBinaryString). 
> And I would also like to move the 
> ICUCollationKeyFilter/ICUCollationKeyAnalyzer (along with the jar file they 
> depend on) also currently located in contrib/collation into a contrib/icu.
> This way, we can start looking at integrating other functionality from ICU 
> into a fully-fleshed out icu contrib.




[jira] Commented: (LUCENE-2126) Split up IndexInput and IndexOutput into DataInput and DataOutput

2009-12-09 Thread Marvin Humphrey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788098#action_12788098
 ] 

Marvin Humphrey commented on LUCENE-2126:
-

> These methods should only be able to call the read/write methods (which this
> issue moves to DataInput/Output), but not methods like close(), seek(), etc.

Ah, so that's what it is.  

In that case, let me vote my (non-binding) -1.  I don't believe that the
enforcement of such a restriction justifies the complexity cost of adding a
new class to the public API.

First, adding yet another class to the hierarchy steepens the learning curve
for users and contributors.  If you aren't in the rarefied echelon of
exceptional brilliance occupied by people named Michael who work for IBM :),
the gradual accumulation of complexity in the Lucene code base matters.  Inch
by inch, things move out of reach.

Second, changing things now for what seems to me like a minor reason makes it
harder to refactor the class hierarchy in the future when other, more
important reasons are inevitably discovered.

For LUCENE-2125, I recommend two possible options. 

  * Do nothing and assume that the sort of advanced user who writes a posting 
codec won't do something incredibly stupid like call indexInput.close().
  * Add a note to the docs for writing posting codecs indicating which sort
of IO methods you ought not to call.

> once we see a need to allow users to extend DataInput/Output outside of
> Lucene we can go ahead and make the additional changes that are mentioned in
> your and my comments here.

In Lucy, there are three tiers of IO usage:

* For low-level IO, use FileHandle.
* For most applications, use InStream's encoder/decoder methods.
* For performance-critical inner-loop material (e.g. posting decoders, 
  SortCollector), access the raw memory-mapped IO buffer using
  InStream_Buf()/InStream_Advance_Buf() and use static inline functions 
  such as NumUtil_decode_c32 (which does no bounds checking) from
  Lucy::Util::NumberUtils.

While you can extend InStream to add a codec, that's not generally the best
way to go about it, because adding a method to InStream requires that all of
your users both use your InStream class and use a subclassed Folder which
overrides the Folder_Open_In() factory method (analogous to 
Directory.openInput()).  Better is to use the extension point provided by
InStream_Buf()/InStream_Advance_Buf() and write a utility function which
accepts an InStream as an argument.

I don't expect and am not advocating that Lucene adopt the same IO hierarchy
as Lucy, but I wanted to provide an example of other reasons why you might
change things.  (What I'd really like to see is for Lucene to come up with
something *better* than the Lucy IO hierarchy.)  

One of the reasons Lucene has so many backwards compatibility headaches is
because the public APIs are so extensive and thus constitute such an elaborate
set of backwards compatibility promises.  IMO, DataInput and DataOutput do 
not offer sufficient benefit to compensate for the increased intricacy they add 
to that backwards compatibility contract.


> Split up IndexInput and IndexOutput into DataInput and DataOutput
> -
>
> Key: LUCENE-2126
> URL: https://issues.apache.org/jira/browse/LUCENE-2126
> Project: Lucene - Java
>  Issue Type: Improvement
>Affects Versions: Flex Branch
>Reporter: Michael Busch
>Assignee: Michael Busch
>Priority: Minor
> Fix For: Flex Branch
>
> Attachments: lucene-2126.patch
>
>
> I'd like to introduce the two new classes DataInput and DataOutput
> that contain all methods from IndexInput and IndexOutput that actually
> decode or encode data, such as readByte()/writeByte(),
> readVInt()/writeVInt().
> Methods like getFilePointer(), seek(), close(), etc., which are not
> related to data encoding, but to files as input/output source stay in
> IndexInput/IndexOutput.
> This patch also changes ByteSliceReader/ByteSliceWriter to extend
> DataInput/DataOutput. Previously ByteSliceReader implemented the
> methods that stay in IndexInput by throwing RuntimeExceptions.
> See also LUCENE-2125.
> All tests pass.




[jira] Assigned: (LUCENE-2122) Use JUnit4 capabilites for more thorough Locale testing for classes deriving from LocalizedTestCase

2009-12-09 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir reassigned LUCENE-2122:
---

Assignee: Robert Muir  (was: Erick Erickson)

> Use JUnit4 capabilites for more thorough Locale testing for classes deriving 
> from LocalizedTestCase
> ---
>
> Key: LUCENE-2122
> URL: https://issues.apache.org/jira/browse/LUCENE-2122
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: Other
>Affects Versions: 3.1
>Reporter: Erick Erickson
>Assignee: Robert Muir
>Priority: Minor
> Fix For: 3.1
>
> Attachments: LUCENE-2122-r2.patch, LUCENE-2122-r3.patch, 
> LUCENE-2122.patch
>
>
> Use the @Parameterized capabilities of Junit4 to allow more extensive testing 
> of Locales.
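
For illustration, a sketch of the JUnit4 @Parameterized pattern referred to 
above, running one test class once per available Locale (class name and test 
body are placeholders, not the actual patch):

{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Locale;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Runs the test once per available Locale; body is a placeholder.
@RunWith(Parameterized.class)
public class LocalizedExampleTest {

  @Parameters
  public static Collection<Object[]> locales() {
    List<Object[]> args = new ArrayList<Object[]>();
    for (Locale locale : Locale.getAvailableLocales()) {
      args.add(new Object[] { locale });
    }
    return args;
  }

  private final Locale locale;

  public LocalizedExampleTest(Locale locale) {
    this.locale = locale;
  }

  @Test
  public void testLocaleSensitiveCode() {
    Locale previous = Locale.getDefault();
    Locale.setDefault(locale);
    try {
      // locale-sensitive assertions go here (e.g. DateTools or QueryParser round-trips)
    } finally {
      Locale.setDefault(previous);
    }
  }
}
{code}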




[jira] Commented: (LUCENE-2122) Use JUnit4 capabilites for more thorough Locale testing for classes deriving from LocalizedTestCase

2009-12-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788100#action_12788100
 ] 

Robert Muir commented on LUCENE-2122:
-

Hi Erick, in the Date tools test I think you can delete the public static 
Collection data(); you might have accidentally included it?


> Use JUnit4 capabilites for more thorough Locale testing for classes deriving 
> from LocalizedTestCase
> ---
>
> Key: LUCENE-2122
> URL: https://issues.apache.org/jira/browse/LUCENE-2122
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: Other
>Affects Versions: 3.1
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 3.1
>
> Attachments: LUCENE-2122-r2.patch, LUCENE-2122-r3.patch, 
> LUCENE-2122.patch
>
>
> Use the @Parameterized capabilities of Junit4 to allow more extensive testing 
> of Locales.




Re: [jira] Commented: (LUCENE-2122) Use JUnit4 capabilites for more thorough Locale testing for classes deriving from LocalizedTestCase

2009-12-09 Thread Erick Erickson
Sh. I'll look at it again tonight



[jira] Updated: (LUCENE-2139) Cleanup and Improvement of Spatial Contrib

2009-12-09 Thread Chris Male (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Male updated LUCENE-2139:
---

Attachment: LUCENE-2139.patch

Added patch.

A couple of TODOs are still noted in the patch related to distances.  We need to 
decide what distances we are going to use for the radius and circumference of 
the Earth and then use them in SpatialConstants.  Currently the 
SpatialConstants values are taken from Wikipedia and other sites, yet differ 
from some of the distances in the code.

Also the patch doesn't seem to remove a couple of empty packages.  Too many 
changes in one patch confusing the IDE, I think.  Help cleaning this up would be 
appreciated.

> Cleanup and Improvement of Spatial Contrib
> --
>
> Key: LUCENE-2139
> URL: https://issues.apache.org/jira/browse/LUCENE-2139
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: contrib/spatial
>Affects Versions: 3.1
>Reporter: Chris Male
>Assignee: Simon Willnauer
> Attachments: LUCENE-2139.patch
>
>
> The current spatial contrib can be improved by adding documentation, tests, 
> removing unused classes and code, repackaging the classes and improving the 
> performance of the distance filtering.  The latter will incorporate the 
> multi-threaded functionality introduced in LUCENE-1732.  
> Other improvements involve adding better support for different distance 
> units, different distance calculators and different data formats (whether it 
> be lat/long fields, geohashes, or something else in the future).
> Patch to be added soon.




[jira] Commented: (LUCENE-2139) Cleanup and Improvement of Spatial Contrib

2009-12-09 Thread Chris Male (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788130#action_12788130
 ] 

Chris Male commented on LUCENE-2139:


I have also included LUCENE-1934 in this, and tried to include LUCENE-1930 but 
was unable to get 1930 to work.

> Cleanup and Improvement of Spatial Contrib
> --
>
> Key: LUCENE-2139
> URL: https://issues.apache.org/jira/browse/LUCENE-2139
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: contrib/spatial
>Affects Versions: 3.1
>Reporter: Chris Male
>Assignee: Simon Willnauer
> Attachments: LUCENE-2139.patch
>
>
> The current spatial contrib can be improved by adding documentation, tests, 
> removing unused classes and code, repackaging the classes and improving the 
> performance of the distance filtering.  The latter will incorporate the 
> multi-threaded functionality introduced in LUCENE-1732.  
> Other improvements involve adding better support for different distance 
> units, different distance calculators and different data formats (whether it 
> be lat/long fields, geohashes, or something else in the future).
> Patch to be added soon.




[jira] Updated: (LUCENE-2123) Move FuzzyQuery rewrite as separate RewriteMode into MTQ, was: Highlighter fails to highlight FuzzyQuery

2009-12-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-2123:
--

Description: 
As FuzzyQuery does not allow to change the rewrite mode, highlighter fails with 
UOE in flex since LUCENE-2110, because it changes the rewrite mode to Boolean 
query. The fix is: Allow MTQ to change rewrite method and make FUZZY_REWRITE 
public for that.

The rewrite mode will live in MTQ as TOP_TERMS_SCORING_BOOLEAN_REWRITE. Also 
the code will be refactored to make heavy reuse of term enumeration code and 
only plug in the PQ for filtering the top terms.

  was:
As FuzzyQuery does not allow to change the rewrite mode, highlighter fails with 
UOE in flex since LUCENE-2110, because it changes the rewrite mode to Boolean 
query. The fix is: Allow MTQ to change rewrite method and make FUZZY_REWRITE 
public for that.


Summary: Move FuzzyQuery rewrite as separate RewriteMode into MTQ, was: 
Highlighter fails to highlight FuzzyQuery  (was: Highlighter fails to highlight 
FuzzyQuery)

Trunk patch comes soon.

> Move FuzzyQuery rewrite as separate RewriteMode into MTQ, was: Highlighter 
> fails to highlight FuzzyQuery
> 
>
> Key: LUCENE-2123
> URL: https://issues.apache.org/jira/browse/LUCENE-2123
> Project: Lucene - Java
>  Issue Type: Bug
>  Components: contrib/highlighter
>Affects Versions: Flex Branch
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: Flex Branch
>
> Attachments: LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, 
> LUCENE-2123-flex.patch
>
>
> As FuzzyQuery does not allow to change the rewrite mode, highlighter fails 
> with UOE in flex since LUCENE-2110, because it changes the rewrite mode to 
> Boolean query. The fix is: Allow MTQ to change rewrite method and make 
> FUZZY_REWRITE public for that.
> The rewrite mode will live in MTQ as TOP_TERMS_SCORING_BOOLEAN_REWRITE. Also 
> the code will be refactored to make heavy reuse of term enumeration code and 
> only plug in the PQ for filtering the top terms.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Updated: (LUCENE-2123) Move FuzzyQuery rewrite as separate RewriteMode into MTQ, was: Highlighter fails to highlight FuzzyQuery

2009-12-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-2123:
--

Attachment: LUCENE-2123.patch
LUCENE-2123-flex.patch

Here are the final patches with updated JavaDocs. I want to apply them in this form 
to trunk and flex. If nobody objects, I will do this tomorrow.

With this patch, FuzzyQuery will always highlight correctly, as it can be 
switched to boolean query rewrite mode.
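
To make this concrete, here is a minimal sketch of the highlighting path the patch 
enables; it assumes the patch above (which makes FuzzyQuery's rewrite method 
switchable) and otherwise uses only the existing core and contrib highlighter APIs:

{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.MultiTermQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.util.Version;

// Sketch only: switch the fuzzy query to a boolean-style rewrite so the
// highlighter sees plain term queries it can extract from the rewritten query.
public class FuzzyHighlightSketch {
  public static String highlight(IndexReader reader, String field, String content)
      throws Exception {
    FuzzyQuery fuzzy = new FuzzyQuery(new Term(field, "lucene"), 0.5f);
    fuzzy.setRewriteMethod(MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE);
    // Rewriting against the reader expands the fuzzy terms into real TermQuerys.
    Query rewritten = fuzzy.rewrite(reader);
    Highlighter highlighter = new Highlighter(new QueryScorer(rewritten));
    return highlighter.getBestFragment(
        new StandardAnalyzer(Version.LUCENE_CURRENT), field, content);
  }
}
{code}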

> Move FuzzyQuery rewrite as separate RewriteMode into MTQ, was: Highlighter 
> fails to highlight FuzzyQuery
> 
>
> Key: LUCENE-2123
> URL: https://issues.apache.org/jira/browse/LUCENE-2123
> Project: Lucene - Java
>  Issue Type: Bug
>  Components: contrib/highlighter
>Affects Versions: Flex Branch
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: Flex Branch
>
> Attachments: LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, 
> LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, LUCENE-2123.patch
>
>
> As FuzzyQuery does not allow to change the rewrite mode, highlighter fails 
> with UOE in flex since LUCENE-2110, because it changes the rewrite mode to 
> Boolean query. The fix is: Allow MTQ to change rewrite method and make 
> FUZZY_REWRITE public for that.
> The rewrite mode will live in MTQ as TOP_TERMS_SCORING_BOOLEAN_REWRITE. Also 
> the code will be refactored to make heavy reuse of term enumeration code and 
> only plug in the PQ for filtering the top terms.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Commented: (LUCENE-2139) Cleanup and Improvement of Spatial Contrib

2009-12-09 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788169#action_12788169
 ] 

Simon Willnauer commented on LUCENE-2139:
-

Chris, I have a couple of issues with your patch. It seems that you renamed a 
couple of files, which for some reason doesn't work well with patches. I will 
comment on this again later.
The other thing is that you use Java 1.6 classes like 
[http://java.sun.com/javase/6/docs/api/java/util/concurrent/LinkedBlockingDeque.html|LinkedBlockingDeque],
 but we should try to keep the contrib compatible with Java 1.5.
Could you fix those 1.6 references please?

simon
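
For what it's worth, a 1.5-compatible swap is usually straightforward; the sketch 
below assumes the spatial code only needs a thread-safe FIFO work queue rather than 
true deque semantics (class and method names here are illustrative, not from the patch):

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// LinkedBlockingQueue has been available since Java 1.5, unlike the 1.6-only
// LinkedBlockingDeque, and covers the plain producer/consumer case.
public class WorkQueueSketch {
  private final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<Runnable>();

  public void submit(Runnable task) {
    tasks.offer(task);
  }

  public Runnable next() throws InterruptedException {
    return tasks.take(); // blocks until a task is available
  }
}
{code}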

> Cleanup and Improvement of Spatial Contrib
> --
>
> Key: LUCENE-2139
> URL: https://issues.apache.org/jira/browse/LUCENE-2139
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: contrib/spatial
>Affects Versions: 3.1
>Reporter: Chris Male
>Assignee: Simon Willnauer
> Attachments: LUCENE-2139.patch
>
>
> The current spatial contrib can be improved by adding documentation, tests, 
> removing unused classes and code, repackaging the classes and improving the 
> performance of the distance filtering.  The latter will incorporate the 
> multi-threaded functionality introduced in LUCENE-1732.  
> Other improvements involve adding better support for different distance 
> units, different distance calculators and different data formats (whether it 
> be lat/long fields, geohashes, or something else in the future).
> Patch to be added soon.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Resolved: (LUCENE-1606) Automaton Query/Filter (scalable regex)

2009-12-09 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-1606.
-

   Resolution: Fixed
Fix Version/s: (was: 3.1)
   Flex Branch

Committed revision 91.

> Automaton Query/Filter (scalable regex)
> ---
>
> Key: LUCENE-1606
> URL: https://issues.apache.org/jira/browse/LUCENE-1606
> Project: Lucene - Java
>  Issue Type: New Feature
>  Components: Search
>Reporter: Robert Muir
>Assignee: Robert Muir
>Priority: Minor
> Fix For: Flex Branch
>
> Attachments: automaton.patch, automatonMultiQuery.patch, 
> automatonmultiqueryfuzzy.patch, automatonMultiQuerySmart.patch, 
> automatonWithWildCard.patch, automatonWithWildCard2.patch, 
> BenchWildcard.java, LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, 
> LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, 
> LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, 
> LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, 
> LUCENE-1606-flex.patch, LUCENE-1606.patch, LUCENE-1606.patch, 
> LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, 
> LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, 
> LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, 
> LUCENE-1606.patch, LUCENE-1606_nodep.patch
>
>
> Attached is a patch for an AutomatonQuery/Filter (name can change if its not 
> suitable).
> Whereas the out-of-box contrib RegexQuery is nice, I have some very large 
> indexes (100M+ unique tokens) where queries are quite slow, 2 minutes, etc. 
> Additionally all of the existing RegexQuery implementations in Lucene are 
> really slow if there is no constant prefix. This implementation does not 
> depend upon constant prefix, and runs the same query in 640ms.
> Some use cases I envision:
>  1. lexicography/etc on large text corpora
>  2. looking for things such as urls where the prefix is not constant (http:// 
> or ftp://)
> The Filter uses the BRICS package (http://www.brics.dk/automaton/) to convert 
> regular expressions into a DFA. Then, the filter "enumerates" terms in a 
> special way, by using the underlying state machine. Here is my short 
> description from the comments:
>  The algorithm here is pretty basic. Enumerate terms but instead of a 
> binary accept/reject do:
>   
>  1. Look at the portion that is OK (did not enter a reject state in the 
> DFA)
>  2. Generate the next possible String and seek to that.
> the Query simply wraps the filter with ConstantScoreQuery.
> I did not include the automaton.jar inside the patch but it can be downloaded 
> from http://www.brics.dk/automaton/ and is BSD-licensed.
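
The enumeration idea described above can be sketched roughly as follows; the Dfa 
interface and the sorted-set term dictionary are stand-ins for the brics automaton 
and the real TermEnum, used here only to illustrate the accept-or-seek loop:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.SortedSet;

// Hypothetical illustration of "enumerate, accept what the DFA allows, and
// seek past everything it cannot possibly accept".
interface Dfa {
  /** True if the DFA accepts the whole term. */
  boolean accepts(String term);

  /** Smallest string greater than term that could still be accepted, or null. */
  String nextPossibleString(String term);
}

final class AutomatonEnumSketch {
  static List<String> matchingTerms(SortedSet<String> termDict, Dfa dfa) {
    List<String> hits = new ArrayList<String>();
    String term = termDict.isEmpty() ? null : termDict.first();
    while (term != null) {
      String seekTarget;
      if (dfa.accepts(term)) {
        hits.add(term);
        seekTarget = term + "\0"; // simply advance to the next term in order
      } else {
        seekTarget = dfa.nextPossibleString(term); // jump over the rejected range
        if (seekTarget == null) break;
      }
      SortedSet<String> tail = termDict.tailSet(seekTarget);
      term = tail.isEmpty() ? null : tail.first();
    }
    return hits;
  }
}
{code}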

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Commented: (LUCENE-1606) Automaton Query/Filter (scalable regex)

2009-12-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-1606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788189#action_12788189
 ] 

Robert Muir commented on LUCENE-1606:
-

btw, Thanks to Uwe, Mike, Mark for all the help here!


> Automaton Query/Filter (scalable regex)
> ---
>
> Key: LUCENE-1606
> URL: https://issues.apache.org/jira/browse/LUCENE-1606
> Project: Lucene - Java
>  Issue Type: New Feature
>  Components: Search
>Reporter: Robert Muir
>Assignee: Robert Muir
>Priority: Minor
> Fix For: Flex Branch
>
> Attachments: automaton.patch, automatonMultiQuery.patch, 
> automatonmultiqueryfuzzy.patch, automatonMultiQuerySmart.patch, 
> automatonWithWildCard.patch, automatonWithWildCard2.patch, 
> BenchWildcard.java, LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, 
> LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, 
> LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, 
> LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, 
> LUCENE-1606-flex.patch, LUCENE-1606.patch, LUCENE-1606.patch, 
> LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, 
> LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, 
> LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, 
> LUCENE-1606.patch, LUCENE-1606_nodep.patch
>
>
> Attached is a patch for an AutomatonQuery/Filter (name can change if its not 
> suitable).
> Whereas the out-of-box contrib RegexQuery is nice, I have some very large 
> indexes (100M+ unique tokens) where queries are quite slow, 2 minutes, etc. 
> Additionally all of the existing RegexQuery implementations in Lucene are 
> really slow if there is no constant prefix. This implementation does not 
> depend upon constant prefix, and runs the same query in 640ms.
> Some use cases I envision:
>  1. lexicography/etc on large text corpora
>  2. looking for things such as urls where the prefix is not constant (http:// 
> or ftp://)
> The Filter uses the BRICS package (http://www.brics.dk/automaton/) to convert 
> regular expressions into a DFA. Then, the filter "enumerates" terms in a 
> special way, by using the underlying state machine. Here is my short 
> description from the comments:
>  The algorithm here is pretty basic. Enumerate terms but instead of a 
> binary accept/reject do:
>   
>  1. Look at the portion that is OK (did not enter a reject state in the 
> DFA)
>  2. Generate the next possible String and seek to that.
> the Query simply wraps the filter with ConstantScoreQuery.
> I did not include the automaton.jar inside the patch but it can be downloaded 
> from http://www.brics.dk/automaton/ and is BSD-licensed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Updated: (LUCENE-2123) Move FuzzyQuery rewrite as separate RewriteMode into MTQ, was: Highlighter fails to highlight FuzzyQuery

2009-12-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-2123:
--

Attachment: LUCENE-2123.patch
LUCENE-2123-flex.patch

Here is the code as discussed on IRC:
It fixes the braindead LUCENE-504 code :-)

> Move FuzzyQuery rewrite as separate RewriteMode into MTQ, was: Highlighter 
> fails to highlight FuzzyQuery
> 
>
> Key: LUCENE-2123
> URL: https://issues.apache.org/jira/browse/LUCENE-2123
> Project: Lucene - Java
>  Issue Type: Bug
>  Components: contrib/highlighter
>Affects Versions: Flex Branch
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: Flex Branch
>
> Attachments: LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, 
> LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, 
> LUCENE-2123.patch, LUCENE-2123.patch
>
>
> As FuzzyQuery does not allow to change the rewrite mode, highlighter fails 
> with UOE in flex since LUCENE-2110, because it changes the rewrite mode to 
> Boolean query. The fix is: Allow MTQ to change rewrite method and make 
> FUZZY_REWRITE public for that.
> The rewrite mode will live in MTQ as TOP_TERMS_SCORING_BOOLEAN_REWRITE. Also 
> the code will be refactored to make heavy reuse of term enumeration code and 
> only plug in the PQ for filtering the top terms.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Updated: (LUCENE-2123) Move FuzzyQuery rewrite as separate RewriteMode into MTQ, was: Highlighter fails to highlight FuzzyQuery

2009-12-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-2123:
--

Attachment: LUCENE-2123.patch
LUCENE-2123-flex.patch

So, the last patch for today.

I optimized the PQ to reuse the ScoreTerm instance when the PQ is full. I think 
the rewrite modes are now as good as they can be for the current FuzzyQuery. The 
existing tests that exercise PQ overflow (by setting BQ.maxClauseCount to a very 
low value) still pass.
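
The reuse trick reads roughly like the sketch below (assuming the 3.x generified 
org.apache.lucene.util.PriorityQueue); the ScoreTerm fields and the collector are 
simplified assumptions, but insertWithOverflow really does hand back the dropped 
entry, which is what makes the reuse possible:

{code}
import org.apache.lucene.util.PriorityQueue;

// Simplified sketch, not the patch itself: reuse whatever instance falls out of
// the queue instead of allocating a new ScoreTerm per enumerated term.
final class ScoreTerm {
  String term;
  float score;
}

final class ScoreTermQueue extends PriorityQueue<ScoreTerm> {
  ScoreTermQueue(int size) {
    initialize(size);
  }

  @Override
  protected boolean lessThan(ScoreTerm a, ScoreTerm b) {
    // lower score (then higher term text) is evicted first
    if (a.score == b.score) {
      return a.term.compareTo(b.term) > 0;
    }
    return a.score < b.score;
  }
}

final class TopTermsCollectorSketch {
  private final ScoreTermQueue pq = new ScoreTermQueue(1024);
  private ScoreTerm spare = new ScoreTerm(); // reused once the queue is full

  void collect(String term, float score) {
    spare.term = term;
    spare.score = score;
    ScoreTerm dropped = pq.insertWithOverflow(spare);
    // If the queue was full, the dropped instance (possibly the one we just
    // offered) comes back and can be reused for the next term.
    spare = (dropped != null) ? dropped : new ScoreTerm();
  }
}
{code}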

> Move FuzzyQuery rewrite as separate RewriteMode into MTQ, was: Highlighter 
> fails to highlight FuzzyQuery
> 
>
> Key: LUCENE-2123
> URL: https://issues.apache.org/jira/browse/LUCENE-2123
> Project: Lucene - Java
>  Issue Type: Bug
>  Components: contrib/highlighter
>Affects Versions: Flex Branch
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: Flex Branch
>
> Attachments: LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, 
> LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, 
> LUCENE-2123-flex.patch, LUCENE-2123.patch, LUCENE-2123.patch, 
> LUCENE-2123.patch
>
>
> As FuzzyQuery does not allow to change the rewrite mode, highlighter fails 
> with UOE in flex since LUCENE-2110, because it changes the rewrite mode to 
> Boolean query. The fix is: Allow MTQ to change rewrite method and make 
> FUZZY_REWRITE public for that.
> The rewrite mode will live in MTQ as TOP_TERMS_SCORING_BOOLEAN_REWRITE. Also 
> the code will be refactored to make heavy reuse of term enumeration code and 
> only plug in the PQ for filtering the top terms.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Commented: (LUCENE-2039) Regex support and beyond in JavaCC QueryParser

2009-12-09 Thread David Kaelbling (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788340#action_12788340
 ] 

David Kaelbling commented on LUCENE-2039:
-

Currently the master parser doesn't pass settings down to the extension parsers 
(things like setAllowLeadingWildcard, setMultiTermRewriteMethod, etc.). Should it?
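
If it should, the plumbing could look roughly like the sketch below; copying 
between two QueryParser instances is only an illustration, since the actual 
extension parsers in this patch are not necessarily QueryParser subclasses:

{code}
import org.apache.lucene.queryParser.QueryParser;

// Hypothetical helper: mirror a few master-parser settings onto an extension
// parser before delegating to it.
final class ParserSettingsSketch {
  static void propagateSettings(QueryParser master, QueryParser extension) {
    extension.setAllowLeadingWildcard(master.getAllowLeadingWildcard());
    extension.setMultiTermRewriteMethod(master.getMultiTermRewriteMethod());
    extension.setLowercaseExpandedTerms(master.getLowercaseExpandedTerms());
    extension.setDefaultOperator(master.getDefaultOperator());
  }
}
{code}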


> Regex support and beyond in JavaCC QueryParser
> --
>
> Key: LUCENE-2039
> URL: https://issues.apache.org/jira/browse/LUCENE-2039
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: QueryParser
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 3.1
>
> Attachments: LUCENE-2039.patch, LUCENE-2039_field_ext.patch, 
> LUCENE-2039_field_ext.patch, LUCENE-2039_field_ext.patch, 
> LUCENE-2039_field_ext.patch, LUCENE-2039_field_ext.patch
>
>
> Since the early days the standard query parser was limited to the queries 
> living in core, adding other queries or extending the parser in any way 
> always forced people to change the grammar file and regenerate. Even if you 
> change the grammar you have to be extremely careful how you modify the parser 
> so that other parts of the standard parser are affected by customisation 
> changes. Eventually you had to live with all the limitation the current 
> parser has like tokenizing on whitespaces before a tokenizer / analyzer has 
> the chance to look at the tokens. 
> I was thinking about how to overcome the limitation and add regex support to 
> the query parser without introducing any dependency to core. I added a new 
> special character that basically prevents the parser from interpreting any of 
> the characters enclosed in the new special characters. I choose the forward 
> slash  '/' as the delimiter so that everything in between two forward slashes 
> is basically escaped and ignored by the parser. All chars embedded within 
> forward slashes are treated as one token even if it contains other special 
> chars like * []?{} or whitespaces. This token is subsequently passed to a 
> pluggable "parser extension" with builds a query from the embedded string. I 
> do not interpret the embedded string in any way but leave all the subsequent 
> work to the parser extension. Such an extension could be another full 
> featured query parser itself or simply a ctor call for regex query. The 
> interface remains quiet simple but makes the parser extendible in an easy way 
> compared to modifying the javaCC sources.
> The downsides of this patch is clearly that I introduce a new special char 
> into the syntax but I guess that would not be that much of a deal as it is 
> reflected in the escape method though. It would truly be nice to have more 
> than once extension an have this even more flexible so treat this patch as a 
> kickoff though.
> Another way of solving the problem with RegexQuery would be to move the JDK 
> version of regex into the core and simply have another method like:
> {code}
> protected Query newRegexQuery(Term t) {
>   ... 
> }
> {code}
> which I would like better as it would be more consistent with the idea of the 
> query parser to be a very strict and defined parser.
> I will upload a patch in a second which implements the extension based 
> approach I guess I will add a second patch with regex in core soon too.
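
For the "regex in core" variant, the override could look roughly like this; the 
contrib RegexQuery is used here as a stand-in, and wiring it to the parser's 
configured rewrite method is an assumption rather than part of the patch:

{code}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.regex.RegexQuery;
import org.apache.lucene.util.Version;

// Sketch of a newRegexQuery(Term) hook. Nothing in the shipped grammar calls
// this yet; it only illustrates what such a factory method could return.
class RegexCapableQueryParser extends QueryParser {
  RegexCapableQueryParser(Version v, String field, Analyzer a) {
    super(v, field, a);
  }

  protected Query newRegexQuery(Term t) {
    RegexQuery q = new RegexQuery(t);
    // reuse the parser's multi-term rewrite, like wildcard and prefix queries do
    q.setRewriteMethod(getMultiTermRewriteMethod());
    return q;
  }
}
{code}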

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Created: (LUCENE-2140) TopTermsScoringBooleanQueryRewrite minscore

2009-12-09 Thread Robert Muir (JIRA)
TopTermsScoringBooleanQueryRewrite minscore
---

 Key: LUCENE-2140
 URL: https://issues.apache.org/jira/browse/LUCENE-2140
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Search
Affects Versions: Flex Branch
Reporter: Robert Muir
Priority: Minor
 Fix For: Flex Branch


when using the TopTermsScoringBooleanQueryRewrite (LUCENE-2123), it would be 
nice if MultiTermQuery could set an attribute specifying the minimum required 
score once the Priority Queue is filled. 

This way, FilteredTermsEnums could adjust their behavior based on the minimal 
score a term needs in order to actually be useful (i.e. not just pass through 
the PQ).

An example is FuzzyTermsEnum: at some point the bottom of the priority queue 
contains words with edit distance 1, and enumerating any further terms is 
simply a waste of time.
This is because terms are compared by score, then term text. So in this case 
FuzzyTermsEnum could simply seek to the exact match, then end.

This behavior could also be generalized for all n, for a different impl of 
fuzzyquery that only looks in the term dictionary for words within edit 
distance n', where n' corresponds to the lowest-scoring term in the PQ (the 
enums adjust their behavior during term enumeration depending upon this 
attribute).

Other FilteredTermsEnums could make use of this minimal score in their own way, 
to drive the most efficient behavior so that they do not waste time enumerating 
useless terms.
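
A hypothetical shape for such an attribute (not a committed API; names are 
illustrative) could be as small as:

{code}
import org.apache.lucene.util.Attribute;

// The rewrite would fill in the threshold once its priority queue is full, and a
// FilteredTermsEnum would read it to skip terms that cannot be competitive.
public interface MaxNonCompetitiveBoostAttribute extends Attribute {
  void setMaxNonCompetitiveBoost(float maxNonCompetitiveBoost);

  /** Float.NEGATIVE_INFINITY until the priority queue is full. */
  float getMaxNonCompetitiveBoost();
}
{code}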


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Updated: (LUCENE-2089) explore using automaton for fuzzyquery

2009-12-09 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-2089:


Description: 
Mark brought this up on LUCENE-1606 (i will assign this to him, I know he is 
itching to write that nasty algorithm)

we can optimize fuzzyquery by using AutomatonTermsEnum, here is my idea
* up front, calculate the maximum required K edits needed to match the users 
supplied float threshold.
* for at least common N up to K (1,2,3, etc) we should create a DFA for each N. 

if the required K is above our supported DFA-based N, we use "dumb mode" at 
first (no seeking, no DFA, just brute force like now).
As the pq fills, we swap progressively lower DFAs into the enum, based upon the 
lowest score in the pq.
This should work well on avg, at high N, you will typically fill the pq very 
quickly since you will match many terms. 
This not only provides a mechanism to switch to more efficient DFAs during 
enumeration, but also to switch from "dumb mode" to "smart mode".

i modified my wildcard benchmark to generate random fuzzy queries.
* Pattern: 7N stands for NNN, etc.
* AvgMS_DFA: this is the time spent creating the automaton (constructor)

||Pattern||Iter||AvgHits||AvgMS(old)||AvgMS (new,total)||AvgMS_DFA||
|7N|10|64.0|4155.9|38.6|20.3|
|14N|10|0.0|2511.6|46.0|37.9|   
|28N|10|0.0|2506.3|93.0|86.6|
|56N|10|0.0|2524.5|304.4|298.5|

as you can see, this prototype is no good yet, because it creates the DFA in a 
slow way. right now it creates an NFA, and all this wasted time is in NFA->DFA 
conversion.
So, for a very long string, it just gets worse and worse. This has nothing to 
do with lucene, and here you can see, the TermEnum is fast (AvgMS - AvgMS_DFA), 
there is no problem there.

instead we should just build a DFA to begin with, maybe with this paper: 
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.652
we can precompute the tables with that algorithm up to some reasonable K, and 
then I think we are ok.

the paper references using http://portal.acm.org/citation.cfm?id=135907 for 
linear minimization, if someone wants to implement this they should not worry 
about minimization.
in fact, we need to at some point determine if AutomatonQuery should even 
minimize FSM's at all, or if it is simply enough for them to be deterministic 
with no transitions to dead states. (The only code that actually assumes 
minimal DFA is the "Dumb" vs "Smart" heuristic and this can be rewritten as a 
summation easily). we need to benchmark really complex DFAs (i.e. write a regex 
benchmark) to figure out if minimization is even helping right now.



  was:
Mark brought this up on LUCENE-1606 (i will assign this to him, I know he is 
itching to write that nasty algorithm)

we can optimize fuzzyquery by using AutomatonTermEnum, here is my idea
* up front, calculate the maximum required K edits needed to match the users 
supplied float threshold.
* for at least common K (1,2,3, etc) we should use automatontermenum. if its 
outside of that, maybe use the existing slow logic. At high K, it will seek too 
much to be helpful anyway.

i modified my wildcard benchmark to generate random fuzzy queries.
* Pattern: 7N stands for NNN, etc.
* AvgMS_DFA: this is the time spent creating the automaton (constructor)

||Pattern||Iter||AvgHits||AvgMS(old)||AvgMS (new,total)||AvgMS_DFA||
|7N|10|64.0|4155.9|38.6|20.3|
|14N|10|0.0|2511.6|46.0|37.9|   
|28N|10|0.0|2506.3|93.0|86.6|
|56N|10|0.0|2524.5|304.4|298.5|

as you can see, this prototype is no good yet, because it creates the DFA in a 
slow way. right now it creates an NFA, and all this wasted time is in NFA->DFA 
conversion.
So, for a very long string, it just gets worse and worse. This has nothing to 
do with lucene, and here you can see, the TermEnum is fast (AvgMS - AvgMS_DFA), 
there is no problem there.

instead we should just build a DFA to begin with, maybe with this paper: 
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.652
we can precompute the tables with that algorithm up to some reasonable K, and 
then I think we are ok.

the paper references using http://portal.acm.org/citation.cfm?id=135907 for 
linear minimization, if someone wants to implement this they should not worry 
about minimization.
in fact, we need to at some point determine if AutomatonQuery should even 
minimize FSM's at all, or if it is simply enough for them to be deterministic 
with no transitions to dead states. (The only code that actually assumes 
minimal DFA is the "Dumb" vs "Smart" heuristic and this can be rewritten as a 
summation easily). we need to benchmark really complex DFAs (i.e. write a regex 
benchmark) to figure out if minimization is even helping right now.




> explore using automaton for fuzzyquery
> --
>
> Key: LUCENE-2089
> URL: https://issues.apache.org/jira/browse/LUCENE-2089
>  

[jira] Updated: (LUCENE-2089) explore using automaton for fuzzyquery

2009-12-09 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-2089:


Description: 
Mark brought this up on LUCENE-1606 (i will assign this to him, I know he is 
itching to write that nasty algorithm)

we can optimize fuzzyquery by using AutomatonTermsEnum, here is my idea
* up front, calculate the maximum required K edits needed to match the users 
supplied float threshold.
* for at least small common E up to some max K (1,2,3, etc) we should create a 
DFA for each E. 

if the required E is above our supported max, we use "dumb mode" at first (no 
seeking, no DFA, just brute force like now).
As the pq fills, we swap progressively lower DFAs into the enum, based upon the 
lowest score in the pq.
This should work well on avg, at high E, you will typically fill the pq very 
quickly since you will match many terms. 
This not only provides a mechanism to switch to more efficient DFAs during 
enumeration, but also to switch from "dumb mode" to "smart mode".

i modified my wildcard benchmark to generate random fuzzy queries.
* Pattern: 7N stands for NNN, etc.
* AvgMS_DFA: this is the time spent creating the automaton (constructor)

||Pattern||Iter||AvgHits||AvgMS(old)||AvgMS (new,total)||AvgMS_DFA||
|7N|10|64.0|4155.9|38.6|20.3|
|14N|10|0.0|2511.6|46.0|37.9|   
|28N|10|0.0|2506.3|93.0|86.6|
|56N|10|0.0|2524.5|304.4|298.5|

as you can see, this prototype is no good yet, because it creates the DFA in a 
slow way. right now it creates an NFA, and all this wasted time is in NFA->DFA 
conversion.
So, for a very long string, it just gets worse and worse. This has nothing to 
do with lucene, and here you can see, the TermEnum is fast (AvgMS - AvgMS_DFA), 
there is no problem there.

instead we should just build a DFA to begin with, maybe with this paper: 
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.652
we can precompute the tables with that algorithm up to some reasonable K, and 
then I think we are ok.

the paper references using http://portal.acm.org/citation.cfm?id=135907 for 
linear minimization, if someone wants to implement this they should not worry 
about minimization.
in fact, we need to at some point determine if AutomatonQuery should even 
minimize FSM's at all, or if it is simply enough for them to be deterministic 
with no transitions to dead states. (The only code that actually assumes 
minimal DFA is the "Dumb" vs "Smart" heuristic and this can be rewritten as a 
summation easily). we need to benchmark really complex DFAs (i.e. write a regex 
benchmark) to figure out if minimization is even helping right now.



  was:
Mark brought this up on LUCENE-1606 (i will assign this to him, I know he is 
itching to write that nasty algorithm)

we can optimize fuzzyquery by using AutomatonTermsEnum, here is my idea
* up front, calculate the maximum required K edits needed to match the users 
supplied float threshold.
* for at least common N up to K (1,2,3, etc) we should create a DFA for each N. 

if the required K is above our supported DFA-based N, we use "dumb mode" at 
first (no seeking, no DFA, just brute force like now).
As the pq fills, we swap progressively lower DFAs into the enum, based upon the 
lowest score in the pq.
This should work well on avg, at high N, you will typically fill the pq very 
quickly since you will match many terms. 
This not only provides a mechanism to switch to more efficient DFAs during 
enumeration, but also to switch from "dumb mode" to "smart mode".

i modified my wildcard benchmark to generate random fuzzy queries.
* Pattern: 7N stands for NNN, etc.
* AvgMS_DFA: this is the time spent creating the automaton (constructor)

||Pattern||Iter||AvgHits||AvgMS(old)||AvgMS (new,total)||AvgMS_DFA||
|7N|10|64.0|4155.9|38.6|20.3|
|14N|10|0.0|2511.6|46.0|37.9|   
|28N|10|0.0|2506.3|93.0|86.6|
|56N|10|0.0|2524.5|304.4|298.5|

as you can see, this prototype is no good yet, because it creates the DFA in a 
slow way. right now it creates an NFA, and all this wasted time is in NFA->DFA 
conversion.
So, for a very long string, it just gets worse and worse. This has nothing to 
do with lucene, and here you can see, the TermEnum is fast (AvgMS - AvgMS_DFA), 
there is no problem there.

instead we should just build a DFA to begin with, maybe with this paper: 
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.652
we can precompute the tables with that algorithm up to some reasonable K, and 
then I think we are ok.

the paper references using http://portal.acm.org/citation.cfm?id=135907 for 
linear minimization, if someone wants to implement this they should not worry 
about minimization.
in fact, we need to at some point determine if AutomatonQuery should even 
minimize FSM's at all, or if it is simply enough for them to be deterministic 
with no transitions to dead states. (The only code that actually assumes 
minimal DFA is the "Dumb" vs "Smart" heuristic and this can be rewritten as a 
summation easily). we need to benchmark really complex DFAs (i.e. write a regex 
benchmark) to figure out if minimization is even helping right now.
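
As an aside on the "calculate the maximum required K edits" step mentioned in both 
versions of the description, the arithmetic is roughly the following; this is a 
sketch under the assumption that FuzzyQuery's similarity is 1 - edits / min(|query|, 
|term|), not the exact code in any patch:

{code}
// Rough illustration of deriving the edit-distance budget from the similarity
// threshold. Terms shorter than the query only lower the bound further.
public final class MaxEditsSketch {
  public static int maxEdits(String queryText, float minSimilarity) {
    return (int) ((1.0f - minSimilarity) * queryText.length());
  }

  public static void main(String[] args) {
    // e.g. "lucene" with the default 0.5 threshold allows up to 3 edits
    System.out.println(maxEdits("lucene", 0.5f));
  }
}
{code}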

[jira] Resolved: (LUCENE-2090) convert automaton to char[] based processing and TermRef / TermsEnum api

2009-12-09 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-2090.
-

   Resolution: Fixed
Fix Version/s: (was: 3.1)
   Flex Branch

i am marking this one resolved, the goals have been met (char[]/byte[] based 
processing and TermRef/TermsEnum api)


> convert automaton to char[] based processing and TermRef / TermsEnum api
> 
>
> Key: LUCENE-2090
> URL: https://issues.apache.org/jira/browse/LUCENE-2090
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: Search
>Reporter: Robert Muir
>Priority: Minor
> Fix For: Flex Branch
>
> Attachments: LUCENE-2090_TermRef_flex.patch, 
> LUCENE-2090_TermRef_flex2.patch, LUCENE-2090_TermRef_flex3.patch
>
>
> The automaton processing is currently done with String, mostly because 
> TermEnum is based on String.
> it is easy to change the processing to work with char[], since behind the 
> scenes this is used anyway.
> in general I think we should make sure char[] based processing is exposed in 
> the automaton pkg anyway, for things like pattern-based tokenizers and such.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Commented: (LUCENE-2138) Allow custom index readers when using IndexWriter.getReader

2009-12-09 Thread Jason Rutherglen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788449#action_12788449
 ] 

Jason Rutherglen commented on LUCENE-2138:
--

I'm curious, will flex indexing affect development on
LUCENE-2026? Do they overlap?

What's the use case for 2026? I thought about how it could help
with implementing LUCENE-1313, however fairly large changes like
these sometimes consume more time than they're worth. I think
this patch, 2138, is simple enough to be included in 3.1 as is;
then, if there's an itch to be scratched by implementing 2026,
the 2138 functionality is easy enough to add.

> Allow custom index readers when using IndexWriter.getReader
> ---
>
> Key: LUCENE-2138
> URL: https://issues.apache.org/jira/browse/LUCENE-2138
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: Index
>Affects Versions: 3.0
>Reporter: Jason Rutherglen
>Priority: Minor
> Fix For: 3.1
>
> Attachments: LUCENE-2138.patch
>
>
> This is needed for backwards compatible support with Solr, and is a spin-off 
> from SOLR-1606.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Updated: (LUCENE-2122) Use JUnit4 capabilites for more thorough Locale testing for classes deriving from LocalizedTestCase

2009-12-09 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated LUCENE-2122:
---

Attachment: LUCENE-2122-r4.patch

OK, I plead advanced senility or some other excuse for the last patch.

Robert:
Thanks so much for looking this over. I have no clue what I was thinking with 
TestDateTools, or the other classes that derive from LocalizedTestCase.

The @Parameterized and @RunWith annotations only needed to be in 
LocalizedTestCase; all the inheriting classes just rely on the base class to 
collect the different locales.

Anyway, this one should be much better.

Erick
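
A rough sketch of the pattern being described, assuming stock JUnit 4 
@Parameterized (the real LocalizedTestCase will differ in its details):

{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.Locale;

import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// The base class owns the runner and the locale list; a concrete test only needs
// a constructor that passes the Locale up, and it gets run once per entry below.
@RunWith(Parameterized.class)
public abstract class LocalizedTestBaseSketch {

  protected final Locale locale;

  protected LocalizedTestBaseSketch(Locale locale) {
    this.locale = locale;
  }

  @Parameters
  public static Collection<Object[]> locales() {
    Collection<Object[]> params = new ArrayList<Object[]>();
    // Put the default locale first so a test-driven failure shows up there first
    // (it also appears again in getAvailableLocales(), i.e. it gets run twice).
    params.add(new Object[] { Locale.getDefault() });
    for (Locale l : Locale.getAvailableLocales()) {
      params.add(new Object[] { l });
    }
    return params;
  }
}
{code}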

> Use JUnit4 capabilites for more thorough Locale testing for classes deriving 
> from LocalizedTestCase
> ---
>
> Key: LUCENE-2122
> URL: https://issues.apache.org/jira/browse/LUCENE-2122
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: Other
>Affects Versions: 3.1
>Reporter: Erick Erickson
>Assignee: Robert Muir
>Priority: Minor
> Fix For: 3.1
>
> Attachments: LUCENE-2122-r2.patch, LUCENE-2122-r3.patch, 
> LUCENE-2122-r4.patch, LUCENE-2122.patch
>
>
> Use the @Parameterized capabilities of Junit4 to allow more extensive testing 
> of Locales.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Commented: (LUCENE-2122) Use JUnit4 capabilites for more thorough Locale testing for classes deriving from LocalizedTestCase

2009-12-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788455#action_12788455
 ] 

Robert Muir commented on LUCENE-2122:
-

thanks Erick, i will play around with the patch some and generally just 
double-check that the locale stuff is doing what we want; it looks like it will.

i haven't tested yet, but looking at the code i have a few questions (i can try 
to add these to the patch, just curious what you think):
1. if a test fails under some locale, say th_TH, will JUnit 4 attempt to print 
this parameter out in some way so I know that it failed? If not, do you know of 
a hack?
2. i am thinking about reordering the locale array so that it tests the default 
one first. if you are trying to do some test-driven dev it might be strange if 
the test fails under a different locale first. I think this one is obvious, I 
will play with it to see how it behaves now.


> Use JUnit4 capabilites for more thorough Locale testing for classes deriving 
> from LocalizedTestCase
> ---
>
> Key: LUCENE-2122
> URL: https://issues.apache.org/jira/browse/LUCENE-2122
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: Other
>Affects Versions: 3.1
>Reporter: Erick Erickson
>Assignee: Robert Muir
>Priority: Minor
> Fix For: 3.1
>
> Attachments: LUCENE-2122-r2.patch, LUCENE-2122-r3.patch, 
> LUCENE-2122-r4.patch, LUCENE-2122.patch
>
>
> Use the @Parameterized capabilities of Junit4 to allow more extensive testing 
> of Locales.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



Re: [jira] Commented: (LUCENE-2122) Use JUnit4 capabilites for more thorough Locale testing for classes deriving from LocalizedTestCase

2009-12-09 Thread Erick Erickson
It's embarrassing that I had to poke around for 1/2 hour to find *code that
I had written recently*. Siiiggghhh. Maybe this time it'll stick.

In LuceneTestCaseJ4, we added an @Rule-annotated class
InterceptTestCaseEvents whose methods get called whenever an "event"
happens: things like succeeded, failed, started, etc. The failed method
looks for a method in the failing class called reportAdditionalFailureInfo.
So by adding something like the below to LocalizedTestCase you can print any
information you have available whenever things fail. It gets printed in
addition to the usual information JUnit prints. Warning: I tested this
*very* lightly; it at least worked in the one case I tried.

  @Override
  public void reportAdditionalFailureInfo() {
    // Print which locale the failing test was running under.
    System.out.println("Failing locale is " +
        _currentLocale.getDisplayName(_origDefault));
    // Call to super.reportAdditionalFailureInfo() -- untested, and probably not
    // necessary in this context. Left as an exercise for the reader.
    super.reportAdditionalFailureInfo();
  }

Currently this only does extra stuff for failed cases, but it would be
trivial to extend to start, end, and succeeded whenever there's a need.
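
For reference, one way such a rule can be wired up with stock JUnit 4.7 is the
sketch below; this is my own illustration using TestWatchman, not the actual
InterceptTestCaseEvents code:

import org.junit.Rule;
import org.junit.rules.MethodRule;
import org.junit.rules.TestWatchman;
import org.junit.runners.model.FrameworkMethod;

// Sketch: a rule that intercepts test events and calls back into the test
// class when a test fails.
public abstract class InterceptingTestBase {

  /** Hook for subclasses; does nothing by default. */
  public void reportAdditionalFailureInfo() {
  }

  @Rule
  public MethodRule interceptTestCaseEvents = new TestWatchman() {
    @Override
    public void failed(Throwable e, FrameworkMethod method) {
      // Ask the concrete test class for any extra context (e.g. the locale).
      reportAdditionalFailureInfo();
    }
  };
}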

Your second question seems quite doable, just by putting the default locale
at the front of the list before getting into the loop. I'm not sure removing
the default locale from the rest of the list is worth the effort, so it would
get run twice. But if you're writing the code, do whatever you want.

Gotta get some sleep ...

Erick

On Wed, Dec 9, 2009 at 9:45 PM, Robert Muir (JIRA)  wrote:

>
>[
> https://issues.apache.org/jira/browse/LUCENE-2122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788455#action_12788455]
>
> Robert Muir commented on LUCENE-2122:
> -
>
> thanks Erick, i will play around with the patch some, generally just
> double-check the locale stuff is doing what we want, looks like it will.
>
> i havent tested yet, but looking at the code i have a few questions (i can
> try to add these to the patch just curious what you think):
> 1. if a test fails under some locale, say th_TH, will junit 4 attempt to
> print this parameter out in some way so I know that it failed? If not do you
> know of a hack?
> 2. i am thinking about reordering the locale array so that it tests the
> default one first. if you are trying to do some test-driven dev it might be
> strange if the test fails under a different locale first. I think this one
> is obvious, I will play with it to see how it behaves now.
>
>
> > Use JUnit4 capabilites for more thorough Locale testing for classes
> deriving from LocalizedTestCase
> >
> ---
> >
> > Key: LUCENE-2122
> > URL: https://issues.apache.org/jira/browse/LUCENE-2122
> > Project: Lucene - Java
> >  Issue Type: Improvement
> >  Components: Other
> >Affects Versions: 3.1
> >Reporter: Erick Erickson
> >Assignee: Robert Muir
> >Priority: Minor
> > Fix For: 3.1
> >
> > Attachments: LUCENE-2122-r2.patch, LUCENE-2122-r3.patch,
> LUCENE-2122-r4.patch, LUCENE-2122.patch
> >
> >
> > Use the @Parameterized capabilities of Junit4 to allow more extensive
> testing of Locales.
>
> --
> This message is automatically generated by JIRA.
> -
> You can reply to this email to add a comment to the issue online.
>
>
> -
> To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-dev-h...@lucene.apache.org
>
>


Lucene 2.4.1 src .zip issue

2009-12-09 Thread Erik Hatcher
I was doing some research on past releases of Lucene and downloaded  
the archived 2.4.1 src .zip and got this:


~/Downloads: unzip lucene-2.4.1-src.zip
Archive:  lucene-2.4.1-src.zip
  End-of-central-directory signature not found.  Either this file is  
not
  a zipfile, or it constitutes one disk of a multi-part archive.  In  
the
  latter case the central directory and zipfile comment will be found  
on

  the last disk(s) of this archive.
unzip:  cannot find zipfile directory in one of lucene-2.4.1-src.zip or
lucene-2.4.1-src.zip.zip, and cannot find lucene-2.4.1- 
src.zip.ZIP, period.


Yikes!

Anyone else have issues with it? Or is it anomalous to my download?

Erik


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Commented: (LUCENE-2140) TopTermsScoringBooleanQueryRewrite minscore

2009-12-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788547#action_12788547
 ] 

Uwe Schindler commented on LUCENE-2140:
---

I would add this extra setting to BoostAttribute itself, because it correlates 
with the returned boost. This way the attribute is used in both directions. The 
only caveats:
- clear() should leave this setting untouched
- equals and hashCode should maybe ignore it, too
- the default will be Float.NEGATIVE_INFINITY

The code to support this was already added to the newest patch of LUCENE-2123 
in a few lines, as that patch now also does not even try to insert 
uncompetitive hits into the PQ. TermCollector would be changed from an 
interface to an abstract class that has a protected final accessor to the 
boost attribute. But for now, we should wait with adding this to 
BoostAttribute.
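
A sketch of what the impl side could look like (my own illustration, not the 
committed BoostAttributeImpl) follows; note how clear() resets only the per-term 
boost, while equals()/hashCode() ignore the threshold as suggested above:

{code}
import org.apache.lucene.util.AttributeImpl;

// Hypothetical attribute impl carrying both the per-term boost and the
// "max non-competitive boost" threshold discussed in this issue.
public class BoostWithThresholdAttributeImpl extends AttributeImpl {
  private float boost = 1.0f;
  private float maxNonCompetitiveBoost = Float.NEGATIVE_INFINITY;

  public void setBoost(float boost) { this.boost = boost; }
  public float getBoost() { return boost; }

  public void setMaxNonCompetitiveBoost(float max) { this.maxNonCompetitiveBoost = max; }
  public float getMaxNonCompetitiveBoost() { return maxNonCompetitiveBoost; }

  @Override
  public void clear() {
    boost = 1.0f; // the threshold is intentionally left untouched here
  }

  @Override
  public void copyTo(AttributeImpl target) {
    BoostWithThresholdAttributeImpl other = (BoostWithThresholdAttributeImpl) target;
    other.boost = boost;
    other.maxNonCompetitiveBoost = maxNonCompetitiveBoost;
  }

  @Override
  public boolean equals(Object other) {
    // ignore the threshold: only the per-term boost defines equality
    return other instanceof BoostWithThresholdAttributeImpl
        && ((BoostWithThresholdAttributeImpl) other).boost == boost;
  }

  @Override
  public int hashCode() {
    return Float.floatToIntBits(boost);
  }
}
{code}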

> TopTermsScoringBooleanQueryRewrite minscore
> ---
>
> Key: LUCENE-2140
> URL: https://issues.apache.org/jira/browse/LUCENE-2140
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: Search
>Affects Versions: Flex Branch
>Reporter: Robert Muir
>Priority: Minor
> Fix For: Flex Branch
>
>
> when using the TopTermsScoringBooleanQueryRewrite (LUCENE-2123), it would be 
> nice if MultiTermQuery could set an attribute specifying the minimum required 
> score once the Priority Queue is filled. 
> This way, FilteredTermsEnums could adjust their behavior accordingly based on 
> the minimal score needed to actually be a useful term (i.e. not just pass 
> thru the pq)
> An example is FuzzyTermsEnum: at some point the bottom of the priority queue 
> contains words with edit distance of 1 and enumerating any further terms is 
> simply a waste of time.
> This is because terms are compared by score, then termtext. So in this case 
> FuzzyTermsEnum could simply seek to the exact match, then end.
> This behavior could be also generalized for all n, for a different impl of 
> fuzzyquery where it is only looking in the term dictionary for words within 
> edit distance of n' which is the lowest scoring term in the pq (they adjust 
> their behavior during enumeration of the terms depending upon this attribute).
> Other FilteredTermsEnums could make use of this minimal score in their own 
> way, to drive the most efficient behavior so that they do not waste time 
> enumerating useless terms.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Updated: (LUCENE-2123) Move FuzzyQuery rewrite as separate RewriteMode into MTQ, was: Highlighter fails to highlight FuzzyQuery

2009-12-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-2123:
--

Attachment: LUCENE-2123.patch
LUCENE-2123-flex.patch

After sleeping one more night on it, I added code to not even put the terms 
into the PQ when they are not competitive. More support for automaton query 
will come only in flex with LUCENE-2140.

I'd like to commit this during the day. Thanks for all the support and 
interesting discussions.

> Move FuzzyQuery rewrite as separate RewriteMode into MTQ, was: Highlighter 
> fails to highlight FuzzyQuery
> 
>
> Key: LUCENE-2123
> URL: https://issues.apache.org/jira/browse/LUCENE-2123
> Project: Lucene - Java
>  Issue Type: Bug
>  Components: contrib/highlighter
>Affects Versions: Flex Branch
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: Flex Branch
>
> Attachments: LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, 
> LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, 
> LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, LUCENE-2123.patch, 
> LUCENE-2123.patch, LUCENE-2123.patch, LUCENE-2123.patch
>
>
> As FuzzyQuery does not allow to change the rewrite mode, highlighter fails 
> with UOE in flex since LUCENE-2110, because it changes the rewrite mode to 
> Boolean query. The fix is: Allow MTQ to change rewrite method and make 
> FUZZY_REWRITE public for that.
> The rewrite mode will live in MTQ as TOP_TERMS_SCORING_BOOLEAN_REWRITE. Also 
> the code will be refactored to make heavy reuse of term enumeration code and 
> only plug in the PQ for filtering the top terms.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Updated: (LUCENE-2123) Move FuzzyQuery rewrite as separate RewriteMode into MTQ, was: Highlighter fails to highlight FuzzyQuery

2009-12-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-2123:
--

Lucene Fields: [New, Patch Available]  (was: [New])
Fix Version/s: 3.1

> Move FuzzyQuery rewrite as separate RewriteMode into MTQ, was: Highlighter 
> fails to highlight FuzzyQuery
> 
>
> Key: LUCENE-2123
> URL: https://issues.apache.org/jira/browse/LUCENE-2123
> Project: Lucene - Java
>  Issue Type: Bug
>  Components: contrib/highlighter
>Affects Versions: Flex Branch
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: Flex Branch, 3.1
>
> Attachments: LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, 
> LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, 
> LUCENE-2123-flex.patch, LUCENE-2123-flex.patch, LUCENE-2123.patch, 
> LUCENE-2123.patch, LUCENE-2123.patch, LUCENE-2123.patch
>
>
> As FuzzyQuery does not allow to change the rewrite mode, highlighter fails 
> with UOE in flex since LUCENE-2110, because it changes the rewrite mode to 
> Boolean query. The fix is: Allow MTQ to change rewrite method and make 
> FUZZY_REWRITE public for that.
> The rewrite mode will live in MTQ as TOP_TERMS_SCORING_BOOLEAN_REWRITE. Also 
> the code will be refactored to make heavy reuse of term enumeration code and 
> only plug in the PQ for filtering the top terms.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org