[ 
https://issues.apache.org/jira/browse/SOLR-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12920536#action_12920536
 ] 

David Smiley commented on SOLR-2155:
------------------------------------

Yes, absolutely Rob.  I went with geohashes because it was a straightforward 
implementation to prove out the concept; it appears my patch is the first of 
its kind for Lucene/Solr.  For a more efficient Morton representation, I have 
already looked at the work going on at javageomodel 
(http://code.google.com/p/javageomodel/), which was built for use with Google 
BigTable.  Keep in mind that the code there is largely pure Java.  It's the 
same concept, but it uses a dictionary of size 16 (representable by 4 bits), 
which results in cleaner algorithms than geohashes' 5-bit dictionary with its 
awkward even/odd rules.  But yes, it would be more efficient to store the 
actual intended bits, not characters.
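To make the even/odd contrast concrete, here is a minimal, hypothetical sketch (not from the patch or from javageomodel) of a Morton-style encoding where the interleaved bits map directly onto hex characters, so every character subdivides the cell by the same uniform 4x4 grid:

```java
public class MortonSketch {
    // Interleave the bits of two 16-bit cell coordinates: lon bits in even
    // positions, lat bits in odd positions (standard Morton / Z-order).
    static int interleave(int lonBits, int latBits) {
        int morton = 0;
        for (int i = 0; i < 16; i++) {
            morton |= ((lonBits >> i) & 1) << (2 * i);
            morton |= ((latBits >> i) & 1) << (2 * i + 1);
        }
        return morton;
    }

    public static void main(String[] args) {
        // Quantize an example lat/lon into 16-bit cell coordinates.
        double lat = 42.36, lon = -71.06;
        int latBits = (int) ((lat + 90.0) / 180.0 * 65536);
        int lonBits = (int) ((lon + 180.0) / 360.0 * 65536);
        int code = interleave(lonBits, latBits);
        // Each hex digit holds 4 interleaved bits (2 per axis), so every
        // successive character refines the cell by a uniform 4x4 grid --
        // no even/odd special cases as with geohash's 5-bit alphabet.
        System.out.println(Integer.toHexString(code));
    }
}
```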

One area that I know nothing about is how scoring/sorting actually works within 
Lucene.  I didn't need it for the work here, but many people clearly want it.  
In your opinion Rob, is there any opportunity for geo sorting/relevancy code to 
take advantage of any efficiencies done here, or are they completely unrelated?

(I meant to track you down at LuceneRevolution to say hi but I missed the 
opportunity.)

> Geospatial search using geohash prefixes
> ----------------------------------------
>
>                 Key: SOLR-2155
>                 URL: https://issues.apache.org/jira/browse/SOLR-2155
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: David Smiley
>         Attachments: GeoHashPrefixFilter.patch
>
>
> There currently isn't a solution in Solr for doing geospatial filtering on 
> documents that have a variable number of points.  This scenario occurs when 
> there is location extraction (e.g., via a "gazetteer") occurring on free text.  
> None, one, or many geospatial locations might be extracted from any given 
> document and users want to limit their search results to those occurring in a 
> user-specified area.
> I've implemented this by furthering the GeoHash-based work in Lucene/Solr 
> with a geohash-prefix-based filter.  A geohash refers to a lat-lon box on the 
> earth.  Each successive character added further subdivides the box into a 4x8 
> (or 8x4 depending on the even/odd length of the geohash) grid.  The first 
> step in this scheme is figuring out which geohash grid squares cover the 
> user's search query.  I've added various extra methods to GeoHashUtils (and 
> added tests) to assist in this purpose.  The next step is an actual Lucene 
> Filter, GeoHashPrefixFilter, that uses these geohash prefixes in 
> TermsEnum.seek() to skip to relevant grid squares in the index.  Once a 
> matching geohash grid is found, the points therein are compared against the 
> user's query to see if it matches.  I created an abstraction GeoShape 
> extended by subclasses named PointDistance... and CartesianBox.... to support 
> different queried shapes so that the filter need not care about these details.
> This work was presented at LuceneRevolution in Boston on October 8th.
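
The prefix-seek idea quoted above can be illustrated with a loose standalone sketch (hypothetical names, not the actual GeoHashPrefixFilter code), using a sorted set to stand in for Lucene's sorted term dictionary:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

// Sketch of the prefix-seek idea: given a sorted index of geohash terms,
// seek to each covering grid-square prefix and scan only the terms under it.
public class PrefixSeekSketch {
    static List<String> matchingTerms(TreeSet<String> index, List<String> prefixes) {
        List<String> hits = new ArrayList<>();
        for (String prefix : prefixes) {
            // tailSet(prefix) plays the role of TermsEnum.seek(prefix):
            // jump to the first term >= prefix instead of scanning everything.
            for (String term : index.tailSet(prefix)) {
                if (!term.startsWith(prefix)) break;  // left this grid square
                // Candidate point; the real filter would still compare it
                // against the user's query shape (distance, box, etc.).
                hits.add(term);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        TreeSet<String> index = new TreeSet<>(
                List.of("drt2y", "drt2z", "dru00", "gbsuv"));
        System.out.println(matchingTerms(index, List.of("drt2")));
        // prints [drt2y, drt2z]
    }
}
```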

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
