Hi,

First off, can I say how much I love GiST? It has already solved a few problems for me that seemed impossible to handle with real-time queries. Thanks to everyone who works on it!

I'm developing a geographic index based on a set of zip code boundaries. Points of interest (POI) will fall within some boundaries and not others, and I need to be able to find which POI are within a specified boundary.

I think I have two options (see below), and I'm wondering if anyone has an opinion or experience as to whether one or the other will have substantially different performance characteristics. I can obviously test when I get that far, but I'd prefer to try the anticipated faster route first if anyone has existing experience they can share:

1) Index a series of circles of radius NN around each boundary marker (a lat/long point), then search for the POI that fall within any of the specified circles.

2) Index a polygon per boundary that marks the "minimum area" around the boundary markers in question, then search for the POI that fall within that single polygon.

The polygon will have more points, but there will be more circles to search. My understanding of GiST is limited, so I'm not sure whether there's a performance benefit to searching many small circles versus a few larger polygons (a rough sketch of both queries is below).
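In case it helps frame the question, here's a rough, untested sketch of what I had in mind for each option, using the built-in geometric types. All table and column names are made up (poi.location and boundary_marker.location as points, zip_boundary.outline as a polygon), and the 0.1 radius and zip value are just placeholders:

    -- GiST index on the POI locations (built-in point_ops opclass);
    -- note the built-in types treat lat/long as flat Cartesian coordinates
    CREATE INDEX poi_location_gist ON poi USING gist (location);

    -- Option 1: POI inside any circle of radius NN around a boundary marker
    SELECT p.id
    FROM   poi p
    JOIN   boundary_marker m
           ON p.location <@ circle(m.location, 0.1)
    WHERE  m.zip = '12345';

    -- Option 2: POI inside the single precomputed polygon for a boundary
    -- (older releases spell the contained-by operator @ instead of <@)
    SELECT p.id
    FROM   poi p
    JOIN   zip_boundary b
           ON p.location <@ b.outline
    WHERE  b.zip = '12345';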

My tables are of this size:

# of POI: 50,000
# of zip blocks (with and without regions): 217,000
# of zip blocks in a given city (and hence in a given polygon): ~5

Any thoughts or ideas?

Thank you,

Steve

p.s. I could use a GIS system alongside Postgres, but performance and efficiency are key to this system, and it seems to me that raw GiST-indexed SQL queries will be fastest and put the lowest load on the server. Is that a fair assumption?
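For what it's worth, my plan for testing is simply to load the data both ways and compare EXPLAIN ANALYZE output on queries like the sketch above (again, made-up names), e.g.:

    EXPLAIN ANALYZE
    SELECT p.id
    FROM   poi p
    JOIN   zip_boundary b ON p.location <@ b.outline
    WHERE  b.zip = '12345';
    -- confirm the plan shows an index scan on poi_location_gist
    -- rather than a sequential scan over all 50,000 POI rows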

