On Wed, Jul 31, 2013 at 5:54 AM, Rémi Cura <[email protected]> wrote:
> Hey,
> my 2 cents (sorry I can't access your example).
>
> Depending on what you want, you may be interested in sampling:
> you create a grid of points (there is a function for that) or a grid of
> square polygons.
> Then you compute for each point (square) the number of polygons it falls
> within (a simple SQL query involving count(*) OVER and ST_Intersects).
> Then, when you want the space where between N and M polygons overlap,
> you just query the point/square table with WHERE count > N AND count < M.
>
> It should run very fast, and even faster if you put a btree index on the
> count column.
> I don't know what you want, but if this is some kind of indicator,
> sampling may be legitimate.
> If you want a crisp boundary, sampling can still speed up the computation
> (doing the precise computation only on the polygons at the border).
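[The approach above could be sketched roughly as follows. All table and column names here (polys, sample_pts, pt_counts) and the grid extent/spacing are illustrative assumptions, not from the thread; this version counts via a LEFT JOIN + GROUP BY rather than the count(*) OVER window form Rémi mentions, but the idea is the same.]

```sql
-- Assumes a polygon table polys(id, geom) in SRID 4326 (hypothetical).

-- 1. Build a regular grid of sample points over the area of interest
--    (extent and 0.01-degree spacing are placeholder values).
CREATE TABLE sample_pts AS
SELECT row_number() OVER () AS pt_id,
       ST_SetSRID(ST_MakePoint(x, y), 4326) AS geom
FROM generate_series(-74.0, -70.0, 0.01) AS x,
     generate_series(41.0, 44.0, 0.01) AS y;

-- 2. For each sample point, count the polygons covering it.
CREATE TABLE pt_counts AS
SELECT p.pt_id, p.geom, count(q.id) AS n_polys
FROM sample_pts p
LEFT JOIN polys q ON ST_Intersects(q.geom, p.geom)
GROUP BY p.pt_id, p.geom;

-- 3. Index the count so the N..M filter is cheap.
CREATE INDEX ON pt_counts (n_polys);

-- "Space covered by between N and M polygons", e.g. N=5, M=10:
SELECT * FROM pt_counts WHERE n_polys > 5 AND n_polys < 10;
```

A GiST index on polys.geom would make the ST_Intersects join usable at realistic grid resolutions.]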
I would prefer not to resort to sampling. Some of the boundaries are drawn arbitrarily precisely, and I'd like to preserve that precision.

http://regionaldifferences.com/results.html?region=New%20England&lat=42&lon=-73&zoom=6 (now with 708 polygons)

_______________________________________________
postgis-users mailing list
[email protected]
http://lists.osgeo.org/cgi-bin/mailman/listinfo/postgis-users
