On Dec 25, 9:41 am, bratliff <[email protected]> wrote:
> Improvements have been made to standard GPolys since
> "fromEncoded" GPolys were introduced.  The API does perform
> basic thinning of adjacent points.  The ability of GPoly to
> deal with large polys is different today than it was a year
> ago.

I'm sure that the API has improved in many respects over the
last year, but what evidence do you have that standard
GPolys perform as well as encoded GPolys for complicated
geometries? The following two pages use the current API to
display the same polyline with about 9,700 points; one is
encoded, the other is standard, and the encoded polyline's
performance is much better:
http://facstaff.unca.edu/mcmcclur/GoogleMaps/EncodePolyline/proper.html
http://facstaff.unca.edu/mcmcclur/GoogleMaps/EncodePolyline/normal.html
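
For anyone wondering what an "encoded" polyline actually contains: the
documented format stores each point as a delta from the previous one,
scaled to 1e-5 degrees, left-shifted (with bit inversion for negatives),
and packed into 5-bit chunks offset into printable ASCII.  A minimal
sketch in Python for illustration (the API itself is JavaScript and
consumes the resulting string via GPolyline.fromEncoded):

```python
def encode_value(v: int) -> str:
    """Encode one signed delta (in units of 1e-5 degrees) as ASCII chunks."""
    # Left-shift so the sign bit moves to the low end; invert if negative.
    v = ~(v << 1) if v < 0 else (v << 1)
    chunks = []
    # Emit 5 bits at a time, low bits first; 0x20 flags "more chunks follow".
    while v >= 0x20:
        chunks.append((0x20 | (v & 0x1F)) + 63)
        v >>= 5
    chunks.append(v + 63)
    return "".join(chr(c) for c in chunks)

def encode_polyline(points):
    """Encode a list of (lat, lng) pairs in degrees."""
    result = []
    prev_lat = prev_lng = 0
    for lat, lng in points:
        ilat, ilng = round(lat * 1e5), round(lng * 1e5)
        result.append(encode_value(ilat - prev_lat))
        result.append(encode_value(ilng - prev_lng))
        prev_lat, prev_lng = ilat, ilng
    return "".join(result)
```

Running this on the three-point example from Google's own documentation,
[(38.5, -120.2), (40.7, -120.95), (43.252, -126.453)], produces
"_p~iF~ps|U_ulLnnqC_mqNvxq`@".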

> Zoom strings are a basically flawed idea. ... Doing it in
> the API reduces the chance of mistakes.  Doing it in the API
> enables averaging between points.  Doing it in the API also
> could benefit "on-the-fly" polys.

Doing it in the API also takes time.  Now compare the
performance of the following two maps:
http://facstaff.unca.edu/mcmcclur/GoogleMaps/EncodePolyline/proper.html
http://facstaff.unca.edu/mcmcclur/GoogleMaps/EncodePolyline/exampleGGeoXML.html

In this second example, the encoding is performed by GGeoXml.
As a result, there is a bit of a delay while the file loads.
It's not much here, but an example with, say, 50,000 or
500,000 points would show a much larger delay.

> Douglas-Peucker for zoom strings is overkill.

I chose to adapt the Douglas-Peucker algorithm to polyline
encoding for three reasons:
  1) It's the industry standard.  ArcGIS, for example, uses
  this algorithm for polyline simplification.
  2) There is a very simple geometric correspondence between
  the DP algorithm and the encoding process.
  3) It's relatively fast.  I mean this in a quite
  quantitative sense: its expected time complexity is
  O(n*log(n)).  My first version of the polyline encoder was
  based on the simpler and faster vertex-reduction algorithm
  (O(n)), but the result was not of equal quality.
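
For reference, here is an illustrative Python sketch of the DP recursion
(not my encoder's actual code): find the point farthest from the line
through the endpoints, keep it and recurse on both halves if it exceeds
the tolerance, otherwise collapse the run to its endpoints.  Note that
nothing breaks when the first and last points coincide:

```python
import math

def douglas_peucker(points, tolerance):
    """Simplify a polyline of (x, y) pairs; tolerance is in the same units."""
    if len(points) < 3:
        return list(points)
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    seg_len2 = dx * dx + dy * dy
    max_dist, max_index = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        if seg_len2 == 0:
            # Coincident endpoints (a closed polyline): fall back to
            # distance from the shared endpoint.
            dist = math.hypot(px - x0, py - y0)
        else:
            # Perpendicular distance from the line through the endpoints.
            dist = abs(dy * (px - x0) - dx * (py - y0)) / math.sqrt(seg_len2)
        if dist > max_dist:
            max_dist, max_index = dist, i
    if max_dist <= tolerance:
        return [points[0], points[-1]]
    # Keep the farthest point; simplify each half recursively.
    left = douglas_peucker(points[:max_index + 1], tolerance)
    right = douglas_peucker(points[max_index:], tolerance)
    return left[:-1] + right
```

For example, douglas_peucker([(0, 0), (1, 0), (2, 10), (3, 0), (4, 0)], 1.0)
drops the two near-collinear points and keeps the spike at (2, 10).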

> Douglas-Peucker was designed for polylines not for polygons.

This is just not true; there's no reason you can't apply DP
to a polyline whose first and last coordinates are the same.

> If applied to Lat/Lon coordinates rather than to pixel
> coordinates, it will produce biased results.

This is simply an issue involving map projection and arises
in many similar applications.  There's no reason you can't
project first and then apply DP.  You could also take John
and Marcelo's approach and use spherical distance rather
than Cartesian.  Of course, either of these will slow things
down.
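
To make those two alternatives concrete, here is a hedged sketch in
Python: a spherical-Mercator projection (the projection Google Maps
itself uses) whose output can feed Cartesian distances to DP, and one
common spherical-distance formula (haversine) for the
distance-on-the-sphere approach.  The names and constants below are
mine, not from any of the scripts linked above:

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS84 equatorial radius in meters

def mercator_project(lat, lng):
    """Project (lat, lng) in degrees to spherical-Mercator meters."""
    x = math.radians(lng) * EARTH_RADIUS_M
    y = math.log(math.tan(math.pi / 4 + math.radians(lat) / 2)) * EARTH_RADIUS_M
    return x, y

def spherical_distance(lat1, lng1, lat2, lng2):
    """Great-circle distance in meters via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lng2 - lng1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
```

Either way the tolerance acquires real units (meters rather than raw
degrees), which is exactly what removes the latitude bias; the extra
trigonometry per point is where the slowdown comes from.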


At any rate, polyline encoding still seems to be worthwhile
in some circumstances.

Mark McClure
