On May 27, 7:21 pm, Ben Appleton <[email protected]> wrote:
> The v2 encoding scheme did 2 things:
> 1 - Compress the coordinates to reduce bandwidth
> 2 - Optionally, include precomputed levels of detail, which the JS would use
> to render quickly.
>
> Regarding 1: there are open-source libraries to encode and decode polylines
> to reduce bandwidth which you can use.
>
> Regarding 2: V3 computes these levels in JS, so you do not need to compute
> them in your server.
Mmm. We'd been using the bejeebers out of both 1 and 2 at v2.

To resolve (1), I can either compress the data myself (the encoding sketch in the P.S. below) or enable gzip on JSON (which I think is fraught with peril of its own). It's slightly irksome, but not hideous.

With respect to (2), we'd been using RDP (Ramer-Douglas-Peucker, also sketched below) at the server to optimally encode the polylines / polygons for each zoom level, which had the nice side effect of throwing out points that wouldn't be seen, even at zoom level 20. While I appreciate that V3 will now do that calculation for me, it would be more efficient if I could pre-compute them *once* on the server side, resulting in less work every time a client asks for the polyline or polygon in question.

Is this really closed off forever - Google is never going to allow pre-computed polylines?

Herb.
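P.S. For anyone else who lands on this thread: regarding (1), here is a
minimal sketch of the encoded polyline format that the open-source
libraries implement (delta-encode the coordinates at 1e-5 precision, then
pack each delta into 5-bit chunks as printable ASCII). Illustrative
TypeScript, not any particular library's API:

    interface LatLng {
      lat: number;
      lng: number;
    }

    // Pack one signed delta into the 5-bit chunked ASCII format.
    function encodeSignedNumber(num: number): string {
      // Left-shift, inverting negatives, so the sign lands in the low bit.
      let value = num < 0 ? ~(num << 1) : num << 1;
      let out = "";
      while (value >= 0x20) {
        out += String.fromCharCode((0x20 | (value & 0x1f)) + 63);
        value >>= 5;
      }
      return out + String.fromCharCode(value + 63);
    }

    // Encode a path as deltas from the previous point, at 1e-5 precision.
    function encodePath(path: LatLng[]): string {
      let prevLat = 0;
      let prevLng = 0;
      let result = "";
      for (const p of path) {
        const lat = Math.round(p.lat * 1e5);
        const lng = Math.round(p.lng * 1e5);
        result += encodeSignedNumber(lat - prevLat) + encodeSignedNumber(lng - prevLng);
        prevLat = lat;
        prevLng = lng;
      }
      return result;
    }

    // e.g. encodePath([{ lat: 38.5, lng: -120.2 }, { lat: 40.7, lng: -120.95 },
    //                  { lat: 43.252, lng: -126.453 }])
    // yields "_p~iF~ps|U_ulLnnqC_mqNvxq`@", the example from Google's docs.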
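And regarding (2), the RDP pass itself is short enough to run per zoom
level on the server. A minimal sketch; the function names and the
per-zoom tolerance are illustrative choices of mine, and it works in raw
lat/lng degrees rather than projected coordinates, which is a reasonable
approximation away from the poles:

    // Perpendicular distance from p to the line through a and b.
    function perpendicularDistance(p: LatLng, a: LatLng, b: LatLng): number {
      const dx = b.lng - a.lng;
      const dy = b.lat - a.lat;
      const len = Math.hypot(dx, dy);
      if (len === 0) {
        return Math.hypot(p.lng - a.lng, p.lat - a.lat);
      }
      // |cross product| / base length = height of the triangle a-b-p.
      return Math.abs(dx * (p.lat - a.lat) - dy * (p.lng - a.lng)) / len;
    }

    // Ramer-Douglas-Peucker: keep the endpoints, recurse on the farthest point.
    function rdpSimplify(points: LatLng[], epsilon: number): LatLng[] {
      if (points.length < 3) return points.slice();
      const first = points[0];
      const last = points[points.length - 1];
      let maxDist = 0;
      let index = 0;
      for (let i = 1; i < points.length - 1; i++) {
        const d = perpendicularDistance(points[i], first, last);
        if (d > maxDist) {
          maxDist = d;
          index = i;
        }
      }
      // Everything within tolerance of the chord: drop the interior points.
      if (maxDist <= epsilon) return [first, last];
      const left = rdpSimplify(points.slice(0, index + 1), epsilon);
      const right = rdpSimplify(points.slice(index), epsilon);
      return left.slice(0, -1).concat(right);
    }

    // One tolerance per zoom: roughly half a pixel in degrees of longitude
    // at the equator (256px Mercator tiles, so 360 / (256 * 2^zoom) degrees
    // per pixel). The 0.5 factor is a tuning choice, not gospel.
    function toleranceForZoom(zoom: number): number {
      return 0.5 * 360 / (256 * Math.pow(2, zoom));
    }

Precompute rdpSimplify(path, toleranceForZoom(z)) once per zoom band,
encode each result with encodePath(), and serve whichever one matches
the client's current zoom.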
