Edge caches exist all over the place to put content nearer the user, as do peering agreements, which let content pass over private networks between peered partners. Comcast has these agreements with its users' most popular destinations because it makes the end-user experience better and saves Comcast money. Google has a lot of peering agreements as well.
IPv6 adds multicast, so you could send data to multiple users in various locations from a single stream. None of these things reduces YOUR bandwidth cost, but GAE has multiple points of presence and benefits from Google's peering agreements. This article is about using GAE's "optimized" bandwidth to offload the strain images put on a web server, and leveraging those points of presence to improve QoS for users:

Improve any Website's Performance through Google AppEngine Cloud Computing:
http://www.blackwaterops.com/drakaal/services/any-site-even-wordpress-on-google-appengine-we-put-your-site-in-the-cloud/2009/08/

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of ERans
Sent: Sunday, September 27, 2009 10:33 PM
To: Google App Engine
Subject: [google-appengine] Bandwidth Reduction

I was reading about logistics and remembered someone saying that YouTube used more bandwidth than the rest of the Internet. Would it even be possible to optimize the way that data is routed so that you wouldn't have to send the same file to the same place twice? Or does App Engine do this already as part of cloud computing? Like shifting the file storage closer to where it's needed?

I'm fairly new to programming and brand-new to cloud computing, but I'm very interested and curious about how it all works!
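As a concrete sketch of the image-offloading idea above: in App Engine, declaring a static-file handler in app.yaml makes Google serve those files directly from its own frontends, so image requests never touch your application code or your origin server. The application ID and directory name here are assumptions for illustration:

```yaml
# app.yaml -- minimal sketch of a static image handler (names are hypothetical)
application: your-app-id
version: 1
runtime: python
api_version: 1

handlers:
# Requests to /images/... are served from the uploaded images/ directory
# by Google's infrastructure, not by your app or your own server.
- url: /images
  static_dir: images
```

Your site would then reference images by their appspot URL (e.g. http://your-app-id.appspot.com/images/logo.png), so that traffic rides Google's network and points of presence instead of your own bandwidth.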
