I would expect Facebook to distribute its memcached instances
geographically, along with all of its other app and web servers.

Have you tried running a traceroute to their servers to see whether
there is an identifiable bottleneck between you and them?

In our case, Facebook is 14 hops away over a fast cable modem
connection on the US East Coast.

Response times balloon at hops 11-14 to over 100 ms on average.  If
you see the same route, or end-point hops with similarly high latency,
this might be the issue:

11  FACEBOOK-INC.TenGigabitEthernet6-2.ar1.PAO2.gblx.net
(67.17.162.38)  95.380 ms  95.452 ms  110.825 ms
12  ae0.bb01.pao1.tfbnw.net (74.119.76.132)  111.174 ms  111.260 ms  111.108 ms
13  ae5.br02.snc1.tfbnw.net (74.119.76.141)  110.847 ms
ae5.br01.snc1.tfbnw.net (74.119.76.139)  110.763 ms
ae5.br02.snc1.tfbnw.net (74.119.76.141)  110.832 ms
14  eth-17-17.csw01b.snc2.tfbnw.net (204.15.23.197)  116.560 ms
eth-18-1.csw01b.snc2.tfbnw.net (204.15.21.125)  116.446 ms  116.315 ms
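As a rough illustration (not part of the original post), you can flag the slow hops in traceroute output programmatically by averaging each hop's reported RTTs. The sample below reuses hop 12 from the output above; the fast hop 10 line and its timings are made up for contrast, and the 100 ms threshold is an arbitrary choice:

```python
import re

def slow_hops(traceroute_output, threshold_ms=100.0):
    """Return (hop_line, avg_rtt_ms) pairs whose average RTT exceeds threshold_ms."""
    results = []
    for line in traceroute_output.splitlines():
        # traceroute prints each probe's RTT as e.g. "111.174 ms"
        rtts = [float(m) for m in re.findall(r'(\d+\.\d+) ms', line)]
        if rtts:
            avg = sum(rtts) / len(rtts)
            if avg > threshold_ms:
                results.append((line.strip(), avg))
    return results

# Hop 12 is real (from the trace above); hop 10 and its RTTs are invented.
sample = """\
10  te0-0-0.example.gblx.net (203.0.113.10)  20.1 ms  19.8 ms  20.4 ms
12  ae0.bb01.pao1.tfbnw.net (74.119.76.132)  111.174 ms  111.260 ms  111.108 ms
"""

for hop, avg in slow_hops(sample):
    print(f"{avg:.1f} ms avg: {hop}")
```

Continuation lines (a hop load-balanced across multiple routers, as at hops 13 and 14 above) are handled naturally, since each physical line is averaged on its own.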

On Wed, Jan 6, 2010 at 7:02 PM, Martin Bay <[email protected]> wrote:
> How come sites like Facebook do not place memcached servers around
> the world with live, updated copies of their primary memcached servers?
> From Europe the Facebook website is EXTREMELY slow.
>

Reply via email to