Also forwarding George's message. The original thread used a wrong address
for tor-dev, so their messages never made it to the tor-dev list...

George Kargiotakis said:
> On Fri, 20 Dec 2013 11:58:27 -0500
> and...@torproject.org wrote:
>
> > On Fri, Dec 20, 2013 at 03:08:01AM -0800, desnac...@riseup.net wrote
> > 1.7K bytes in 0 lines about:
> > : For this reason we started wondering whether DNS-round-robin-like
> > : scalability is actually worth such trouble. AFAIK most big websites
> > : use DNS round-robin, but is it necessary? What about application-layer
> > : solutions like HAProxy? Do application-layer load balancing solutions
> > : exist for other (stateful) protocols (IRC, XMPP, etc.)?
> >
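
For reference, HAProxy can balance arbitrary TCP streams, not just HTTP,
so stateful protocols such as IRC or XMPP can also be spread across
workers. A minimal sketch, with invented addresses and ports:

    defaults
        mode tcp
        timeout connect 5s
        timeout client  1m
        timeout server  1m

    frontend irc_in
        bind 127.0.0.1:6667
        default_backend irc_workers

    backend irc_workers
        balance leastconn                     # send new connections to the
        server worker1 10.0.0.11:6667 check   # least-loaded worker
        server worker2 10.0.0.12:6667 check

Shared session state between workers (e.g. IRC channel membership) is a
separate application-level problem that no TCP balancer solves by itself.
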
> > In my experience in running large websites and services, we didn't use
> > DNS round-robin. If large sites do it themselves, versus outsourcing
> > it to a content delivery network, they look into anycast, geoip-based
> > proxy servers, or load balancing proxy servers (3DNS/BigIP,
> > NetScaler, etc.). DNS round-robin is for smaller websites which want to
> > simply spread the load across redundant servers--this is what tor
> > does now.
> >
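
For context, DNS round-robin in its simplest form is just several A
records for one name, and resolvers rotate through them. An illustrative
zone fragment (the name and addresses are placeholders):

    www.example.com.   300  IN  A  192.0.2.10
    www.example.com.   300  IN  A  192.0.2.11
    www.example.com.   300  IN  A  192.0.2.12
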
> > If scaling hidden services is going to be a large challenge and
> > consume a lot of time, it sounds like making HS work more reliably
> > and with stronger crypto is a better return on effort. The simple
> > answer for scaling has been to copy around the private/public keys
> > and host the same HS descriptors on multiple machines. I'm not sure
> > we have seen a popular enough hidden service to warrant the need for
> > massive scaling now.
> >
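
The key-copying approach boils down to replicating the directory that
HiddenServiceDir points at (it holds the service's private_key and
hostname) onto each machine, so every tor instance publishes descriptors
for the same address. A rough sketch, with assumed paths:

    # copy the key material to a second host; tor is strict about the
    # permissions on this directory, so preserve them
    rsync -a /var/lib/tor/hidden_service/ otherhost:/var/lib/tor/hidden_service/

Each host then lists the same HiddenServiceDir in its torrc. With the
current design the published descriptors overwrite one another rather
than coexisting, so clients tend to reach whichever host published most
recently; it behaves more like fail-over than load balancing.
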
> > Maybe changing HAProxy to support .onion links is a fine option too.
> >
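
A related arrangement needs no .onion awareness in HAProxy at all: the
tor instance that terminates the hidden service hands connections to a
local HAProxy frontend, which fans out to the real workers. A sketch of
the torrc side (paths and ports are made up):

    HiddenServiceDir /var/lib/tor/hidden_service/
    HiddenServicePort 80 127.0.0.1:8080   # 8080 is a local HAProxy frontend

HAProxy on 127.0.0.1:8080 then balances across the backend web servers;
the single tor+HAProxy box remains a SPOF, which is the concern George
raises below.
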
>
> Hello all,
>
> >> For a while we've been told that "hidden services don't scale" and
> >> "there is a max number of clients that a hidden service can handle"
> >> so we decided to also consider hidden service scalability as part of
> >> the upcoming redesign. Unfortunately, we are not experienced in
> >> maintaining busy hidden services so we need some help here.
>
> To solve a problem you need to define it strictly first. Where exactly
> is the bottleneck here? I've never run a .onion that "couldn't scale"
> because of many clients visiting, so I don't have first-hand
> experience with such issues. If the problem is that it's slow to open
> many connections to hidden services, then imho simply adding an
> .onion-aware HAProxy/varnish won't solve it in the long run. There
> will come a time when one HAProxy/varnish isn't enough, and it will
> always be a SPOF.
>
> Most big websites do geoip (to distribute the load between DCs in
> different regions), then they do something like HAProxy/LVS to
> spread the load across multiple workers in the same DC, and of course
> they put static files on CDNs.
>
> Each of the above serves quite a different purpose: geoip reduces
> latency, LVS/HAProxy provides load balancing and graceful fail-over,
> and CDNs do both at the same time but only for certain types of
> requests (mostly static content).
>
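
For readers who have not used LVS: it is kernel-level, layer-4
balancing, usually configured with ipvsadm on a director box. A minimal
sketch with invented addresses:

    # one virtual service, two real servers behind it, NAT forwarding,
    # weighted-least-connection scheduling
    ipvsadm -A -t 203.0.113.10:80 -s wlc
    ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.11:80 -m
    ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.12:80 -m
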
> Since geoip does not make sense in the Tor world, maybe making
> multiple hosts advertise the same .onion address at the same time in
> the database would make some sense. If that were possible, people
> could also implement .onion CDN services. I'm not so sure what can be
> done for an LVS-like setup in the Tor world though.
>
> I hope this helps a tiny bit.
>
> Regards,
> --
> George Kargiotakis
> https://void.gr
> GPG KeyID: 0x897C03177011E02C

