@Les, you make a clear and concise point. Thanks.

In this thread, I'm really keen on exploring a theoretical possibility
(one that could become very practical for very large installations):

    -- At what node count (for a given pool) might we start to
experience performance problems (server, network, or even client),
assuming a near-perfect hardware/network set-up?
    -- If a memcached client were to pool, say, 2,000 or 20,000
connections (again, theoretical but not entirely impractical given the rate
of internet growth), would that not inject enough overhead -- connection or
otherwise -- on the client side to, say, warrant a direct fetch from the
database? In such a case, we would have established a *theoretical* maximum
number of nodes in a pool for that given client under near-perfect conditions.
    -- Also, I would think the hashing algorithm would deteriorate after a
given number of nodes. Admittedly, that number could be very large indeed,
and I know this is unlikely in probably 99.999% of cases, but it would be
great to factor in the maths behind the science (see the rough sketch below).
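
To put a very rough number on the hashing point: with a ketama-style
consistent hash (the approach most memcached clients take), a key lookup is
just a binary search over a sorted ring of hash points, so it only grows
logarithmically with node count. Below is a minimal Python sketch -- my own
illustration with made-up node addresses and virtual-node counts, not any
particular client's implementation -- that builds such a ring at 16, 2,000,
and 20,000 nodes and reports the approximate lookup cost plus how many keys
move when one node is added:

    import bisect
    import hashlib
    import math

    def _hash(s):
        # 32-bit hash point derived from MD5, as in ketama-style rings
        return int(hashlib.md5(s.encode()).hexdigest()[:8], 16)

    def build_ring(nodes, vnodes=100):
        # Each server gets `vnodes` points on the ring to even out distribution
        ring = sorted((_hash("%s-%d" % (n, i)), n)
                      for n in nodes for i in range(vnodes))
        points = [p for p, _ in ring]
        return points, ring

    def lookup(points, ring, key):
        # Binary search: cost is O(log(nodes * vnodes)) regardless of pool size
        idx = bisect.bisect(points, _hash(key)) % len(ring)
        return ring[idx][1]

    if __name__ == "__main__":
        keys = ["user:%d" % i for i in range(20000)]

        for count in (16, 2000, 20000):
            nodes = ["10.0.%d.%d:11211" % (i // 256, i % 256) for i in range(count)]
            points, ring = build_ring(nodes)
            before = [lookup(points, ring, k) for k in keys]

            # Add one node and count how many keys remap (ideally ~1/count of them)
            points2, ring2 = build_ring(nodes + ["10.1.0.1:11211"])
            moved = sum(1 for k, old in zip(keys, before)
                        if lookup(points2, ring2, k) != old)

            print("%6d nodes: ring size %8d, ~%4.1f comparisons per lookup, "
                  "%.2f%% of keys moved on node add"
                  % (count, len(ring), math.log2(len(ring)),
                     100.0 * moved / len(keys)))

If I've got that right, the hash layer itself holds up fine even at tens of
thousands of nodes; the practical ceiling seems far more likely to come from
per-connection overhead and multi-get fan-out on the client side, which is
exactly the second bullet above.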

Just saying....

-m.

On 26 November 2011 18:28, Les Mikesell <[email protected]> wrote:

> On Sat, Nov 26, 2011 at 7:15 AM, Arjen van der Meijden <[email protected]>
> wrote:
> > Wouldn't more servers become increasingly (seen from the application)
> slower
> > as you force your clients to connect to more servers?
> >
> > Assuming all machines have enough processing power and network bandwidth,
> > I'd expect performance of the last of these variants to be best:
> > 16x  1GB machines
> >  8x  2GB machines
> >  4x  4GB machines
> >  2x  8GB machines
> >  1x 16GB machines
> >
> > In the first one you may end up with 16 different tcp/ip-connections per
> > client. Obviously, connection pooling and proxies can alleviate some of
> that
> > overhead. Still, a multi-get might actually hit all 16 servers.
>
> That doesn't make sense.  Why would you expect 16 servers acting in
> parallel to be slower than a single server?  And in many/most cases
> the application will also be spread over multiple servers so the load
> is distributed independently there as well.
>
> --
>   Les Mikesell
>     [email protected]
>
