Wow, nice. I think we have a lot to look at, guys.
I'll get back to you as soon as we have more metrics to share regarding
this matter.
Basically, we are going to try adding more proxies since, indeed, the
requests are too small (20K, not 20MB).

Thanks, guys!
-----------
Alejandrito

On Wed, Oct 24, 2012 at 5:49 PM, John Dickinson <m...@not.mn> wrote:

> Smaller requests, of course, will have a higher percentage overhead for
> each request, so you will need more proxies for many small requests than
> the same number of larger requests (all other factors being equal).
>
> If most of the requests are reads, then you probably won't have to worry
> about keystone keeping up.
>
> You may want to look at tuning the object server config variable
> "keep_cache_size". This variable is the maximum size of an object to keep
> in the buffer cache for publicly requested objects. So if you tuned it to
> be 20K (20480 bytes)--by default it is 5424880--you should be able to serve
> most of your requests without needing to do a disk seek, assuming you have
> enough RAM on the object servers. Note that background processes on the
> object servers end up using the cache for storing the filesystem inodes, so
> lots of RAM will be a very good thing in your use case. Of course, the
> usefulness of this caching is dependent on how frequently a given object is
> accessed. You may consider an external caching system (anything from
> varnish or squid to a CDN provider) if the direct public access becomes too
> expensive.
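>
> For reference, that knob lives in the object server's configuration. A
> minimal sketch of the relevant section (the value shown is just the default
> mentioned above; size it to at least the largest object you want served
> without a disk seek):
>
>     [app:object-server]
>     use = egg:swift#object
>     # Largest object size (in bytes) eligible to stay in the kernel
>     # buffer cache when the object is publicly requested.
>     keep_cache_size = 5424880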
>
> One other factor to consider is that since swift stores 3 replicas of the
> data, there are 3 servers that can serve a request for a given object,
> regardless of how many storage nodes you have. This means that if all 3500
> req/sec are to the same object, only 3 object servers are handling that.
> However, if the 3500 req/sec are spread over many objects, the full cluster
> will be utilized. Some of us have talked about how to improve swift's
> performance for concurrent access to a single object, but those
> improvements have not been coded yet.
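>
> To see the replica placement concretely, here is a rough sketch (the ring
> path and the account/container/object names are placeholders) that asks the
> object ring which primary nodes hold a given object; it always comes back
> with one node per replica (3 here), no matter how many storage nodes the
> cluster has:
>
>     from swift.common.ring import Ring
>
>     # Load the object ring (adjust the path for your install).
>     ring = Ring('/etc/swift/object.ring.gz')
>
>     # A single object maps to one partition held by 3 primary nodes,
>     # so every GET for that object lands on one of these 3 servers.
>     part, nodes = ring.get_nodes('AUTH_myaccount', 'mycontainer', 'myobject')
>     for node in nodes:
>         print('%s:%s/%s' % (node['ip'], node['port'], node['device']))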
>
> --John
>
>
>
> On Oct 24, 2012, at 1:20 PM, Alejandro Comisario <
> alejandro.comisa...@mercadolibre.com> wrote:
>
> > Thanks Josh, and thanks John.
> > I know it was an exciting Summit! Congrats to everyone!
> >
> > John, let me give you some extra data, and restate something I've already
> > said that might be wrong.
> >
> > First, the request sizes that will make up the 90,000 - 200,000 RPM will
> > be 90% 20K objects and 10% 150-200K objects.
> > Second, all the "GET" requests are going to be "public", configured
> > through ACLs, so if the GET requests are public (no X-Auth-Token is
> > passed), why should I be worried about the keystone middleware?
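> >
> > (For context, the kind of ACL I mean is a world-readable container; a
> > rough sketch with python-swiftclient, where the storage URL, token and
> > container name are placeholders:
> >
> >     from swiftclient import client
> >
> >     storage_url = 'https://swift.example.com/v1/AUTH_myaccount'
> >     token = '<token obtained from keystone for this one-off change>'
> >
> >     # '.r:*' lets anyone GET objects in the container without a token.
> >     client.post_container(storage_url, token, 'mycontainer',
> >                           headers={'X-Container-Read': '.r:*'})
> >
> > The change itself is authenticated; only the subsequent reads are
> > anonymous.)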
> >
> > Just to clarify, because I really want to understand what my real
> > metrics are, so I know where to tune in case I need to.
> > Thanks!
> >
> > ---
> > Alejandrito
> >
> >
> > On Wed, Oct 24, 2012 at 3:28 PM, John Dickinson <m...@not.mn> wrote:
> > Sorry for the delay. You've got an interesting problem, and we were all
> > quite busy last week with the summit.
> >
> > First, the standard caveat: Your performance is going to be highly
> > dependent on your particular workload and your particular hardware
> > deployment. 3500 req/sec in two different deployments may be very
> > different based on the size of the requests, the spread of the data
> > requested, and the type of requests. Your experience may vary, etc, etc.
> >
> > However, for an attempt to answer your question...
> >
> > 6 proxies for 3500 req/sec doesn't sound unreasonable. It's in line with
> > other numbers I've seen from people and what I've seen from other large
> > scale deployments. You are basically looking at about 600 req/sec/proxy.
> >
> > My first concern is not the swift workload, but how keystone handles the
> > authentication of the tokens. A quick glance at the keystone source seems
> > to indicate that keystone's auth_token middleware is using a standard
> > memcached module that may not play well with concurrent connections in
> > eventlet. Specifically, sockets cannot be reused concurrently by different
> > greenthreads. You may find that the token validation in the auth_token
> > middleware fails under any sort of load. This would need to be verified by
> > your testing or an examination of the memcache module being used. An
> > alternative would be to look at the way swift implements its memcache
> > connections in an eventlet-friendly way (see
> > swift/common/memcache.py:_get_conns() in the swift codebase).
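> >
> > For flavor, a rough sketch of that eventlet-friendly idea (simplified,
> > not Swift's actual code, and with error handling omitted): keep a small
> > pool of sockets per memcached server, so a connection is never shared by
> > two greenthreads at the same time:
> >
> >     from eventlet.green import socket
> >     from eventlet.pools import Pool
> >
> >     class MemcacheConnPool(Pool):
> >         """One pool per memcached server; greenthreads check sockets
> >         out with get() and return them with put()."""
> >
> >         def __init__(self, server, size=2):
> >             self.host, port = server.split(':')
> >             self.port = int(port)
> >             Pool.__init__(self, max_size=size)
> >
> >         def create(self):
> >             # Called by the pool whenever it needs a new connection.
> >             sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
> >             sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
> >             sock.connect((self.host, self.port))
> >             return sock
> >
> >     # Usage (assumes a local memcached; the address is a placeholder):
> >     pool = MemcacheConnPool('127.0.0.1:11211')
> >     conn = pool.get()
> >     try:
> >         conn.sendall(b'version\r\n')
> >         print(conn.recv(1024).strip())
> >     finally:
> >         pool.put(conn)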
> >
> > --John
> >
> >
> >
> > On Oct 11, 2012, at 4:28 PM, Alejandro Comisario <
> > alejandro.comisa...@mercadolibre.com> wrote:
> >
> > > Hi Stackers!
> > > This is the thing: today we have 24 datanodes (3 copies, 90TB usable);
> > > each datanode has 2 Intel hexa-core CPUs with HT and 96GB of RAM, and 6
> > > proxies with the same hardware configuration, using swift 1.4.8 with
> > > keystone.
> > > Regarding the networking, each proxy / datanode has a dual 1Gb NIC,
> > > bonded in LACP mode 4, and each of the proxies is behind an F5 BigIP
> > > Load Balancer (so, no worries over there).
> > >
> > > Today, we are receiving 5000 RPM (requests per minute), with 660 RPM per
> > > proxy. I know it's low, but now, with a new product migration coming soon
> > > (really soon), we are expecting to receive a total of about 90,000 RPM on
> > > average (1500 req/s) with weekly peaks of 200,000 RPM (3500 req/s) to the
> > > swift API, which will be 90% public GETs (no keystone auth) and 10%
> > > authorized PUTs (keystone in the middle; worth knowing that we have a
> > > pool of 10 keystone VMs connected to a 5-node Galera MySQL cluster, so no
> > > worries there either).
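> > >
> > > (Back of the envelope, at the peak: 3500 req/s / 6 proxies is roughly 580
> > > req/s per proxy, and at ~20K per request that is about 580 * 20 KB, i.e.
> > > around 11-12 MB/s or ~90-95 Mb/s per proxy, far below the ~2 Gb/s of the
> > > bonded NICs, so the network side should not be the limit.)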
> > >
> > > So, 3500 req/s divided by 6 proxy nodes doesn't sound like too much, but
> > > well, it's a number that we can't ignore.
> > > What do you think about these numbers? Do these 6 proxies sound good, or
> > > should we double or triple the proxies? Does anyone handle this volume of
> > > requests and can share their configs?
> > >
> > > Thanks a lot, hoping to hear from you guys!
> > >
> > > -----
> > > alejandrito
> >
> >
>
>
_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
