Or you can just hire Michele to solve all your scaling problems! :)

On Tue, Apr 17, 2012 at 5:40 PM, Michele Comitini <
[email protected]> wrote:

> Richard you are right.  We should make a web2py slice out of this.  I
> keep repeating the same things from time to time, I suppose it's the
> age... ;-)
>
> Anyway I put a star on the thread so one day I will write it up.
>
> Here are 10 system configuration rules I follow for high scalability
> with web2py on multicore machines:
> 1. use processes, not threads, or use pypy
> 2. use an event-based http frontend
> 3. use UWSGI or SCGI or FCGI
> 4. use HTTP keepalive
> 5. treat static content as static, i.e. keep it out of the views
> 6. reduce the number and size of static files, i.e. use sprites, pack js
> and css, and aggregate them into one file
> 7. support gzip encoding for files > 1KB
> 8. put the database(s) on a different machine(s), or double the cores,
> i.e. for each request have 2 cores at hand
> 9. cache dynamic content leveraging the web2py cache
> 10. use the DAL pooling machinery, i.e. db=DAL(..., pool_size=1)
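
A minimal sketch of the idea behind rule 9. In web2py itself you would call the built-in `cache.ram(key, f, time_expire)`; this stand-alone stand-in (names are illustrative, not web2py's internals) just shows the compute-once, serve-from-memory behaviour:

```python
import time

# Tiny stand-in for the idea behind web2py's cache.ram (rule 9):
# compute a value once, then serve it from memory until it expires.
_store = {}

def cache_ram(key, func, time_expire=300):
    """Return the cached value for key, recomputing via func()
    when the entry is missing or older than time_expire seconds."""
    now = time.time()
    entry = _store.get(key)
    if entry is None or now - entry[0] > time_expire:
        entry = (now, func())
        _store[key] = entry
    return entry[1]

calls = []

def expensive_page():
    calls.append(1)          # track how often we actually recompute
    return "<html>rendered</html>"

first = cache_ram("index", expensive_page, time_expire=300)
second = cache_ram("index", expensive_page, time_expire=300)
# the second call is served from the cache; expensive_page ran only once
```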
>
> The pool_size parameter is very important to reduce response time;
> use an adequate value.
> The above pool_size=1 using processes seems weird, but it is not: each
> request is handled by a process that needs only a single connection,
> no more.
> Using threads, pool_size needs a value >= the maximum number of
> threads in a web2py instance.
> The total number of persistent connections to the db is conn_n =
> proc_num x pool_size (with proc_num = number of processes). So if you
> have 20 processes (proc_num=20) with pool_size=1:
>
> conn_n = 20 x 1 = 20
>
> If you have 20 threads (proc_num = 1) with pool_size=20:
>
> conn_n = 1 x 20 = 20
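
The arithmetic above, as a quick check:

```python
# Total persistent db connections, as described above:
# conn_n = proc_num x pool_size
def conn_n(proc_num, pool_size):
    return proc_num * pool_size

# 20 processes, pool_size=1 (process-per-request setup)
twenty_procs = conn_n(20, 1)
# 1 process with 20 threads, pool_size=20
twenty_threads = conn_n(1, 20)
# both setups hold 20 persistent connections to the database
```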
>
>
> mic
>
> On April 17, 2012 21:00, Richard Vézina <[email protected]>
> wrote:
> > Michele,
> >
> > I read this thread; not sure if it does not already exist... But it
> > starts to look like a recipe for speed tuning that could be
> > translated into a web2py slice or blog post :)
> >
> > Richard
> >
> >
> > On Tue, Apr 17, 2012 at 2:32 PM, Michele Comitini
> > <[email protected]> wrote:
> >>
> >> What I suggest is: use nginx + uwsgi or nginx + scgi or nginx + fcgi.
> >> You will notice a big reduction in resource consumption compared to
> >> Apache.
> >> Apache can be taken away.  By the way, you can use any of the above
> >> protocols to do balancing over a pool of machines running web2py, and
> >> in front you can put nginx as the balancer.
> >>
> >> About caching, what is important is using the *static* dir as much
> >> as possible and having the files underneath served by nginx directly.
> >> You must check that expiration times on static objects are correctly
> >> set.  Also use sprites as much as you can.  You will enjoy a big
> >> improvement because after downloading the objects the first time, a
> >> client browser will make requests only for the dynamically generated
> >> contents.  Other objects will be taken from the on-disk cache of the
> >> browser. If an object is expired in the browser cache, but the content
> >> has not changed on the server, the request will result only in a 304
> >> answer, so little data is exchanged and little computation is required.
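
A minimal sketch of the 304 revalidation described above, assuming an ETag derived from the file contents (function names are illustrative; nginx does this for you on static files):

```python
import hashlib

def make_etag(body):
    # Strong ETag derived from the content; any stable hash works.
    return '"%s"' % hashlib.md5(body).hexdigest()

def respond(body, if_none_match=None):
    """Return (status, payload): 304 with an empty body when the client's
    cached copy is still current, 200 with the full body otherwise."""
    etag = make_etag(body)
    if if_none_match == etag:
        return 304, b""            # only headers go over the wire
    return 200, body

css = b"body { color: black; }"
status1, payload1 = respond(css)                  # first visit: full download
status2, payload2 = respond(css, make_etag(css))  # revalidation: 304, no body
```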
> >>
> >> mic
> >>
> >>
> >> On April 17, 2012 19:10, Bruce Wade <[email protected]> wrote:
> >> > Currently I just have 1 server running Apache mod_wsgi with the same
> >> > configuration as Pyramid. However, I just got approved for a few
> >> > grand a month to spend on server resources, so I am looking at load
> >> > balancers. And I will put nginx in front of Apache, and also start
> >> > using a lot more caching.
> >> >
> >> >
> >> > On Tue, Apr 17, 2012 at 5:15 AM, Michele Comitini
> >> > <[email protected]> wrote:
> >> >>
> >> >> One more thing: make css and js packed + server-side gzipped (nginx
> >> >> and cherokee can also do gzip caching)
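
The server-side gzip idea (and the >1KB threshold from rule 7 earlier in the thread) can be sketched like this; in practice nginx or cherokee does this for you, and the function name and threshold here are illustrative:

```python
import gzip

def maybe_gzip(body, accept_encoding="", threshold=1024):
    """Gzip the response body only when the client accepts it and the
    payload is large enough for compression to pay off."""
    if "gzip" in accept_encoding and len(body) > threshold:
        return gzip.compress(body), "gzip"
    return body, "identity"

small = b"x" * 100                    # under 1KB: not worth compressing
big = b"/* css */ " * 500             # ~5KB of repetitive text

out_small, enc_small = maybe_gzip(small, "gzip, deflate")
out_big, enc_big = maybe_gzip(big, "gzip, deflate")
# the big payload compresses well because it is repetitive
```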
> >> >>
> >> >> mic
> >> >>
> >> >> On April 17, 2012 14:12, Michele Comitini
> >> >> <[email protected]> wrote:
> >> >> > If you are on PostgreSQL, use a process-per-request setup; you
> >> >> > will have a great benefit.  Use cherokee or nginx (with keepalive
> >> >> > working) and you will scale smoothly.
> >> >> >
> >> >> > Check that you do as much as possible of a page in a single http
> >> >> > request (i.e. limit ajax loads).  Use only one cacheable css and
> >> >> > limit the number of scripts, or aggregate them into a cacheable
> >> >> > file.
> >> >> > Check that everything that is cacheable indeed gets cached (use
> >> >> > firebug or chrome dev tools to find out).
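
The aggregation advice above can be sketched like this (the output file name is hypothetical); deriving the name from a content hash means the packed file can be cached with a far-future expiry, since the name changes whenever the content does:

```python
import hashlib

def pack(sources):
    """Concatenate several js (or css) sources into one payload and
    derive a cache-busting file name from the content hash."""
    blob = "\n".join(sources).encode("utf-8")
    digest = hashlib.md5(blob).hexdigest()[:8]
    return "all.%s.js" % digest, blob

name, blob = pack(["function a(){}", "function b(){}"])
# one request instead of many; rename only when the content changes
name2, _ = pack(["function a(){}", "function b(){}"])
```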
> >> >> >
> >> >> > mic
> >> >> >
> >> >> >
> >> >> > On April 17, 2012 14:07, Michele Comitini
> >> >> > <[email protected]> wrote:
> >> >> >> What is your architecture?  What do you use as frontend http
> server?
> >> >> >> What protocol: SCGI, UWSGI, FCGI...?
> >> >> >> Are you in a thread per request or process per request setup?
> >> >> >>
> >> >> >> mic
> >> >> >>
> >> >> >>
> >> >> >> On April 17, 2012 08:36, Bruce Wade <[email protected]>
> >> >> >> wrote:
> >> >> >>> Yes, you are correct; plus there were 10,000+ requests a second
> >> >> >>> just hitting the site, so I think I really need a load balancer.
> >> >> >>> We are getting on average 500-1000 new members a day.
> >> >> >>>
> >> >> >>> On Apr 16, 2012 10:59 PM, "pbreit" <[email protected]> wrote:
> >> >> >>>>
> >> >> >>>> Don't forget you probably spent quite a bit of time tuning your
> >> >> >>>> Pyramid app.
> >> >> >>>>
> >> >> >>>> The best ways to scale are:
> >> >> >>>> 1) Cache
> >> >> >>>> 2) Cache
> >> >> >>>> 3) Cache
> >> >> >>>>
> >> >> >>>> Web2py makes caching queries super easy.
> >> >> >>>>
> >> >> >>>> If you are serving a lot of static assets, check out Cloudflare
> >> >> >>>> for a free CDN.
> >> >
> >> >
> >> >
> >> >
> >> > --
> >> > --
> >> > Regards,
> >> > Bruce Wade
> >> > http://ca.linkedin.com/in/brucelwade
> >> > http://www.wadecybertech.com
> >> > http://www.fittraineronline.com - Fitness Personal Trainers Online
> >> > http://www.warplydesigned.com
> >> >
> >
> >
>



-- 

Bruno Rocha
[http://rochacbruno.com.br]
