Hi,

If you reload the Python interpreter for each CGI request, the JIT has absolutely no chance to warm up. You need some sort of FastCGI solution where the interpreter stays in memory between requests.
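To keep the process (and the JIT's compiled traces) alive across requests, one common route is to expose the app as WSGI behind a FastCGI front end instead of plain CGI. Here is a minimal sketch, assuming the flup package is installed and the front-end web server is configured to forward FastCGI traffic to port 9000 (both of those are assumptions on my part, not something from your setup):

    # app.py -- started once; the same PyPy process then serves every
    # request, so the JIT gets a chance to warm up on the hot paths.
    from flup.server.fcgi import WSGIServer

    def application(environ, start_response):
        # Placeholder handler; the real CDMI request handling would go here.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['hello\n']

    if __name__ == '__main__':
        # Listen on a TCP socket; nginx/Apache proxies FastCGI requests
        # here instead of spawning a fresh interpreter per request.
        WSGIServer(application, bindAddress=('127.0.0.1', 9000)).run()

With that in place you should re-run the benchmark after sending a few hundred warm-up requests, since the first requests are still paying JIT compilation cost.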
On Wed, May 23, 2012 at 10:20 AM, Sasikanth Eda <[email protected]> wrote:
>
> Hi All,
>
> I have tried performance testing of a typical CGI-based REST server
> implementing the SNIA CDMI specification, and the results are as follows:
>
>                          Python     PyPy
> 16 KB (Object) PUT       196 ms     325 ms
> 64 KB (Object) PUT       176 ms     396 ms
> 1 MB  (Object) PUT       248 ms     927 ms
> Container PUT            121 ms     317 ms
>
> In each scenario Python is faster than PyPy.
>
> I request your inputs/suggestions on techniques for optimizing such
> workloads.
>
> Kindly share your experiences and feedback on this scenario.
>
> --
> Thanking you,
> Sasikanth
