Hi Álvaro, thank you for the Summit, it was great to meet the people
involved in the project. :-)
I reviewed the code last night; it's awfully
undocumented, but at least it's Python :-)
To do real and useful measurements/integration I think we would
need two or even three clean machines connected with the fastest
possible network (gigabit ethernet or better). Then we could write
some kind of script that detects any new version and triggers the benchmarks.
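Such a watcher could be as simple as polling the repository for a new revision and re-running the pipeline when one appears. A rough sketch, where the repository URL, the polling interval, and the way easybench.py is invoked are all placeholders rather than anything in the current code:

```python
import re
import subprocess
import time

CHEROKEE_SVN = "http://svn.example.org/cherokee/trunk"  # placeholder URL

def parse_revision(svn_info_output):
    """Extract the revision number from `svn info` output."""
    match = re.search(r"^Revision:\s*(\d+)", svn_info_output, re.MULTILINE)
    return int(match.group(1)) if match else None

def latest_revision():
    """Ask the repository for the current head revision."""
    out = subprocess.check_output(["svn", "info", CHEROKEE_SVN], text=True)
    return parse_revision(out)

def watch(interval=600):
    """Poll every `interval` seconds; re-run the benchmarks on a new commit."""
    last = None
    while True:
        rev = latest_revision()
        if rev is not None and rev != last:
            subprocess.call(["python", "easybench.py"])  # run the whole pipeline
            last = rev
        time.sleep(interval)
```

The same loop could just as well be driven from cron; the only real requirement is remembering the last revision that was benchmarked.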
The whole benchmarking process could be improved, but I think we at
least have a good starting point.
(I know it isn't the most realistic usage scenario, but it shows the
server's behaviour.)
The output of the benchmarks, right now, is built on JSON data, which
is then visualized with an interactive HTML+JS file. But we could also
build some .png graphs to share on the website, or send to the
development list once built.
The output process currently works like this:

easybench.py --(runs the benchmark against the server
deployment and writes)--> out_xx.py
out_xx.py --(is read by)--> result_writer.py
--(writes)--> data.json
test.html --(reads data.json and renders)--> graphs

(note: test.html is attached to this email)
While easybench.py is collecting data, it launches XML-RPC
queries to the server deployment (see
http://code.google.com/p/easybench/source/browse/trunk/server-deployment/rpcbenchmark.py
)
The server deployment (launched with ./go.py) automatically
builds every available server at startup (from settings.py) and starts
the RPC server, which listens for:
    * start(server_name, config): kills the currently running
server/config and launches the new server_name and config template
    * stop(server_name): kills the requested server
    * stop_all(): kills all the running servers
    * getmem(server_name): gets the server's memory usage (I don't
know whether it's in kB, MB, ...)
    * running_server(): returns the name of the currently running server.
    * get_servers(): returns a list of the available servers that can
be run in the server deployment.
    * get_cpuinfo(): returns /proc/cpuinfo, just for statistical
purposes; not used at the moment
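For reference, a benchmark run could drive that interface with a plain XML-RPC client. Only the method names above come from the server-deployment code; the host/port, the config name, and the helper function itself are made up for illustration:

```python
from xmlrpc.client import ServerProxy  # xmlrpclib in the Python 2 of the era

def sample_all_servers(rpc, config="default"):
    """Start every available server in turn and record its memory usage.

    `rpc` is any object exposing the server-deployment methods
    (get_servers, start, getmem, stop_all).
    """
    usage = {}
    for name in rpc.get_servers():
        rpc.start(name, config)         # kills the previous server, starts this one
        usage[name] = rpc.getmem(name)  # units still unconfirmed (kB? MB?)
    rpc.stop_all()                      # leave the machine clean afterwards
    return usage

# Typical use (address is a placeholder):
# rpc = ServerProxy("http://benchmark-host:8080/")
# print(sample_all_servers(rpc))
```

Passing the proxy in as an argument also makes the helper easy to exercise against a fake object, without a live deployment.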
Problems I found:
    * The client operating system, or the client itself, could be a
problem: crashing the operating system, exceeding limits, or... abperf,
for example, doesn't make use of keep-alive.
    * The server machine sometimes crashes with some servers (mostly
because of the operating system). We should find a stable
architecture or, at least, a way to reboot the machine; or
maybe it isn't a problem if we're testing only Cherokee.
    * The VM cpu/tick approach, which could be the cleanest, probably
wouldn't be that realistic, because we would be discarding I/O times,
which are also an area where servers can be faster (when they produce
less I/O).
    * If we don't go with an "exactly constrained" VM approach,
then... the day we change the testing machines, most results will change.
(We could live with that.)
Miguel Angel Ajo Pelayo
http://www.nbee.es
+34 636 52 25 69
skype: ajoajoajo
2010/5/10 Alvaro Lopez Ortega <[email protected]>
>
> On 09/05/2010, at 22:24, Miguel Angel wrote:
>
> > Here it's the benchmarking tool I worked on a "few" months ago.
> >
> > http://code.google.com/p/easybench/
> >
> > It could be used to track performance changes between different versions of
> > cherokee, which could be quite useful (it would need something like
> > measuring cpu ticks in a virtual machine, as Stephan suggested).
> >
> > As can be read on the project page, it's made of two parts: the server
> > and the client (I tested it on standalone machines, and most of my problems
> > came from operating system limits on the client machine -wow-).
> >
> > These are the settings for building/running/testing different servers from
> > sources:
> > http://code.google.com/p/easybench/source/browse/trunk/server-deployment/settings.py
> >
> > And those are the settings for the tested servers:
> > http://code.google.com/p/easybench/source/browse/#svn/trunk/server-deployment/conf_templates/cherokee-0.99.20%3Fstate%3Dclosed
> >
> > It was prepared to get information about requests, failed requests and
> > memory usage on server, but I think the last version (dumping the final
> > results) was lost from my laptop disk crash... , I'll try to check the
> > backups.
>
> Thanks a million for the effort, and for letting us know about it at the
> Cherokee Summit.
>
> I'd love to figure out how we could integrate this into some sort of build
> system so Cherokee could be automatically tested for each new release.
>
> --
> Octality
> http://www.octality.com/
>
_______________________________________________
Cherokee mailing list
[email protected]
http://lists.octality.com/listinfo/cherokee