Hi John,
Are we wedded to using Apache and CherryPy together, as opposed to one
or the other? I don't see a reason we can't run CherryPy free-standing;
in my experience it's a much lighter-weight web server to configure.
From the last few days of data, it looks like free-standing CherryPy
would simplify our deployment while remaining reasonably fast. Perhaps
I'm simply missing the justification for Apache?
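For concreteness, a free-standing manifest service can be just a
handful of lines. With CherryPy it would be a cherrypy.quickstart()
call; the sketch below uses the stdlib wsgiref server instead so it
runs with no dependencies, and the port number and manifest names are
placeholders rather than anything from the design doc:

```python
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # Placeholder data: the real service would query the AI database
    # for the manifest names instead of using a hard-coded list.
    manifests = ["default.xml", "static_network.xml"]
    body = "\n".join(manifests).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    # A free-standing server needs no Apache configuration at all:
    # one process, one (hypothetical) port.
    with make_server("", 5555, application) as httpd:
        httpd.serve_forever()
```

The same application callable would also work unchanged under
mod_wsgi, so trying both deployments is cheap.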
Thank you,
Clay
On Fri, 9 Apr 2010, John Fischer wrote:
Dave,
On the performance question: I created two different scripts. Both
open a connection to the AI database and retrieve the manifest names.
The mod_wsgi/cherrypy method is clearly faster. Here are the numbers
obtained via 'time wget URL' for 1, 100, and 1000 iterations:
apache/mod_wsgi/cherrypy, 1 iteration:
    real    0m0.152s
    user    0m0.012s
    sys     0m0.034s

apache/mod_wsgi/cherrypy, 100 iterations:
    real    0m14.515s
    user    0m1.364s
    sys     0m3.440s

apache/mod_wsgi/cherrypy, 1000 iterations:
    real    2m34.528s
    user    0m17.528s
    sys     0m41.101s

apache/cgi-bin, 1 iteration:
    real    0m1.065s
    user    0m0.013s
    sys     0m0.034s

apache/cgi-bin, 100 iterations:
    real    0m57.964s
    user    0m1.361s
    sys     0m3.648s

apache/cgi-bin, 1000 iterations:
    real    9m30.514s
    user    0m17.601s
    sys     0m43.046s
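For anyone who wants to repeat the measurement, the loop below
approximates what I did; urllib stands in for wget so the sketch is
self-contained, and the URL shown is hypothetical:

```python
import time
import urllib.request

def time_requests(url, iterations):
    """Fetch url `iterations` times; return total elapsed seconds."""
    start = time.monotonic()
    for _ in range(iterations):
        with urllib.request.urlopen(url) as resp:
            resp.read()
    return time.monotonic() - start

if __name__ == "__main__":
    for n in (1, 100, 1000):
        elapsed = time_requests("http://localhost:5555/manifests", n)
        print("%4d iterations: %.3fs" % (n, elapsed))
```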
I did not check how each method behaves under simultaneous
connections. Essentially, the cgi-bin script is roughly four times
slower than the mod_wsgi/cherrypy script.
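The factor comes straight from the 1000-iteration real times:

```python
# Real (wall-clock) times for 1000 iterations, in seconds.
wsgi = 2 * 60 + 34.528   # apache/mod_wsgi/cherrypy: 2m34.528s
cgi = 9 * 60 + 30.514    # apache/cgi-bin: 9m30.514s
print(round(cgi / wsgi, 2))  # → 3.69
```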
What is your opinion on the differences? Is this significant enough
to stick with the mod_wsgi/cherrypy solution?
Thanks,
John
On 04/ 6/10 11:36 AM, Dave Miner wrote:
On 04/ 6/10 12:07 PM, John Fischer wrote:
All,
Here is the first draft of the AI Webserver design document.
Thanks to Ethan, Dave and Clay for their help with the ideas and
design thus far.
Feedback appreciated,
Feedback below. Apologies for not noticing a couple of these issues in
earlier discussion.
Dave
1. Purpose
Could be a bit more verbose about the goals here, which would seem to be
scalability, reliability and maintainability of the server. There's
nothing said here about the need to improve SMF's ability to detect and
recover from failures in the service, for example, but there should be.
2. Assumptions & Dependencies
- I would really expect to see some measurements of the overhead for the
cgi-bin approach before asserting it's insufficient. As I've noted in the
past, the criteria resolution seems to be a very minor piece of the puzzle
in terms of overall boot and installation performance in AI.
- Overall, I'm somewhat less concerned with up-front implementation cost
and more concerned with long-term support cost, both in terms of the
amount/complexity of code and the operational complexity (and thus
reliability) of the solution once deployed. If re-writing is better from
that point of view, then it should be considered rather than being assumed
out of scope.
- Not sure we can make any assumptions about what the apache group would
do. It seems reasonable to assert that any additional modules might be
packaged completely independently from apache (indeed, I'd think that
preferable) and so the dependency seems soft at best.
3. Solution
The Overview subsection asserts that mod_python will be used, but
Considerations states that ModWSGI is the best solution. The Components
section also asserts mod_python.
Also in Overview, I don't understand why the choice of port number
affects the number of properties on the services.
ModProxy is discussed twice in Considerations.
Considerations asserts the cost of cgi without supporting data. We're
using cgi to serve up other, larger hunks of the network boot environment,
so why is it inappropriate here?
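For reference, the script body of a cgi-bin handler is trivial; the
cost usually attributed to CGI is forking a fresh interpreter for every
request, which is what any supporting measurements should isolate. A
stdlib-only sketch with placeholder data, not the actual AI script:

```python
#!/usr/bin/python
import sys

def respond(out=sys.stdout):
    # Placeholder data: the real script would query the AI database.
    manifests = ["default.xml", "static_network.xml"]
    # CGI contract: write headers, a blank line, then the body.
    out.write("Content-Type: text/plain\r\n\r\n")
    out.write("\n".join(manifests) + "\n")

if __name__ == "__main__":
    respond()
```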
A consideration that seems missing is whether & how we would secure the
manifest traffic in a future fully-secure WAN installation environment.
The wanboot infrastructure provides for authentication and encryption,
though we presently do not configure it in AI. However, this is a likely
future enhancement and I'd like to understand how your proposed solution
would integrate when we do.
Constraints - I don't see the need to differentiate between OpenSolaris
and Solaris here.
SMF AI server service - what's the justification for retaining the port
number and log file on each service? Do we need this flexibility?
AI Client - I see some dissonance here. You're proposing a modification to
the client (hence making an incompatible change to the protocol?) but have
bent over backwards on the server to provide compatibility with existing
services. But if we don't back-port the protocol change to the older
clients (which doesn't seem to be proposed), how will that work? If a
back-port is required, then why bother with the server compatibility?
_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss