One aspect I raised in some of my comments that still needs to be addressed is how we would handle a secure network boot and install. The Apache-based wanboot implementation we are leveraging supported this for S10, and we anticipate needing to meet that requirement with AI as well.

Dave

On 04/12/10 05:05 PM, John Fischer wrote:
Clay,

Good question.  We are not wedded to using Apache and CherryPy.  In fact,
I think it would be good to consolidate on a single webserver, be it
Apache or CherryPy.

Last time we talked about configuring Apache with CherryPy as a back
end, I thought we agreed that CherryPy alone would cover 80% of our
customer base.  Is that true?  In fact, if I recall correctly, you were
going to blog about how to set up CherryPy as a back end to Apache for
those that needed a larger deployment.
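For anyone trying that split in the meantime, the Apache side is typically just a small mod_proxy stanza along these lines (the URL path and backend port here are illustrative assumptions, not our actual configuration):

```apache
# Illustrative only: forward AI manifest requests to a free-standing
# CherryPy backend on localhost:8080 (path and port are assumptions).
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

ProxyPass        /manifests http://127.0.0.1:8080/manifests
ProxyPassReverse /manifests http://127.0.0.1:8080/manifests
```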

The other issue that I see with the CherryPy-only solution is that we,
the install team, must maintain and support it, whereas this is not true
of Apache.  Although the company as a whole would still support Apache,
those resources are already available and committed.

Finally, there are far more administrators who are familiar with Apache
than with CherryPy.  This should allow our customers to solve many of
their own issues without calling for help.

CherryPy does have enough performance for us, but then again so does
Apache.

Additional thoughts?

John


On 04/12/10 11:17 AM, [email protected] wrote:
Hi John,
     Are we wedded to using Apache and CherryPy, as opposed to one or
the other?  I don't see a reason we can't use CherryPy free-standing, as
it's a much lighter-weight webserver to configure, in my experience.
 From the last few days of data, it looks like CherryPy free-standing
would simplify our deployment and be reasonably quick?  Perhaps I'm
simply missing the justification for Apache?
                             Thank you,
                             Clay
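To make "free-standing" concrete: the deployment Clay describes amounts to only a few lines. CherryPy may not be installed everywhere, so this sketch stands it in with the stdlib's wsgiref; the handler, path, and manifest name are hypothetical, not the actual AI service:

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server

# Hypothetical stand-in for the AI manifest handler; a free-standing
# CherryPy deployment has the same overall shape (one WSGI callable
# served directly, no front-end webserver required).
def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"default.xml\n"]

if __name__ == "__main__":
    # Bind an ephemeral port, serve one request to ourselves, and exit.
    with make_server("127.0.0.1", 0, application) as httpd:
        port = httpd.server_address[1]
        threading.Thread(target=httpd.handle_request).start()
        reply = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
        print(reply.decode().strip())  # -> default.xml
```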

On Fri, 9 Apr 2010, John Fischer wrote:

Dave,

On the performance question: I created two different scripts.  Both
open a connection to the AI database and get the manifest names from
the database.  The mod_wsgi/cherrypy method is clearly faster.  Here
are the numbers obtained via 'time wget URL' for 1, 100, and 1000
iterations:

    apache/mod_wsgi/cherrypy 1
    real    0m0.152s
    user    0m0.012s
    sys    0m0.034s

    apache/mod_wsgi/cherrypy 100
    real    0m14.515s
    user    0m1.364s
    sys    0m3.440s

    apache/mod_wsgi/cherrypy 1000
    real    2m34.528s
    user    0m17.528s
    sys    0m41.101s

    apache/cgi-bin 1
    real    0m1.065s
    user    0m0.013s
    sys    0m0.034s

    apache/cgi-bin 100
    real    0m57.964s
    user    0m1.361s
    sys    0m3.648s

    apache/cgi-bin 1000
    real    9m30.514s
    user    0m17.601s
    sys    0m43.046s

I did not check to see what simultaneous connections would be
like for each method.  Essentially the cgi-bin script is four times
slower than the mod_wsgi/cherrypy script.
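For anyone wanting to reproduce a comparable measurement without the AI server itself, the per-request database work each script performed looked roughly like this (the schema and manifest names are invented stand-ins; the real numbers above additionally include HTTP dispatch overhead, which is where CGI's process-per-request cost shows up):

```python
import sqlite3
import time

# Invented stand-in for the AI database: a single table of manifest names.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE manifests (name TEXT)")
db.executemany("INSERT INTO manifests VALUES (?)",
               [("default.xml",), ("web-server.xml",)])
db.commit()

def fetch_manifest_names(conn):
    """What each test script did per request: query the manifest names."""
    return [row[0] for row in conn.execute("SELECT name FROM manifests")]

iterations = 1000
start = time.perf_counter()
for _ in range(iterations):
    fetch_manifest_names(db)
elapsed = time.perf_counter() - start
print(f"{iterations} iterations in {elapsed:.3f}s "
      f"({elapsed / iterations * 1e6:.1f} us/query)")
```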

What is your opinion on the differences?  Is this significant enough
to stick with the mod_wsgi/cherrypy solution?

Thanks,

John

On 04/06/10 11:36 AM, Dave Miner wrote:
On 04/06/10 12:07 PM, John Fischer wrote:
All,

Here is the first draft of the AI Webserver design document.
Thanks to Ethan, Dave, and Clay for their help with the ideas,
thoughts, and design thus far.

Feedback appreciated,


Feedback below.  Apologies for not noticing a couple of these issues
in earlier discussion.

Dave

1. Purpose
Could be a bit more verbose about the goals here, which would seem
to be the scalability, reliability, and maintainability of the
server.  There's nothing said here about the need to improve SMF's
ability to detect and recover from failures in the service, for
example, but there should be.

2. Assumptions & Dependencies
- I would really expect to see some measurements of the overhead for
the cgi-bin approach before asserting it's insufficient.  As I've
noted in the past, the criteria resolution seems to be a very minor
piece of the puzzle in terms of overall boot and installation
performance in AI.

- Overall, I'm somewhat less concerned with up-front implementation
cost and more concerned with long-term support cost, both in terms
of the amount/complexity of code and the operational complexity (and
thus reliability) of the solution once deployed.  If re-writing is
better from that point of view, then it should be considered rather
than being assumed out of scope.

- Not sure we can make any assumptions about what the apache group
would do.  It seems reasonable to assert that any additional modules
might be packaged completely independently from apache (indeed, I'd
think that preferable) and so the dependency seems soft at best.

3. Solution

The overview subsection asserts that mod_python will be used, but
then in Considerations ModWSGI is stated to be the best solution.
Components also asserts mod_python.

Also in Overview, I don't understand why the choice of port number
affects the number of properties on the services.

ModProxy is discussed twice in Considerations.

Considerations asserts the cost of cgi without supporting data.
We're using cgi to serve up other, larger hunks of the network boot
environment, so why is it inappropriate here?

A consideration that seems missing is whether & how we would secure
the manifest traffic in a future fully-secure WAN installation
environment. The wanboot infrastructure provides for authentication
and encryption, though we presently do not configure it in AI.
However, this is a likely future enhancement and I'd like to
understand how your proposed solution would integrate when we do.
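As one possible shape for that integration (purely illustrative; the certificate paths, and the assumption that Apache's mod_ssl rather than wanboot's own transport would carry the traffic, are mine, not a proposal):

```apache
# Hypothetical mod_ssl stanza for serving manifests over HTTPS;
# all paths and the client-certificate policy are assumptions.
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile    /path/to/server.crt
    SSLCertificateKeyFile /path/to/server.key
    # Require client certificates, analogous to wanboot's
    # client authentication:
    SSLVerifyClient require
    SSLCACertificateFile  /path/to/ca.crt
</VirtualHost>
```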

Constraints - I don't see the need to differentiate between
OpenSolaris and Solaris here.

SMF AI server service - what's the justification for retaining the
port number and log file on each service?  Do we need this flexibility?

AI Client - I see some dissonance here.  You're proposing a
modification to the client (hence making an incompatible change to
the protocol?) but have bent over backwards on the server to provide
compatibility with existing services.  But if we don't back-port the
protocol change to the older clients (which doesn't seem to be
proposed), how will that work? If a back-port is required, then why
bother with the server compatibility?

_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss


