Further notes:

>> 2.f.: Could use discussion of goals for how multiple servers might
>> interact, and how to scale the server infrastructure.  Will a
>> transport be expected to work correctly with content caching
>> infrastructures such as squid?  What would a high-availability server
>> environment look like?  Are we relying merely on clustering for HA?

> We discussed caching proxies like squid.  The discussion is documented at
> http://www.opensolaris.org/os/project/caiman/auto_install/ai_design/transport_options.pdf.
That's a fairly incomplete discussion, as it focuses only on the case
of a "slow server", which is not really the most likely problem.  More
likely is "low-bandwidth or high-latency networks", which will be an
environment that the transport needs to operate in independent of any
other choices here.  Caching proxies are presently an essential element
in addressing those environments.

> We haven't discussed high-availability environments.  We will have a
> discussion and update the spec.

>> 5.c: Are all transports required to provide secure options?

> We looked at the security options that can be supported in the
> transport.  It should be noted that security is optional.  If the
> transport supports security, we will provide a means to enable it.  I
> will update the spec to indicate that it is optional and depends on
> the transport.

>> 5.f: There seem to be requirements here to enhance the TFTP service.
>> Should discuss with networking team.

> That is a good idea.  Who is the contact in the networking team?

See who the RM is for tftpd bugs and start there.

>> A use case for archive-based installation seems worth adding.

> You mean replication?  I can add a use case based on existing flash
> technology without any of the implementation details of flash.

Yes, I meant replication.

>> Finally, I'm wondering how this relates to a "serverless" automated
>> installation, which is what the Virtual Machine Constructor seems to
>> be requiring.  Is there a sort of "null" transport that would be used
>> there?  Just trying to sort out how the architecture is anticipated
>> to adapt to that case.

> In our use cases, the client asks for specific data and the server
> provides the data.  In the case of self-contained AI, the client will
> not use any transport, so I think it should work.  We still have to
> think about how the client gets the AI manifest.

Well, one model is that it doesn't use any transport; another model is
that you implement a sort of "null" transport which takes file: URLs
or something like that.  One nice thing about the null transport model
is that you can use it for a simplified testing environment to exercise
all the other pieces more completely, and it further tests the
generality of what you've done.  I'd suggest thinking about that a bit
more.

Dave
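P.S.  To make the "null" transport idea a bit more concrete, here is a
rough Python sketch.  It is purely illustrative -- the class and
function names and the example URLs are mine, not anything in the AI
code -- but it shows one way an HTTP transport and a file:-based "null"
transport could share a single interface.  The HTTP path can sit behind
a caching proxy such as squid and can use https: for the optional
security; the file: path covers the self-contained case and doubles as
a cheap test harness for the rest of the machinery.

    import shutil
    import urllib.request
    from urllib.parse import urlparse


    class Transport:
        """Abstract interface: fetch one resource to a local path."""

        def fetch(self, url, dest):
            raise NotImplementedError


    class HTTPTransport(Transport):
        """HTTP(S) transport.  urllib honors the http_proxy/https_proxy
        environment variables, so fetches can pass through a caching
        proxy such as squid on low-bandwidth or high-latency networks;
        an https: URL gives the optional, transport-dependent
        security."""

        def fetch(self, url, dest):
            with urllib.request.urlopen(url) as resp, \
                    open(dest, "wb") as out:
                shutil.copyfileobj(resp, out)


    class NullTransport(Transport):
        """Null transport for self-contained (serverless) installs and
        for simplified testing: resolves file: URLs against local
        media."""

        def fetch(self, url, dest):
            shutil.copyfile(urlparse(url).path, dest)


    def transport_for(url):
        """Pick a transport implementation from the URL scheme."""
        scheme = urlparse(url).scheme
        if scheme in ("http", "https"):
            return HTTPTransport()
        if scheme == "file":
            return NullTransport()
        raise ValueError("unsupported scheme: %s" % scheme)


    if __name__ == "__main__":
        # Networked vs. serverless install: same call, different URL.
        for url in ("http://aiserver.example.com/manifest.xml",
                    "file:///media/ai/manifest.xml"):
            print(url, "->", type(transport_for(url)).__name__)

The point is less the code than the shape: if fetching the AI manifest
and the payload always goes through one small interface, then the
serverless case, the proxied case, and a test rig all fall out of the
same design.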