On Thu, Nov 27, 2008 at 02:21:27PM -0500, Wyllys Ingersoll wrote:
> Edward Pilatowicz wrote:
>> i have some concerns about the zones integration proposed by this case.
>>
>> in general, we recommend that applications be deployed within zones
>> since that provides an easy way to contain application configuration.
>> with this proposal, if an administrator deploys an application in a zone
>> which tries to use the TSS library, then they have to modify the
>> tcsd.conf file in the global zone. this breaks up the application
>> configuration across multiple zones and impacts zone migration, thereby
>> eliminating the application encapsulation benefits that zones provide.
>>
>> this proposal also assumes that any zones which want to use the TSS
>> library functionality will have network access to the global zone. this
>> is an invalid assumption. exclusive ip stack zones allow for zones with
>> network configurations which are completely independent of the global
>> zone. depending on the deployment, the global zone may not even be
>> accessible via the network.
>>
>> i also don't see any discussion of security issues wrt allowing remote
>> access to the TSS daemon. once an admin decides to enable remote host
>> access via tcsd.conf, how is authentication and authorisation of remote
>> clients handled?
>
> By default, network access is disabled. Authentication of sensitive
> commands is defined in the TSS specifications; the remote application
> would have to know the correct passphrases (which are encrypted) to be
> able to perform sensitive operations. The authentication and
> authorization is handled by the TCS daemon according to the
> specifications and the settings in the tcsd.conf file.

ah. ok. i looked briefly through the TPM spec to try and determine the
device/daemon interfaces, but unfortunately there's a whole lot of spec
out there that this case covers. ;) good to hear this functionality is
present. (although having authentication still doesn't address the
portability and network configuration limitations of the current
implementation.)
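
for reference, the remote-access knobs being discussed live in tcsd.conf
and look roughly like the following. i'm going from memory of the
TrouSerS sample config here, so treat the exact option names and
defaults as illustrative rather than as what will actually ship:

  # tcp port that tcsd listens on (the trousers default is 30003)
  port = 30003

  # comma-separated list of TCS operations that remote hosts are allowed
  # to request; leaving it empty (the default) effectively disables
  # remote access
  remote_ops =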

>> looking at the tcsd.conf file, it seems that a zone administrator might
>> want to tune these configuration parameters locally within a zone
>> depending on the behaviour of the TSS library consumers running within
>> the zone. that's not possible with this proposal.

> This project is basically proposing to putback the current open source
> TSS stack (TrouSerS v0.3.1) and the above design comes from TrouSerS,
> not from us. TrouSerS is currently being used on many Linux platforms
> that have TPM devices, all of which have the same configuration model.
>
> Deviating from the implementation in the ways you are proposing may be
> possible, but I would prefer to take that on as an RFE for later and
> work with the community to implement it in the original project code
> and not have to apply unique feature patches in our version.
>
> Presumably, this would involve adding new configurations that would
> allow for access to the TCS from a zone on the same system, but not
> from an external host.
>
> Our first priority is to get TrouSerS into Solaris. Improving it and
> making it take advantage of more of the unique Solaris features is more
> of a long term goal that I would be happy to work on once we have the
> first phase in place.

well, there could be multiple implementation options.

one option (which you seem to refer to above) would be to allow zones to
communicate with a tss daemon in the global zone via some other,
non-network-based transport (like, say, doors). as you mention, this
would require modifications to community code.

another option (and the first one that i'd recommend investigating)
would be to virtualize the TPM device driver. it sounds like you're
implementing the driver component yourselves (instead of delivering
community-developed driver code). if that's the case, then virtualizing
at this layer wouldn't involve any deviation from the community code.

>> it seems to me that each zone should have its own copy of the TSS
>> daemon with its own tcsd.conf configuration file. It should be the
>> responsibility of the TPM device driver to manage requests from
>> multiple zones, with any zone device-specific configuration being
>> managed via zonecfg. (that said, i don't have a complete understanding
>> of the TPM/TSS device interfaces, so there may be other, better ways
>> to virtualize this subsystem to work with zones, and hence i'd be more
>> than willing to discuss options to improve the zones integration of
>> this feature.)

> Allowing for multiple TCS daemons in multiple zones would violate the
> spec and would not be acceptable to the upstream development community
> for TrouSerS. The TSS specification specifically states that the TPM
> should only accept one connection at a time. The TSS daemon (tcsd) is
> also part of the specification and it is responsible for handling all
> direct access to the TPM.

well, i wasn't suggesting that you'd want to support multiple TCS
daemons in a single zone, just one in each zone.
this is why i initially recommended investigating virtualization at the
device layer, since presumably, if done correctly, each zone would
believe that it has exclusive access to the TPM device. could you point
me to the TPM device driver interfaces that will be delivered? (i'd be
interested in looking at them to determine the difficulty of
virtualizing them.)

> ... There are specifications for Virtualized TPM devices, but those
> are not yet implemented by our driver or the TSS libraries. I do
> believe working on the virtual TPM support is something we do plan to
> investigate in the future, though.

it's good to hear that the community is working on virtualized TPM
devices. are you at all involved in this process? i ask because zones
virtualization is different than other system/platform types of
virtualization, so there's no guarantee that their virtualization
approach will be applicable to and/or leverageable by zones.

i know that you just want to deliver the community tpm/tss code into
solaris, but i'd like to know what the long term architecture is for
supporting this functionality with zones (even if you plan to deliver
that functionality in a subsequent putback).

looking at this from a zones perspective, ideally we'd like non-global
zones to look exactly like the global zone. anything which can be done
in the global zone should be do-able in a non-global zone in the same
manner as in the global zone (assuming that the zone has been given
access to any physical resources necessary to do the requested
operation).

with that in mind, at a minimum, couldn't you deliver the userland stack
(daemon and libraries) into each zone, and then, if an admin wants to
deploy a tss-based application in a zone, they could simply disable the
tss daemon in the global zone, add /dev/tpm (or whatever) to the zonecfg
device spec, and enable the tss daemon within that zone? this would
allow any one zone on the system to use the tpm at a time, and it would
at least address the configuration containment and network configuration
limitations in the current proposal. (rough sketch below.)

ed
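
p.s. to make that last suggestion concrete, here's roughly the admin
workflow i have in mind. this is just a sketch: it assumes the driver
shows up as /dev/tpm and that tcsd gets delivered as an smf service (the
service name below is a guess, not something the case commits to):

  # in the global zone: stop the global tcsd and delegate the device
  global# svcadm disable tcsd
  global# zonecfg -z tpmzone
  zonecfg:tpmzone> add device
  zonecfg:tpmzone:device> set match=/dev/tpm
  zonecfg:tpmzone:device> end
  zonecfg:tpmzone> commit
  zonecfg:tpmzone> exit
  global# zoneadm -z tpmzone reboot

  # then, inside the zone, which now has its own local tcsd.conf:
  tpmzone# svcadm enable tcsd

obviously only one zone at a time can have the device delegated to it
this way, but all of the tss configuration stays inside the zone that is
actually using it.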
