Edward Pilatowicz wrote:
>
>> This project is basically proposing to putback the current open source
>> TSS stack (TrouSerS v0.3.1), and the above design comes from TrouSerS,
>> not from us. TrouSerS is currently being used on many Linux platforms
>> that have TPM devices, all of which have the same configuration model.
>>
>> Deviating from the implementation in the ways you are proposing may be
>> possible, but I would prefer to take that on as an RFE for later and
>> work with the community to implement it in the original project code,
>> rather than apply unique feature patches in our version.
>>
>> Presumably, this would involve adding new configurations that would
>> allow for access to the TCS from a zone on the same system, but not
>> from an external host.
>>
>> Our first priority is to get TrouSerS into Solaris. Improving it and
>> making it take advantage of more of the unique Solaris features is more
>> of a long-term goal that I would be happy to work on once we have the
>> first phase in place.
>>
>
> well, there could be multiple implementation options.
>
> one option (which you seem to refer to above) would be to allow zones
> to communicate with a tss daemon in the global zone via some other
> non-network-based transport (like, say, doors). as you mention, this
> would require modifications to community code.
>
> another option (and the first one that i'd recommend investigating)
> would be to virtualize the TPM device driver. it sounds like you're
> implementing the driver component yourself (instead of delivering
> community-developed driver code). if this is the case, then virtualizing
> at this layer wouldn't involve any deviation from the community code.
>
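(Before going further on the doors idea, here is a rough sketch of what
that transport might look like, purely for illustration: the attach
point /var/tpm/tcsd_door and the dispatch hook are made-up names, and a
real version would also need a marshalling format for TCS requests and
per-zone access checks.)

  /* Sketch of a doors-based TCS transport (illustrative only). */
  #include <sys/types.h>
  #include <door.h>
  #include <stropts.h>    /* fattach() */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  #define TCSD_DOOR "/var/tpm/tcsd_door"  /* hypothetical attach point */

  /*
   * Door server procedure, run in the global-zone tcsd for each
   * door_call(); argp/arg_size would carry a marshalled TCS request.
   */
  static void
  tcs_door_proc(void *cookie, char *argp, size_t arg_size,
      door_desc_t *dp, uint_t n_desc)
  {
          /* ... hand the request to the TCS dispatcher, build a reply ... */
          (void) door_return(argp, arg_size, NULL, 0);
  }

  int
  main(void)
  {
          int dfd, afd;

          if ((dfd = door_create(tcs_door_proc, NULL, 0)) < 0) {
                  perror("door_create");
                  return (1);
          }

          /*
           * Create the attach point and bind the door to it.  The file
           * would have to be made visible inside each non-global zone
           * (e.g. via a loopback mount).
           */
          if ((afd = open(TCSD_DOOR, O_CREAT | O_RDWR, 0600)) >= 0)
                  (void) close(afd);
          if (fattach(dfd, TCSD_DOOR) < 0) {
                  perror("fattach");
                  return (1);
          }

          for (;;)
                  (void) pause();  /* door threads service the calls */
          /* NOTREACHED */
  }

A zone-side client would then just open(2) the attach point and issue
door_call(3C), so no TCP port is ever exposed; but, as noted above, wiring
this into tcsd would mean carrying a Solaris-only patch against the
TrouSerS code.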
It's true that the TPM driver is not from open source or from an external
community. However, virtualization of the TPM is a concern of the TCG and
is being addressed by a working group within the TCG. I don't think there
has been a final spec delivered yet that addresses virtualization. In the
interest of getting basic TPM support into Solaris (an area in which we
are already way behind Microsoft and Linux), I would like to go forward
with our current plan and address virtualization later.

>>> it seems to me that each zone should have its own copy of the TSS
>>> daemon with its own tcsd.conf configuration file. It should be the
>>> responsibility of the TPM device driver to manage requests from
>>> multiple zones, with any zone-specific device configuration being
>>> managed via zonecfg. (that said, i don't have a complete understanding
>>> of the TPM/TSS device interfaces, so there may be other, better ways
>>> to virtualize this subsystem to work with zones, and hence i'd be more
>>> than willing to discuss options to improve the zones integration of
>>> this feature.)
>>>
>> Allowing for multiple TCS daemons in multiple zones would violate the
>> spec and would not be acceptable to the upstream development community
>> for TrouSerS. The TSS specification specifically states that the TPM
>> should only accept one connection at a time. The TSS daemon (tcsd) is
>> also part of the specification, and it is responsible for handling all
>> direct access to the TPM.
>>
>
> well, i wasn't suggesting that you'd want to support multiple TCS
> daemons in a single zone, just one in each zone. this is why i initially
> recommended investigating virtualization at the device layer, since
> presumably, if done correctly, each zone would believe that it has
> exclusive access to the TPM device.
>
> could you point me to the TPM device driver interfaces that will be
> delivered? (i'd be interested in looking at them to determine the
> difficulty of virtualizing them.)
>

Do you want a pointer to the source code or the specs?

>> ... There are specifications for Virtualized TPM devices, but those are
>> not yet implemented by our driver or the TSS libraries. I do believe
>> working on the virtual TPM support is something we do plan to
>> investigate in the future, though.
>>
>
> it's good to hear that the community is working on virtualized TPM
> devices. are you at all involved in this process? i ask because zones
> virtualization is different from other system/platform types of
> virtualization, so there's no guarantee that their virtualization
> approach will be applicable to and/or leverageable by zones.
>

I am active in the TCG but have not been following the virtualization
working groups. Perhaps once we have basic TPM support, I will start
following the virtualization WG and find out what direction they are
headed in.

> i know that you just want to deliver the community tpm/tss code into
> solaris, but i'd like to know what the long-term architecture is for
> supporting this functionality with zones (even if you plan to deliver
> that functionality in a subsequent putback). looking at this from a
> zones perspective, ideally we'd like non-global zones to look exactly
> like the global zone. anything which can be done in the global zone
> should be doable in the non-global zone, in the same manner as in the
> global zone (assuming that the zone has been given access to any
> required physical resources necessary to do the requested operation).
>
> with that in mind, at a minimum, couldn't you deliver the userland stack
> (daemon and libraries) into each zone, and then, if an admin wants to
> deploy a tss-based application in a zone, they could simply disable the
> tss daemon in the global zone, add /dev/tpm (or whatever) into the
> zonecfg device spec, and enable the tss daemon within that zone. this
> would allow one zone at a time on the system to use the tpm, and would
> at least address the configuration containment and network configuration
> limitations in the current proposal.
>

That might work. Obviously, there is no problem delivering the daemon and
libraries in all zones. As long as there is only one instance of the TPM
device in the kernel, and one reader/writer of that device in userland
(across all zones), I see no problem with having the tpm device located
in whatever zone the admin prefers.

-Wyllys
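
P.S. For what it's worth, the workflow you describe would look roughly
like the transcript below. The service name "tcsd" is a guess on my part
(whatever SMF name the daemon ends up with), and the zone name and device
path are just examples:

  # in the global zone: stop tcsd and delegate the device to one zone
  global# svcadm disable tcsd
  global# zonecfg -z myzone
  zonecfg:myzone> add device
  zonecfg:myzone:device> set match=/dev/tpm
  zonecfg:myzone:device> end
  zonecfg:myzone> commit
  zonecfg:myzone> exit
  global# zoneadm -z myzone reboot

  # inside the zone: run the tss daemon there instead
  myzone# svcadm enable tcsd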
