On 03/01/2017 09:17 AM, Marc-André Lureau wrote:
Hi
On Wed, Mar 1, 2017 at 5:26 PM Stefan Berger <stef...@us.ibm.com> wrote:
"Daniel P. Berrange" <berra...@redhat.com
<mailto:berra...@redhat.com>> wrote on 03/01/2017 07:54:14
AM:
>
> On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
> > On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
> > > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert wrote:
> > > > * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > <snip>
> > > >
> > > > > > So what was the multi-instance vTPM proxy driver patch set about?
> > > > > That's for containers.
> > > > Why have the two mechanisms? Can you explain how the multi-instance
> > > > proxy works; my brief reading when I saw your patch series seemed
> > > > to suggest it could be used instead of CUSE for the non-container case.
> > > One of the key things that was/is not appealing about this CUSE approach
> > > is that it basically invents a new ioctl() mechanism for talking to
> > > a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't need
> > > to have any changes at all - its existing driver for talking to TPM
> >
> > We still need the control channel with the vTPM to reset it upon VM reset,
> > for getting and setting the state of the vTPM upon snapshot/suspend/resume,
> > changing locality, etc.
>
> You ultimately need the same mechanisms if using in-kernel vTPM with
> containers as containers can support snapshot/suspend/resume/etc too.
The vTPM running on the backend side of the vTPM proxy driver is
essentially the same as the CUSE TPM used for QEMU. It has the same
control channel through sockets. So on that level we would have support
for the operations, but not integrated with anything that would support
container migration.
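For illustration, a client-side interaction with such a socket control
channel could look roughly like the sketch below. The command names,
codes and socket path are invented for the example (they are not
swtpm's actual wire protocol); they simply mirror the operations listed
above: reset on VM reset, get/set state for snapshot/suspend/resume,
and locality changes.

/*
 * Illustrative sketch only: a client talking to a vTPM control channel
 * over a UNIX socket.  Command names, codes and the socket path are
 * hypothetical, not swtpm's actual protocol.
 */
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/un.h>

enum ctrl_cmd {                    /* hypothetical command codes */
    CTRL_RESET        = 1,         /* reset the vTPM on VM reset       */
    CTRL_GET_STATE    = 2,         /* fetch state for snapshot/suspend */
    CTRL_SET_STATE    = 3,         /* restore state on resume          */
    CTRL_SET_LOCALITY = 4,         /* change the active locality       */
};

static int ctrl_connect(const char *path)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    if (fd < 0)
        return -1;
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

static int ctrl_send(int fd, enum ctrl_cmd cmd)
{
    uint32_t wire = htonl(cmd);    /* big-endian command code on the wire */

    return write(fd, &wire, sizeof(wire)) == sizeof(wire) ? 0 : -1;
}

int main(void)
{
    int fd = ctrl_connect("/tmp/vtpm-ctrl.sock");   /* hypothetical path */

    if (fd < 0 || ctrl_send(fd, CTRL_RESET) < 0) {
        perror("vtpm control channel");
        return 1;
    }
    close(fd);
    return 0;
}

The data channel (the TPM command/response traffic) stays separate from
this control connection.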
Ah, that might explain why you added the socket control channel, but
there is no user yet? (or some private product perhaps) Could you tell
whether the control and data channels need to be synchronized in any way?
In the general case, synchronization would have to happen, yes. So a
lock that is held while the TPM processes data would have to lock out
control channel commands that operate on the TPM data. That may be
missing. In the case of QEMU being the client, not much concurrency
would be expected there, simply because of the way QEMU interacts with it.
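A minimal sketch of that synchronization point, assuming one lock
shared by both paths (illustrative structure, not actual swtpm code):

#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

/* One mutex guards the TPM state for both channels. */
static pthread_mutex_t tpm_state_lock = PTHREAD_MUTEX_INITIALIZER;

/* Data channel: process one TPM command against the TPM state. */
void data_channel_process(const uint8_t *req, size_t req_len)
{
    pthread_mutex_lock(&tpm_state_lock);
    /* ... execute the TPM command ... */
    (void)req; (void)req_len;
    pthread_mutex_unlock(&tpm_state_lock);
}

/* Control channel: e.g. serialize state for a snapshot.  Taking the
 * same lock guarantees no TPM command is in flight while we do it. */
void ctrl_channel_get_state(void)
{
    pthread_mutex_lock(&tpm_state_lock);
    /* ... read out the TPM state ... */
    pthread_mutex_unlock(&tpm_state_lock);
}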
A detail: a corner case is live migration while the TPM emulation is
busy processing a command, like the creation of a key. In that case QEMU
would keep on running and only start streaming device state to the
recipient side after the TPM command processing finishes and has
returned the result. QEMU wouldn't want to get stuck in a lock between
the data and control channel, so it would have other means of
determining when the backend processing is done.
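On the QEMU side that could be as simple as waiting on a completion
notification from the backend, roughly like this (hypothetical names,
only to illustrate waiting for the result instead of taking the
backend's data/control lock):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t busy_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  busy_cond = PTHREAD_COND_INITIALIZER;
static bool tpm_cmd_in_flight;

/* Called by the TPM backend thread once the outstanding command has
 * returned its result. */
void tpm_backend_cmd_done(void)
{
    pthread_mutex_lock(&busy_lock);
    tpm_cmd_in_flight = false;
    pthread_cond_broadcast(&busy_cond);
    pthread_mutex_unlock(&busy_lock);
}

/* Called by the migration path before streaming TPM device state. */
void tpm_wait_for_idle(void)
{
    pthread_mutex_lock(&busy_lock);
    while (tpm_cmd_in_flight)
        pthread_cond_wait(&busy_cond, &busy_lock);
    pthread_mutex_unlock(&busy_lock);
}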
Getting back to the original out-of-process design: QEMU links with
many libraries already, so perhaps a less controversial approach would
be to have a linked-in solution before proposing out-of-process? This
would be easier to deal with for management layers etc.
I had already proposed a linked-in version before I went to the
out-of-process design. Anthony's concerns back then were that the code
was not trusted and that a segfault in it could bring down all of QEMU.
That we have test suites running over it didn't work as an argument.
Some of the test suites are private, though.
This wouldn't be the most robust solution, but it could get us
somewhere, at least for easier testing and development.
Hm. In terms of an external process, it's basically 'there', so I can't
relate to the 'easier testing and development' argument. The various
versions with the QEMU + CUSE TPM driver patches applied are here:
https://github.com/stefanberger/qemu-tpm/tree/v2.8.0+tpm
I have an older version of libvirt that has the necessary patches
applied to start QEMU with the external TPM. There's also virt-manager
support.
If CUSE is the wrong interface, then there's a discussion about this in
the issue below. Alternatively, UNIX socket I/O could be used for the
data and control channels.
https://github.com/stefanberger/swtpm/issues/4
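For the data channel over a UNIX socket, the client side would be
little more than writing the raw TPM command and reading back the
response; a minimal sketch (helper name and framing are assumptions for
the example, not an existing interface):

#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

/* Send one raw TPM command over the (already connected) data socket
 * and read back the TPM response.  Returns the response length or -1. */
static ssize_t tpm_transact(int fd, const uint8_t *cmd, size_t cmd_len,
                            uint8_t *resp, size_t resp_size)
{
    if (write(fd, cmd, cmd_len) != (ssize_t)cmd_len)
        return -1;
    return read(fd, resp, resp_size);
}

The control channel from the earlier sketch would simply use a second
socket of the same kind.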
Stefan