* Dan Yasny <dya...@redhat.com> [2012-10-15 03:41]:
> > > >>Dan.
> > > 
> > > At first glance, the standard sshd on the host is stronger and more
> > > robust than a custom ssh server, but the risk of using the host
> > > sshd is high. If we implement this feature via the host sshd, then
> > > a hacker who successfully attacks the sshd gets access to the host
> > > shell. The custom ssh server, after all, is not for accessing the
> > > host shell, but only for forwarding data from the guest console (a
> > > host /dev/pts/X device). If we use a custom ssh server instead, the
> > > code in that server only does 1. auth and 2. data forwarding, so a
> > > successful attacker only gains access to that one virtual machine.
> > > Note that the custom ssh server contains no code for logging in to
> > > the host, and it can be confined under selinux so that it can only
> > > access /dev/pts/X.
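
For illustration, a forwarding-only server of this kind could look roughly
like the sketch below. It is a sketch only: it uses paramiko as an example
library (not necessarily the one in the patch), and the listen port,
credentials and pty path are placeholders. The process authenticates, then
only shuttles bytes between the SSH channel and the guest console pty,
which is the single device an selinux policy would need to allow.

import os
import select
import socket
import threading

import paramiko


class ConsoleOnlyServer(paramiko.ServerInterface):
    """Accept one session channel; no exec, no sftp, no forwarding."""

    def check_auth_password(self, username, password):
        # Placeholder check -- a real service would ask vdsm/engine.
        if username == "console" and password == "secret":
            return paramiko.AUTH_SUCCESSFUL
        return paramiko.AUTH_FAILED

    def check_channel_request(self, kind, chanid):
        if kind == "session":
            return paramiko.OPEN_SUCCEEDED
        return paramiko.OPEN_FAILED_ADMINISTRATIVELY_PROHIBITED

    def check_channel_pty_request(self, channel, term, width, height,
                                  pixelwidth, pixelheight, modes):
        return True

    def check_channel_shell_request(self, channel):
        return True


def serve_console(client_sock, pty_path, host_key):
    transport = paramiko.Transport(client_sock)
    transport.add_server_key(host_key)
    transport.start_server(server=ConsoleOnlyServer())
    chan = transport.accept(timeout=30)
    if chan is None:
        transport.close()
        return
    # The guest console pty is the only file this process ever opens.
    pty_fd = os.open(pty_path, os.O_RDWR | os.O_NOCTTY)
    try:
        while True:
            readable, _, _ = select.select([chan, pty_fd], [], [])
            if chan in readable:
                data = chan.recv(4096)
                if not data:
                    break
                os.write(pty_fd, data)
            if pty_fd in readable:
                data = os.read(pty_fd, 4096)
                if not data:
                    break
                chan.sendall(data)
    finally:
        os.close(pty_fd)
        chan.close()
        transport.close()


if __name__ == "__main__":
    key = paramiko.RSAKey.generate(2048)   # throwaway host key for the demo
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 2222))
    listener.listen(5)
    while True:
        sock, _ = listener.accept()
        threading.Thread(target=serve_console,
                         args=(sock, "/dev/pts/3", key)).start()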
> > > 
> > > In fact, using a custom VNC server in qemu is as risky as using a
> > > custom ssh server in vdsm. If we accept the former, then I can
> > > accept the latter. The considerations are how robust the custom ssh
> > > server is, and how difficult it is to maintain. In He Jie's current
> > > patch, the ssh auth and transport library is an open-source
> > > third-party project; unless that project is well maintained and
> > > well proven, using it can be risky.
> > > 
> > > So my opinion is to use neither the host sshd nor a custom ssh
> > > server. Maybe we can apply the suggestion from Dan Yasny: run a
> > > standard sshd in a very small VM on every host, and forward data
> > > from this VM to the other guests' consoles. The ssh part lives in
> > > the VM, so our work is just forwarding data from the VM, via virtio
> > > serial channels, to each guest via its pty.
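
As a purely illustrative sketch of the plumbing this would imply: the
appliance VM could carry a virtio-serial channel in its libvirt domain
XML, exposed on the host as a unix socket (the channel name and socket
path here are made up):

<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/qemu/channels/console-appliance.sock'/>
  <target type='virtio' name='org.ovirt.console-forward.0'/>
</channel>

Inside the appliance such a channel shows up as
/dev/virtio-ports/org.ovirt.console-forward.0 (with the usual udev rules),
and a small host-side helper would still have to copy bytes between that
unix socket and each target guest's pty.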
> > 
> > I really dislike the idea of a service VM for something as
> > fundamental as a VM console.  The logistics of maintaining such a VM
> > are a nightmare: provisioning, deployment, software upgrades, HA,
> > etc.
> Why? It really sounds like an easy path to me - provisioning of a
> virtual appliance is supposed to be simple, and upgrades aren't really
> necessary - same as with ovirt-node, a few config files are preserved
> and the rest is simply replaced, and HA is taken care of by the
> platform.
> On the other hand, maintaining this on multiple hypervisors means they
> all have to be kept up to date, compliant and configured. Not to
> mention the security implications of maintaining an extra access point
> on lots of machines vs. a single virtual appliance VM. Bandwidth could
> be an issue, but I doubt serial console traffic would ever be that
> heavy, especially since it's there for admin access and not routine
> work.

So, we're replacing a single daemon with a complete operating system?
And somehow we'll ensure this service VM is connected to, and running
on, all of the networks across the various clusters and datacenters
within oVirt, so that it can provide a single point of failure for the
console of each VM?

> Am I missing a point here? 

Keep it simple.  We can re-use existing services that are already
present on all of the hosts:  virsh and ssh for remoting.  By re-using
existing services, there is no additional exposure.
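
For example (a sketch only; the user, key and domain name are
placeholders), a dedicated console account on the hypervisor can be
locked down with a forced command in its authorized_keys, so that the
login can do nothing except attach to one guest's console through virsh:

# ~consoleuser/.ssh/authorized_keys on the hypervisor (single line)
command="virsh console guest01",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA... admin@workstation

# client side: allocate a tty and you land on that guest's serial console
ssh -t consoleuser@hypervisor

This assumes the account is allowed to talk to libvirtd; the point is
simply that sshd handles the auth and virsh handles the console
attachment, with nothing new listening on the host.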

Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
