----- Original Message -----
> From: "Ryan Harper" <ry...@us.ibm.com>
> To: "Dan Yasny" <dya...@redhat.com>
> Cc: "Adam Litke" <a...@us.ibm.com>, "VDSM Project Development" 
> <vdsm-devel@lists.fedorahosted.org>
> Sent: Monday, 15 October, 2012 10:55:21 PM
> Subject: Re: [vdsm] [RFC]about the implement of text-based console
> 
> * Dan Yasny <dya...@redhat.com> [2012-10-15 03:41]:
> > 
> > 
> > > > >>Dan.
> > > > 
> > > > At first glance, the standard sshd on the host is stronger and
> > > > more robust than a custom ssh server, but the risk of using the
> > > > host sshd is high. If we implement this feature via the host
> > > > sshd, then when an attacker compromises sshd, he gains access to
> > > > the host shell. The custom ssh server, by contrast, is not for
> > > > accessing the host shell, but only for forwarding data from the
> > > > guest console (a host /dev/pts/X device). If we use a custom ssh
> > > > server, the code in that server only does 1. auth, 2. data
> > > > forwarding; when an attacker compromises it, he only gets access
> > > > to that one virtual machine. Notice that the custom ssh server
> > > > contains no code for logging in to the host, and it can be
> > > > confined by selinux so that it may only access /dev/pts/X.
> > > > 
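To illustrate how little the non-auth part of such a server would contain, here is a minimal sketch of the bidirectional relay loop, assuming the server has already authenticated the client and opened the guest's /dev/pts/X device; the function name and structure are mine, not taken from He Jie's patch:

```python
import os
import select

def relay(fd_a, fd_b, bufsize=4096):
    """Copy bytes in both directions between two file descriptors
    (e.g. an ssh channel and the guest console's /dev/pts/X)
    until either side reaches EOF."""
    peers = {fd_a: fd_b, fd_b: fd_a}  # map each fd to its peer
    while True:
        readable, _, _ = select.select(list(peers), [], [])
        for fd in readable:
            data = os.read(fd, bufsize)
            if not data:          # EOF on one side: stop relaying
                return
            os.write(peers[fd], data)
```

With selinux confining the process to that one pts device, compromising this loop exposes only the guest console, which is the point argued above.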
> > > > In fact, using a custom VNC server in qemu is as risky as a
> > > > custom ssh server in vdsm. If we accept the former, then I can
> > > > accept the latter. The considerations are how robust the custom
> > > > ssh server is, and how difficult it is to maintain. In He Jie's
> > > > current patch, the ssh auth and transport library is an
> > > > open-source third-party project; unless that project is well
> > > > maintained and well proven, using it can be risky.
> > > > 
> > > > So my opinion is to use neither the host sshd nor a custom ssh
> > > > server. Maybe we can apply the suggestion from Dan Yasny:
> > > > running a standard sshd in a very small VM on every host, and
> > > > forwarding data from this VM to the other guests' consoles. The
> > > > ssh part lives in the VM, so our work is just forwarding data
> > > > from the VM, via virtio serial channels, to the guest via the
> > > > pty.
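If that path were taken, each virtio serial channel could be declared in the appliance guest's libvirt XML along these lines (the socket path and channel name below are illustrative placeholders, not anything vdsm defines):

```xml
<!-- virtio-serial channel carrying console traffic between the host
     and the appliance VM; path and name are made-up examples -->
<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/qemu/console-proxy.sock'/>
  <target type='virtio' name='org.example.console.0'/>
</channel>
```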
> > > 
> > > I really dislike the idea of a service VM for something as
> > > fundamental as a VM
> > > console.  The logistics of maintaining such a VM are a nightmare:
> > > provisioning,
> > > deployment, software upgrades, HA, etc.
> > 
> > Why? It really sounds like an easy path to me - provisioning of a
> > virtual appliance is supposed to be simple, upgrades are not
> > necessary - same as with ovirt-node, a few config files are
> > preserved and the rest is simply replaced - and HA is taken care of
> > by the platform.
> > 
> > On the other hand, maintaining this on multiple hypervisors means
> > they should all be kept up to date, compliant and configured. Not
> > to mention the security implications of maintaining an extra access
> > point on lots of machines vs a single virtual appliance VM.
> > Bandwidth can be an issue, but I doubt serial console traffic can
> > be that heavy, especially since it's there for admin access and not
> > routine work.
> 
> So, we're replacing a single daemon with a complete operating
> system?

a daemon on all hosts vs a single VM. It looks to me like a single access point 
for consoles presents a smaller attack surface, especially when the virtual 
appliance comes pre-secured.

> which somehow we'll ensure the service VM is connected and running
> on all of the networks between the various clusters and datacenters
> within oVirt, so that it can provide a single point of failure to
> the console of each VM?

Well, I don't see an SPOF here - a VM can be set up HA, right? Moreover, since 
it doesn't really need to be powerful to push text consoles through, you can 
have one per DC, or even per cluster if you have too complex a network; a 
single appliance with a minimal amount of RAM and a single CPU should not be a 
problem.

Thinking about it, not every cluster even requires a serial console appliance. 
Normally you'd use it in clusters of Linux server VMs, but not deploy it with 
VDI and Windows VMs, I suppose.

> 
> > 
> > Am I missing a point here?
> 
> Keep it simple.  We can re-use existing services that are already
> present on all of the hosts: virsh and ssh for remoting.  By
> re-using existing services, there is no additional exposure.
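On that approach the existing pieces already compose; for example (the host and guest names below are placeholders), an admin could reach a guest's serial console over the host's stock sshd with:

```shell
# Tunnel libvirt over the host's existing sshd and attach to the
# guest's serial console; "kvmhost" and "guest01" are example names.
virsh -c qemu+ssh://root@kvmhost/system console guest01

# Equivalent two-step form: ssh to the host, then attach locally
# (-t allocates the tty that virsh console needs).
ssh -t root@kvmhost virsh console guest01
```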

I have always assumed opening ssh on all the hosts is something lots of 
organizations frown upon. I mean, I'm all for using sshd if we can't come up 
with something more elegant - it's a known and tested technology - but I have 
seen enough business demands that ssh be closed by default. And remote libvirt 
access is also something to be very careful with.

> 
> 
> 
> --
> Ryan Harper
> Software Engineer; Linux Technology Center
> IBM Corp., Austin, Tx
> ry...@us.ibm.com
> 
> 

-- 



Regards, 

Dan Yasny 
Red Hat Israel 
+972 9769 2280
_______________________________________________
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
