----- Original Message -----
> From: "Adam Litke" <a...@us.ibm.com>
> To: "Dan Yasny" <dya...@redhat.com>
> Cc: "VDSM Project Development" <vdsm-devel@lists.fedorahosted.org>, "Zhou 
> Zheng Sheng" <zhshz...@linux.vnet.ibm.com>
> Sent: Monday, 15 October, 2012 3:07:47 PM
> Subject: Re: [vdsm] [RFC]about the implement of text-based console
> 
> On Mon, Oct 15, 2012 at 04:40:00AM -0400, Dan Yasny wrote:
> > 
> > 
> > ----- Original Message -----
> > > From: "Adam Litke" <a...@us.ibm.com>
> > > To: "Zhou Zheng Sheng" <zhshz...@linux.vnet.ibm.com>
> > > Cc: "VDSM Project Development"
> > > <vdsm-devel@lists.fedorahosted.org>
> > > Sent: Friday, 12 October, 2012 3:10:57 PM
> > > Subject: Re: [vdsm] [RFC]about the implement of text-based
> > > console
> > > 
> > > On Fri, Oct 12, 2012 at 04:55:20PM +0800, Zhou Zheng Sheng wrote:
> > > > 
> > > > on 09/04/2012 22:19, Ryan Harper wrote:
> > > > >* Dan Kenigsberg <dan...@redhat.com> [2012-09-04 05:53]:
> > > > >>On Tue, Sep 04, 2012 at 03:05:37PM +0800, Xu He Jie wrote:
> > > > >>>On 09/03/2012 10:33 PM, Dan Kenigsberg wrote:
> > > > >>>>On Thu, Aug 30, 2012 at 04:26:31PM -0500, Adam Litke wrote:
> > > > >>>>>On Thu, Aug 30, 2012 at 11:32:02AM +0800, Xu He Jie wrote:
> > > > >>>>>>Hi,
> > > > >>>>>>
> > > > >>>>>>   I submited a patch for text-based console
> > > > >>>>>>http://gerrit.ovirt.org/#/c/7165/
> > > > >>>>>>
> > > > >>>>>>the issue I want to discuss is below:
> > > > >>>>>>1. fixed port vs. dynamic port
> > > > >>>>>>
> > > > >>>>>>Use a fixed port for all VMs' consoles. Connect to a console with
> > > > >>>>>>'ssh vmUUID@ip -p port', distinguishing VMs by vmUUID.
> > > > >>>>>>
> > > > >>>>>>
> > > > >>>>>>   In the current implementation, vdsm allocates a port for the
> > > > >>>>>>console dynamically and spawns a sub-process when the VM is created.
> > > > >>>>>>In the sub-process, the main thread is responsible for accepting new
> > > > >>>>>>connections and dispatching the console's output to each connection.
> > > > >>>>>>When a new connection comes in, the main process creates a new thread
> > > > >>>>>>for it. Dynamic allocation assigns each VM a port from a range of
> > > > >>>>>>ports, which isn't good for firewall rules.
> > > > >>>>>>
> > > > >>>>>>
> > > > >>>>>>   So I got a suggestion to use a fixed port, and connect to the
> > > > >>>>>>console with 'ssh vmuuid@hostip -p fixport'. This is simpler for the
> > > > >>>>>>user. We need one process to accept new connections on the fixed
> > > > >>>>>>port, and when a new connection comes in, it spawns a sub-process for
> > > > >>>>>>each VM. But because the console can only be opened by one process,
> > > > >>>>>>the main process has to be responsible for dispatching the console
> > > > >>>>>>output of all VMs to all connections. So the code will be a little
> > > > >>>>>>more complex than with dynamic ports.
> > > > >>>>>>
> > > > >>>>>>   So this is dynamic port vs. fixed port, and simple code vs.
> > > > >>>>>>   complex code.
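
For illustration only, the fixed-port dispatch described above might be sketched
roughly like this (a minimal sketch; all names are hypothetical and stand in for
real ssh authentication and pty handling, not vdsm's actual code):

```python
import threading

# Minimal sketch of the fixed-port idea: one listener accepts every console
# connection, and the client identifies the target VM by its UUID. The
# registry maps each running VM's UUID to a function that produces that VM's
# console data; all names here are hypothetical.

consoles = {}            # vmUUID -> function returning console data
consoles_lock = threading.Lock()

def register_console(vm_uuid, reader):
    """Called when a VM starts: expose its console under its UUID."""
    with consoles_lock:
        consoles[vm_uuid] = reader

def dispatch(vm_uuid):
    """Route one incoming connection to the right VM's console stream."""
    with consoles_lock:
        reader = consoles.get(vm_uuid)
    if reader is None:
        return "no such VM: %s" % vm_uuid
    return reader()
```

A single accept loop bound to the one fixed port would call `dispatch()` per
connection; the extra complexity mentioned above comes from fanning each VM's
single console stream out to multiple simultaneous connections.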
> > > > >>>>>From a usability point of view, I think the fixed port suggestion is
> > > > >>>>>nicer.  This means that a system administrator needs to open only one
> > > > >>>>>port to enable remote console access.  If your initial implementation
> > > > >>>>>limits console access to one connection per VM, would that simplify
> > > > >>>>>the code?
> > > > >>>>Yes, using a fixed port for all consoles of all VMs seems like a
> > > > >>>>cooler idea. Besides the firewall issue, there's user experience:
> > > > >>>>instead of calling getVmStats to find the VM's port and then using
> > > > >>>>ssh, only one ssh call is needed. (Taking this one step further - it
> > > > >>>>would make sense to add another layer on top, directing console
> > > > >>>>clients to the specific host currently running the VM.)
> > > > >>>>
> > > > >>>>I did not take a close look at your implementation, and did not
> > > > >>>>research this myself, but have you considered using sshd for this? I
> > > > >>>>suppose you can configure sshd to collect the list of known "users"
> > > > >>>>from `getAllVmStats`, and force it to run a command that redirects the
> > > > >>>>VM's console to the ssh client. It has the potential of being a more
> > > > >>>>robust implementation.
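
That sshd suggestion could work roughly like this: for every VM "user" known
from getAllVmStats, generate an authorized_keys entry whose forced command
attaches the client to that VM's console instead of giving a shell. The
`command=`, `no-port-forwarding` and `no-X11-forwarding` options are standard
authorized_keys syntax; the `vdsm-console-attach` helper name is hypothetical:

```python
# Sketch: each VM known from getAllVmStats becomes an ssh "user" whose key is
# forced to run a console-attach helper instead of a shell. The helper name
# (vdsm-console-attach) is hypothetical; command= and the restriction options
# are standard OpenSSH authorized_keys syntax.

def forced_command_entry(vm_uuid, pubkey):
    opts = ('command="vdsm-console-attach %s",'
            'no-port-forwarding,no-X11-forwarding' % vm_uuid)
    return "%s %s" % (opts, pubkey)

def authorized_keys(vm_uuids, pubkey):
    """Build an authorized_keys file body, one forced entry per VM."""
    return "\n".join(forced_command_entry(uuid, pubkey) for uuid in vm_uuids)
```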
> > > > >>>I have considered using sshd and an ssh tunnel. They can't implement a
> > > > >>>fixed port and a shared console.
> > > > >>Would you elaborate on that? Usually sshd listens to a fixed
> > > > >>port
> > > > >>22,
> > > > >>and allows multiple users to have independent shells. What do
> > > > >>you
> > > > >>mean by
> > > > >>"share console"?
> > > > >>
> > > > >>>With the current implementation we can do anything we want.
> > > > >>Yes, it is completely under our control, but there are downsides, too:
> > > > >>we have to maintain another process, and another entry point,
> > > > >>instead of
> > > > >>configuring a universally-used, well maintained and debugged
> > > > >>application.
> > > > >Think of the security implications of having another remote
> > > > >shell
> > > > >access point to a host.  I'd much rather trust sshd if we can
> > > > >make
> > > > >it
> > > > >work.
> > > > >
> > > > >
> > > > >>Dan.
> > > > 
> > > > At first glance, the standard sshd on the host is stronger and more
> > > > robust than a custom ssh server, but the risk of using the host sshd is
> > > > high. If we implement this feature via the host sshd and a hacker
> > > > attacks the sshd successfully, he gets access to the host shell. After
> > > > all, the custom ssh server is not for accessing the host shell, but
> > > > just for forwarding data from the guest console (a host /dev/pts/X
> > > > device). If we use a custom ssh server, the code in it does only
> > > > 1. auth and 2. data forwarding; when a hacker attacks, he just gets
> > > > access to that virtual machine. Notice that there is no code in the
> > > > custom ssh server for logging in to the host, and the custom ssh server
> > > > can be confined under selinux, allowing it to access only /dev/pts/X.
> > > > 
> > > > In fact, a custom VNC server in qemu is as risky as a custom ssh
> > > > server in vdsm; if we accept the former, then I can accept the latter.
> > > > The considerations are how robust the custom ssh server is, and how
> > > > difficult it is to maintain. In He Jie's current patch, the ssh auth
> > > > and transport library is an open-source third-party project; unless
> > > > that project is well maintained and well proven, using it can be risky.
> > > > 
> > > > So my opinion is to use neither the host sshd nor a custom ssh
> > > > server. Maybe we can apply the suggestion from Dan Yasny: run a
> > > > standard sshd in a very small VM on every host, and forward data from
> > > > this VM to the other guests' consoles. With the ssh part in the VM,
> > > > our work is just forwarding data from the VM, via virtio serial
> > > > channels, to each guest's pty.
> > > 
> > > I really dislike the idea of a service VM for something as
> > > fundamental as a VM
> > > console.  The logistics of maintaining such a VM are a nightmare:
> > > provisioning,
> > > deployment, software upgrades, HA, etc.
> > 
> > Why? It really sounds like an easy path to me - provisioning of a virtual
> > appliance is supposed to be simple; upgrades work the same as with
> > ovirt-node, with a few config files preserved and the rest simply
> > replaced; and HA is taken care of by the platform.
> 
> How do you get the VM image to the hypervisor in the first place? 

Place the appliance in an export domain and import it into the setup as a VM.

> Is this an extra step at install time that the admin must follow?

Import the appliance, start it up, and enter some initial config options
(basically, just a few steps to hook it up to the engine).

> You say that the VM is simple and will not need to be upgraded, but I don't
> completely believe you.  Inevitably, we will need to upgrade that VM (to
> fix a bug someone finds, or sync it up with the latest vdsm/engine code, or
> fix a security flaw).  How will we conduct that upgrade?

I don't say it won't need to be upgraded; I am suggesting we implement a
process similar to what happens with ovirt-node.

> How do we handle a host going in and out of Maintenance mode?

The appliance, just like any other VM, will migrate to another host, and as
the other VMs migrate as well, it will keep providing a console for them. At
first it is probably not too critical to keep sessions running across a
migration; reconnecting should be good enough.

As for console access, the appliance can use the engine API's MLA calls to
see which users have access to which VMs' consoles, but those are details
already.

> 
> > 
> > On the other hand, maintaining this on multiple hypervisors means they
> > should all be up to date, compliant and configured. Not to mention the
> > security implications of maintaining an extra access point on lots of
> > machines vs. a single virtual appliance VM. Bandwidth can be an issue,
> > but I doubt serial console traffic can be that heavy, especially since
> > it's there for admin access and not routine work.
> 
> Don't we already want hypervisors to be up to date, compliant, and
> configured?

We do, but having spent the last umpteen years supporting systems that
contain multiple servers, you learn not to expect this to happen all the
time, everywhere, and you try to keep things as smooth as possible within
those constraints. For example, one of the larger datacenters I worked with
a few years ago had a maintenance window only every 18 months, dedicated to
all upgrades and updates.

> Allowing serial console access will add complexity in one way or
> another.  In my
> opinion it would be simpler to support a streaming service than a
> service VM.

On every hypervisor, instead of a single proxy? I still can't see how. No
irony intended; I really want to understand why you consider having an extra
service on every hypervisor less complex than having this service in a
single VM.

> 
> Are there any other uses for a service VM that could justify its
> complexity?

I can think of quite a few, actually: a universal scheduler appliance with
an easy-to-script facility for orchestrating common API tasks; a proxy for
SPICE connections; or a separate set of VMs providing the engine services
(once the engine is actually modularized, of course)...

Once we know we have a surplus of compute resources that can be used to
provide interesting features, where each feature can live in a separate
small VM, deployed or not according to requirements, ideas just keep
flooding in - any datacentre and infrastructure service can become an
appliance: DNS/DHCP/MTA/OpenFlow controllers/$openstack_module_name/etc. I
do get carried away here, but looking at the Avocent serial console
appliance, it does look like a nice solution, easily deployed and
maintained.

> 
> > Am I missing a point here?
> > 
> > > 
> > > Maybe we can start simple and provide console access locally
> > > only.  What
> > > sort of functionality would the vdsm API need to provide to
> > > enable only
> > > local access to the console?  Presumably, it would set up a
> > > connection and
> > > provide the user with a port/pty to use to connect locally.  For
> > > now it
> > > would be "BYOSSH - bring your own SSH" as clients would need to
> > > access the
> > > hosts with something like:
> > > 
> > > ssh -t <host> "<connect command>"
> > > 
> > > The above command could be wrapped in a vdsm-tool command.
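> > > 
A vdsm-tool wrapper for the "BYOSSH" flow above would essentially just build
that ssh command line; a minimal sketch (the connect command shown is a
placeholder, not vdsm's actual interface):

```python
import subprocess

# Sketch of the "BYOSSH" wrapper described above. The connect command is
# whatever attaches to the VM's pty on the host; its exact form here is a
# placeholder, not vdsm's actual interface.

def console_argv(host, connect_cmd):
    # -t forces pty allocation so the remote console behaves interactively
    return ["ssh", "-t", host, connect_cmd]

def open_console(host, connect_cmd):
    """Run the interactive ssh session and return its exit status."""
    return subprocess.call(console_argv(host, connect_cmd))
```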
> > > 
> > > In the future, we can take a look at extending this feature via
> > > some sort of
> > > remote streaming API.  Keep in mind that in order for this
> > > feature to be
> > > truly useful to ovirt-engine consumers, the console connection
> > > must survive
> > > a VM migration.  To me, this means that vdsm will need to
> > > implement a
> > > generic streaming API like libvirt has.
> > > 
> > > -- Adam Litke <a...@us.ibm.com> IBM Linux Technology Center
> > > 
> > > _______________________________________________ vdsm-devel
> > > mailing list
> > > vdsm-devel@lists.fedorahosted.org
> > > https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
> > > 
> > 
> > --
> > 
> > 
> > 
> > Regards,
> > 
> > Dan Yasny Red Hat Israel +972 9769 2280
> > 
> 
> --
> Adam Litke <a...@us.ibm.com>
> IBM Linux Technology Center
> 
> 

-- 
Regards,
Dan Yasny
Red Hat Israel
+972 9769 2280
