On 2012-10-12 21:10, Adam Litke wrote:
On Fri, Oct 12, 2012 at 04:55:20PM +0800, Zhou Zheng Sheng wrote:
on 09/04/2012 22:19, Ryan Harper wrote:
* Dan Kenigsberg <dan...@redhat.com> [2012-09-04 05:53]:
On Tue, Sep 04, 2012 at 03:05:37PM +0800, Xu He Jie wrote:
On 09/03/2012 10:33 PM, Dan Kenigsberg wrote:
On Thu, Aug 30, 2012 at 04:26:31PM -0500, Adam Litke wrote:
On Thu, Aug 30, 2012 at 11:32:02AM +0800, Xu He Jie wrote:

   I submitted a patch for a text-based console.

The issues I want to discuss are below:
1. fixed port vs. dynamic port

Use a fixed port for the consoles of all VMs, and connect to a console with
'ssh vmUUID@ip -p port', distinguishing VMs by vmUUID.

   In the current implementation, vdsm allocates a port for the console
dynamically and spawns a sub-process when the VM is created.
In the sub-process, the main thread is responsible for accepting new connections
and dispatching the console output to each connection.
When a new connection comes in, the main process creates a new thread for it.
The dynamic approach allocates a port for each VM from a port range, which
isn't good for firewall rules.

   So I got a suggestion to use a fixed port, and connect to a console with
'ssh vmuuid@hostip -p fixport'. This is simple for the user.
We need one process to accept new connections on the fixed port and, when a
new connection comes in, spawn a sub-process for each VM.
But because the console can only be opened by one process, the main process
must be responsible for dispatching the console output of all VMs to all
connections. So the code will be a little more complex than with dynamic ports.

   So this is dynamic port vs. fixed port, and simple code vs. complex code.
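
Just to make the fixed-port idea concrete, here is a rough, untested sketch,
leaving the ssh and auth layers out completely: one listener on a single port,
the client sends the VM UUID first, and the server copies bytes between the
client and that VM's console pty. The port number, UUID and pty path below
are made up.

import os
import socket
import threading

# hypothetical map: in vdsm this would be built from the VMs' console ptys
CONSOLE_PTYS = {
    '8f1c2b4e-0d3a-4b5c-9e6f-7a8b9c0d1e2f': '/dev/pts/3',
}
FIXED_PORT = 2223   # an assumed fixed port shared by all VMs


def pump(read, write):
    """Copy bytes from read() to write() until EOF."""
    while True:
        data = read(1024)
        if not data:
            break
        write(data)


def serve_client(conn):
    # the client identifies the VM by sending its UUID first
    vm_uuid = conn.recv(64).decode().strip()
    pty_fd = os.open(CONSOLE_PTYS[vm_uuid], os.O_RDWR | os.O_NOCTTY)
    # client -> console in a helper thread, console -> client in this thread
    threading.Thread(target=pump,
                     args=(conn.recv, lambda d: os.write(pty_fd, d)),
                     daemon=True).start()
    pump(lambda n: os.read(pty_fd, n), conn.sendall)


def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('', FIXED_PORT))
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=serve_client, args=(conn,), daemon=True).start()


if __name__ == '__main__':
    main()

The real complexity the patch has to deal with - the ssh handshake and sharing
one console among several connections - is exactly what this sketch leaves out.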
From a usability point of view, I think the fixed port suggestion is nicer.
This means that a system administrator needs only to open one port to enable
remote console access.  If your initial implementation limits console access to
one connection per VM, would that simplify the code?
Yes, using a fixed port for all consoles of all VMs seems like a cooler
idea. Besides the firewall issue, there's user experience: instead of
calling getVmStats to find the VM's port and then using ssh, only one ssh
call is needed. (Taking this one step further - it would make sense to
add another layer on top, directing console clients to the specific host
currently running the VM.)

I did not take a close look at your implementation, and did not research
this myself, but have you considered using sshd for this? I suppose you
can configure sshd to collect the list of known "users" from
`getAllVmStats`, and force it to run a command that redirects the VM's
console to the ssh client. It has the potential of being a more robust solution.
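
For example (this is only an assumption about how the sshd route could be
wired up, not something from the patch), a single forced-command entry under a
dedicated account could hand the session straight to virsh. The account name,
shim path and UUID check below are all made up:

#!/usr/bin/python
# Hypothetical forced-command shim for the sshd-based approach.
# authorized_keys of the 'vmconsole' account would carry something like:
#   command="/usr/libexec/vm-console-shim",no-port-forwarding <pubkey>
# The client then runs:  ssh -t vmconsole@host <vmUUID>
# and sshd places the requested UUID in SSH_ORIGINAL_COMMAND.
import os
import re
import sys

uuid = os.environ.get('SSH_ORIGINAL_COMMAND', '').strip()
if not re.match(r'^[0-9a-fA-F-]{36}$', uuid):
    sys.exit('usage: ssh -t vmconsole@host <vmUUID>')
# Hand the session over to virsh, which attaches to the VM's console pty.
os.execvp('virsh', ['virsh', '-c', 'qemu:///system', 'console', uuid])

sshd itself would then provide the fixed port, the authentication and the
crypto, and the shim never touches a host shell.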
I have considered using sshd and ssh tunnels. They
can't provide a fixed port and a shared console.
Would you elaborate on that? Usually sshd listens to a fixed port 22,
and allows multiple users to have independent shells. What do you mean by
"share console"?

With the current implementation we can do anything we want.
Yes, it is completely under our control, but there are downsides, too:
we have to maintain another process, and another entry point, instead of
configuring a universally-used, well maintained and debugged sshd.
Think of the security implications of having another remote shell
access point to a host.  I'd much rather trust sshd if we can make it work.

At first glance, the standard sshd on the host is stronger and more
robust than a custom ssh server, but the risk using the host sshd is
high. If we implement this feature via the host sshd, then when a hacker
attacks the sshd successfully, he will get access to the host shell.
After all, the custom ssh server is not for accessing host shell,
but just for forwarding the data from the guest console (a host
/dev/pts/X device). If we just use a custom ssh server, the code in
this server only does 1. auth, 2. data forwarding, when the hacker
attacks it, he just gets access to that virtual machine. Notice that
there is no code in the custom ssh server for logging in to the host,
and the custom ssh server can be confined by selinux,
allowing it to access only /dev/pts/X.

In fact using a custom VNC server in qemu is as risky as a custom
ssh server in vdsm. If we accept the former, then I can accept
the latter. The consideration is how robust the custom ssh
server is, and how difficult it is to maintain. In He Jie's current
patch, the ssh auth and transport library is an open-source
third-party project; unless that project is well maintained and well
proven, using it can be risky.

So my opinion is to use neither the host sshd nor a custom ssh
server. Maybe we can apply the suggestion from Dan Yasny: run a
standard sshd in a very small VM on every host, and forward data
from that VM to the other guests' consoles. The ssh part is in the VM;
then our work is just forwarding data from the VM, via virtio serial
channels, to the guest via the pty.
I really dislike the idea of a service VM for something as fundamental as a VM
console.  The logistics of maintaining such a VM are a nightmare: provisioning,
deployment, software upgrades, HA, etc.

Maybe we can start simple and provide console access locally only.  What sort of
functionality would the vdsm api need to provide to enable only local access to
the console?  Presumably, it would set up a connection and provide the user with
a port/pty to use to connect locally.  For now it would be "BYOSSH - bring your
own SSH" as clients would need to access the hosts with something like:

ssh -t <host> "<connect command>"

The above command could be wrapped in a vdsm-tool command.
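
Such a wrapper could be very thin. A hypothetical sketch (the function name,
the default user and the use of 'virsh console' as the connect command are
all assumptions, not part of the patch):

import subprocess
import sys


def console(host, vm_uuid, user='root'):
    """Open an interactive console to vm_uuid on host over plain ssh."""
    connect_cmd = 'virsh -c qemu:///system console %s' % vm_uuid
    # -t forces a pseudo-terminal so the remote console behaves interactively
    return subprocess.call(['ssh', '-t', '%s@%s' % (user, host), connect_cmd])


if __name__ == '__main__':
    sys.exit(console(sys.argv[1], sys.argv[2]))

The ssh part stays entirely on the client side; vdsm-tool would mainly add
looking up which host currently runs the VM.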

In the future, we can take a look at extending this feature via some sort of
remote streaming API.  Keep in mind that in order for this feature to be truly
useful to ovirt-engine consumers, the console connection must survive a VM
migration.  To me, this means that vdsm will need to implement a generic
streaming API like libvirt has.

Hi Adam, could you explain in more detail how a streaming API can survive a VM migration?

If we want to support migration, I think we should implement the console server outside of vdsm. Actually, it will work like a proxy, so let's call it consoleProxy for now. The consoleProxy can be deployed on the same machine as the engine,
or standalone, or in a virtual machine. I think its working flow is as below:

1. The user requests opening a console from the engine.
2. The engine calls setTicket(uuid, ticket, hostofvm) on the consoleProxy.
    The consoleProxy needs to provide an API to the engine.
3. The engine returns the ticket to the user.
4. The user runs 'ssh UUID@consoleProxy' with the ticket.
5. The consoleProxy connects with 'virsh -c qemu+tls://hostofvm/system console'.
    The host running the consoleProxy should have the certificates of all vdsm hosts.
6. The consoleProxy redirects the output of 'virsh -c qemu+tls://hostofvm/system console' over the ssh protocol.
    This is the same as the current implementation; we can use the system sshd or paramiko.
    If we use paramiko, it can reuse almost all the code of the consoleServer I have already written.
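
To make steps 2, 4, 5 and 6 a bit more concrete, here is an untested
paramiko-based sketch of the proxy core. All names, the tickets dict and the
vm_hosts map are assumptions; the listening loop, the setTicket API and error
handling are left out:

import os
import pty
import subprocess
import threading

import paramiko


class ConsoleProxyServer(paramiko.ServerInterface):
    """Checks 'ssh <vmUUID>@proxy' logins against tickets set by the engine."""

    def __init__(self, tickets):
        self.tickets = tickets          # vmUUID -> ticket (filled by setTicket)
        self.vm_uuid = None

    def get_allowed_auths(self, username):
        return 'password'

    def check_auth_password(self, username, password):
        # the username is the VM UUID, the password is the engine's ticket
        if self.tickets.get(username) == password:
            self.vm_uuid = username
            return paramiko.AUTH_SUCCESSFUL
        return paramiko.AUTH_FAILED

    def check_channel_request(self, kind, chanid):
        return paramiko.OPEN_SUCCEEDED

    def check_channel_pty_request(self, channel, *args):
        return True

    def check_channel_shell_request(self, channel):
        return True


def handle_connection(sock, host_key, tickets, vm_hosts):
    transport = paramiko.Transport(sock)
    transport.add_server_key(host_key)
    server = ConsoleProxyServer(tickets)
    transport.start_server(server=server)
    chan = transport.accept(timeout=30)
    if chan is None:
        return
    host = vm_hosts[server.vm_uuid]       # host currently running the VM
    # 'virsh console' wants a terminal, so run it on a fresh pty pair
    master, slave = pty.openpty()
    virsh = subprocess.Popen(
        ['virsh', '-c', 'qemu+tls://%s/system' % host,
         'console', server.vm_uuid],
        stdin=slave, stdout=slave, stderr=slave, close_fds=True)
    os.close(slave)

    def client_to_console():
        while True:
            data = chan.recv(1024)
            if not data:
                break
            os.write(master, data)

    threading.Thread(target=client_to_console, daemon=True).start()
    # console -> ssh client
    while True:
        try:
            data = os.read(master, 1024)
        except OSError:                   # the pty closes when virsh exits
            break
        if not data:
            break
        chan.sendall(data)
    chan.close()
    virsh.terminate()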

After the VM is migrated:
1. The engine tells the consoleProxy that the VM was migrated.
    I guess the engine can know when the VM has finished migration?
    But how does the engine push the migration-finished event to the consoleProxy? The engine only has a REST API and doesn't support event push. Can a streaming API solve this problem?
2. The consoleProxy kills 'virsh console'.
3. It reconnects to the VM's new host with 'virsh console' again.
    Some characters will be missing if the reconnection isn't fast enough. This is hard to resolve unless ssh is implemented in qemu. I guess a streaming API would have this problem too.
4. It continues redirecting 'virsh console'.
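
Steps 2 and 3 in the proxy could look roughly like this (again just a sketch,
matching the proxy sketch above; losing some output during the gap is exactly
the problem mentioned in step 3):

import os
import pty
import subprocess


def reconnect_console(old_virsh, vm_uuid, new_host):
    """Drop the old 'virsh console' and reattach on the VM's new host."""
    old_virsh.terminate()             # step 2: kill the old connection
    old_virsh.wait()
    master, slave = pty.openpty()     # 'virsh console' wants a terminal
    virsh = subprocess.Popen(         # step 3: reconnect to the new host
        ['virsh', '-c', 'qemu+tls://%s/system' % new_host,
         'console', vm_uuid],
        stdin=slave, stdout=slave, stderr=slave, close_fds=True)
    os.close(slave)
    return virsh, master              # step 4: the caller resumes forwarding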

Actually, if we implement the consoleProxy outside of vdsm, we don't need to decide now whether it will run on a physical machine
or a virtual machine.

There are a lot of details to think about, and I haven't covered every problem. I also don't have code yet to prove this works; it's just based on thinking so far.

Does this make sense?
