Re: [Qemu-devel] Re: [libvirt] Supporting hypervisor specific APIs in libvirt

2010-03-25 Thread Gildas Le Nadan

Anthony Liguori wrote:

On 03/25/2010 03:26 AM, Vincent Hanquez wrote:

On 24/03/10 21:40, Anthony Liguori wrote:

If so, what C clients you expected beyond libvirt?


Users want a C API.  I don't agree that libvirt is the only C 
interface consumer out there.


(I've seen this written too many times ...)
How do you know that? Did you do a poll or something where *actual* 
users vote/tell?


From my point of view, I wouldn't want to write a high-level 
management toolstack in C, especially
since the API is well-defined JSON, which is easily available in all 
high-level languages out there.


There's a whole world of C based management toolstacks (CIM).

Regards,

Anthony Liguori


That huge companies have developed over-complicated C-based management 
toolstacks doesn't necessarily mean that this is the best way to do it. 
It just means that they have enough qualified resources to do it.


A simple, language-neutral API is always preferable in my opinion, since 
it lowers the prerequisites and cost of entry, and will allow more people 
to use it, including system engineers.


Ensuring that the new API will be easy to use by newcomers will also 
ensure that it will be easy to use by existing stacks, including libvirt.


Also, I second Avi's opinion in another mail that all command line 
options [should] have QMP equivalents: it is vital for 
flexibility/manageability to be able to programmatically change setups 
after a VM has started.
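As a rough illustration of why this matters (everything specific here is 
an assumption made up for the example: the socket path, the pre-existing 
netdev "hostnet1" and the device id "nic1"; only the QMP greeting, the 
qmp_capabilities handshake and device_add are standard QMP), a small 
script could hot-plug a NIC into a running guest instead of restarting 
it with a different command line:

  # Minimal QMP sketch in Python. Assumes QEMU was started with something
  # like "-qmp unix:/tmp/qmp-sock,server,nowait" and already has a netdev
  # called "hostnet1"; the device id "nic1" is invented for this example.
  import json
  import socket

  def qmp(chan, command, arguments=None):
      # Send one QMP command as newline-delimited JSON, return the reply.
      msg = {"execute": command}
      if arguments:
          msg["arguments"] = arguments
      chan.write(json.dumps(msg) + "\n")
      chan.flush()
      return json.loads(chan.readline())

  sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
  sock.connect("/tmp/qmp-sock")
  chan = sock.makefile("rw")

  json.loads(chan.readline())        # consume the QMP greeting banner
  qmp(chan, "qmp_capabilities")      # mandatory handshake before commands
  # Hot-plug a NIC into the running guest -- exactly the kind of change
  # you cannot make if the option only exists on the command line:
  print(qmp(chan, "device_add", {"driver": "virtio-net-pci",
                                 "netdev": "hostnet1", "id": "nic1"}))
  sock.close()

The point is simply that whatever can be expressed as a boot-time flag 
should also be reachable from a script like this at runtime.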


To quote the automation part of the James White Manifesto[1], a 
document that is gaining a lot of traction in the sysadmin/devops community:

The provided API must have all functionality that the application provides.
The provided API must be tailored to more than one language and platform.

Regards,
Gildas

--
[1] You can find a copy of it here: 
http://www.kartar.net/2010/03/james-whites-rules-for-infrastructure/





Re: [Qemu-devel] Re: Spice project is now open

2009-12-13 Thread Gildas Le Nadan

Dan Berrange and I have been talking about being able to move VNC
server into a central process such that all of the VMs can have a
single VNC port that can be connected to.  This greatly simplifies
the firewalling logic that an administrator has to deal with.
That's a problem I've already had to deal with for our management
tools.  We use a private network for management and we bridge the
VNC traffic into the customers network so they can see the VGA
session.  But since that traffic can be a large range of ports and
we have to tunnel the traffic through a central server to get into
the customer network, it's very difficult to setup without opening
up a mess of ports.  I think we're currently opening a few thousand
just for VNC.


Actually my plan was to have a VNC proxy server that sat between the
end user and the real VNC in QEMU. Specifically I wanted to allow for
a model where the VNC server that end users connected to for console
access was on a physically separate host from the VMs. I had a
handful of use cases, mostly to deal with an oVirt deployment where
console users could be from the internet, rather than an intranet.

- Avoiding the need to open up many ports on firewalls
- Allow on-the-fly switching between any VMs the currently authenticated
user is authorized to view without opening more connections (avoids
needing to re-authenticate for each VM)
- Avoid needing to expose virtualization hosts to console users, since
console users may be coming in from an untrusted network, or even the
internet itself
- Allow seamless migration where the proxy server simply re-connects to
the VM on the new host, without the end-user VNC connection ever noticing


Hi,

Having a single well-known port to connect to the VMs on a host would
be *awesome*, as having one port per VM is sooo 1980's in terms of
manageability/securability.

Actually both use cases described above are needed IMO:

- it would be great to have some sort of server running locally on the 
VM host so that you only have one open port on the VM host itself [1].


- it would be very useful to have some sort of proxy mechanism that 
would allow redirection from a single host acting as a gateway between 
networks (be they internal or external networks).


The two of them could interact nicely with one connecting to the other 
if needed:


 ______VM_Host________         __proxy_Host___          _client_machine_
|     1               |   2   |               |   3    |                |
| VM --- local server |-------|     proxy     |--------|     client     |
|_____________________|       |_______________|        |________________|

I think one of the most important things is that the proxy and the local 
server must behave exactly the same way and provide the same features 
(that is, the protocol used on connection 2 should be the same as the 
one used for connection 3).


This allows the client to work the same way independently of the 
configuration/topology. The proxy allows for enterprise-class features 
whereas the local server is enough for a small virtual infrastructure or 
even a single-machine setup, but that should be transparent to the clients.


I guess there must be a way to route the connection to the correct 
VM, so there should be something similar to the HTTP Host request-header 
to identify the VM the client wants to connect to [2].
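Purely as an invented illustration (the "Target-VM" header, the lookup 
table and the host names below do not exist anywhere; a DNS SRV or LDAP 
lookup could equally provide the mapping, see [2]), the routing step on 
the proxy side could look roughly like this:

  # Hypothetical routing step on the proxy: the client's first line names
  # the VM it wants, much like an HTTP "Host:" header, and the proxy
  # resolves that name to wherever the VM's console currently lives.
  import socket

  VM_LOCATIONS = {                    # in reality: DNS SRV, LDAP, ...
      "web01": ("vmhost1.example.org", 5901),
      "db01":  ("vmhost2.example.org", 5902),
  }

  def route(client_sock):
      line = client_sock.makefile("r").readline().strip()
      name = line.split(":", 1)[1].strip()   # e.g. "Target-VM: web01"
      backend = socket.create_connection(VM_LOCATIONS[name])
      return backend   # from here on the proxy just relays bytes both ways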


Of course SASL support is mandatory but I guess Dan planned it anyway :)

Ideally there would be some sort of negotiation mechanism so the client 
can ask which protocol (VNC, SPICE, ...) it wants to use (if possible 
dynamically within a session, if a user changes location or their needs 
evolve).
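Again, just to make the idea concrete (these field names are invented, 
not an existing protocol), the negotiation could be as simple as the 
client advertising its preferences and the server answering with its 
choice:

  # Invented negotiation exchange, shown as JSON payloads in Python.
  client_hello = {"target-vm": "web01", "protocols": ["spice", "vnc"]}
  server_reply = {"protocol": "spice"}
  # A client whose situation changes mid-session (e.g. moving to a slow
  # link) could simply resend a new "protocols" preference list.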


Obviously both shouldn't need to be run as root.

I guess the proxy should be a project on its own and not part of qemu 
since it is more of an enterprise feature, while the local server could 
be added to qemu? Of course, since there are some common features, it 
would probably make sense to share some code.


Concerning the proxy, there are all sorts of goodies I would expect from 
it (on top of seamless migration and on-the-fly switching):

- flexibility in the choice of the authorization backend
- since it is a SPOF, it would be nice for it to work as a cluster (in 
active/passive with failover or, better, as an active/active cluster)


Since this proxy can be used as a connection broker to either physical 
or virtual machines, load-balancing and session management features 
would be nice.


Of course in a perfect world this wouldn't be a single company project ;)

My humble 2 cents as an operations person,
Gildas

--
[1] firewalling is also important on internal networks if you work in a 
large environment, and having a single port makes it easier to 
understand what is going on when diagnosing issues.


[2] You also need to point to the correct host. How you want to 
publish the VM/VM Host association so the proxy can route is an 
interesting problem on its own, especially since VMs can (and will) 
migrate. There are probably many ways to do this including DNS SRV, 
LDAP, local