Re: [openstack-dev] [nova] Nova and LVM thin support

2014-04-21 Thread Luohao (brian)
Just regarding live snapshots: my understanding is that the instance state will 
not be saved.

From: Cristian Tomoiaga [mailto:ctomoi...@gmail.com]
Sent: Sunday, April 20, 2014 6:20 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova] Nova and LVM thin support

Hello everyone,

Before going any further with my implementation, I would like to ask the 
community about LVM thin support in Nova (not Cinder).
The current implementation of the LVM backend does not support thin LVs.
Does anyone believe it is a good idea to add support for this in Nova? (I plan 
on adding support in my implementation anyway.)
I would also like to know where Red Hat stands on this, since they do much of 
the work on LVM.
I've seen that LVM thin will be supported in RHEL 7 (?), so we may consider the 
thin target stable enough for production in Juno (Cinder has had support for 
this since last year).
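For concreteness, here is a minimal sketch (illustrative helper names and an 
assumed nova-vg/thin-pool layout, not Nova code) of the lvcreate invocations a 
thin-capable LVM backend would wrap:

```python
# Illustrative only: the shell commands a thin-capable LVM image backend
# would issue.  Assumes a volume group already exists; names are made up.

def build_thin_pool_cmd(vg, pool, size_gb):
    # Carve a thin pool out of the volume group (done once per host).
    return ["lvcreate", "-L", "%dG" % size_gb, "-T", "%s/%s" % (vg, pool)]

def build_thin_lv_cmd(vg, pool, name, virtual_size_gb):
    # Create a thin LV inside the pool; -V sets the *virtual* size, so
    # space is only consumed as the guest actually writes data.
    return ["lvcreate", "-V", "%dG" % virtual_size_gb,
            "-T", "%s/%s" % (vg, pool), "-n", name]
```

A backend would hand these lists to its existing execute/rootwrap helper; the 
point is that only the creation call differs from the current thick-LV path.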

I know there was ongoing work to bring a common storage library implementation 
to oslo or Nova directly (Cinder's Brick library), but I have heard nothing new 
for some time. Maybe John Griffith has some thoughts on this.

The reasons why support for LVM thin would be a nice addition should be well 
known especially to people working with LVM.

Another question is related to how Nova treats snapshots when LVM is used as a 
backend (I hope I didn't miss anything in the code):
Right now, if we can't do a live snapshot, the instance state (memory) is 
saved (libvirt virDomainManagedSave) and qemu-img is used to back up the 
instance disk(s). After that we resume the instance.
Can we insert code to snapshot the instance disk, so that we keep the instance 
offline only for the memory dump and copy the disk content from the snapshot 
created?
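To make the proposed ordering concrete, here is a rough sketch (a hypothetical 
plan builder, not existing Nova code) of the sequence, in which downtime ends 
as soon as the snapshot exists:

```python
# Hypothetical sketch of the proposed snapshot flow: the instance is
# offline only between the managed save and the resume; the (slow) disk
# copy happens afterwards, from an LVM snapshot, while the VM runs.

def snapshot_plan(dom, vg, lv):
    dev = "/dev/%s/%s" % (vg, lv)
    snap_dev = "/dev/%s/%s-snap" % (vg, lv)
    return [
        ("managed_save", dom),                          # virDomainManagedSave: memory dump, VM stops
        ("lvcreate", ["-s", "-n", lv + "-snap", dev]),  # instant point-in-time disk snapshot
        ("resume", dom),                                # restore from managed save; downtime ends here
        ("qemu_img_convert", snap_dev),                 # copy disk content while the VM runs
        ("lvremove", ["-f", snap_dev]),                 # drop the snapshot when done
    ]
```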

--
Regards,
Cristian Tomoiaga
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Efficient image cloning implementation in NetApp nfs drivers // make this part of base NFS driver

2014-04-14 Thread Luohao (brian)
Nice idea.

Actually, fast image cloning has been widely supported by most NAS devices, and 
VMware VAAI also started to require this capability many years ago.

However, I am not quite sure what exactly needs to go into the base NFS driver; 
in any case, the fast-cloning API will vary between vendors.

-Hao

From: Nilesh P Bhosale [mailto:nilesh.bhos...@in.ibm.com]
Sent: Monday, April 14, 2014 1:51 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Efficient image cloning implementation in NetApp nfs 
drivers // make this part of base NFS driver

Hi All,

I was going through the following blueprint, which NetApp proposed and 
implemented in its driver (NetAppNFSDriver - cinder/volume/drivers/netapp/nfs.py) 
a while back (change https://review.openstack.org/#/c/41868/):
https://blueprints.launchpad.net/cinder/+spec/netapp-cinder-nfs-image-cloning

It looks quite an interesting and valuable feature for the end customers.
Can we make it part of the base NfsDriver (cinder/volume/drivers/nfs.py)? That 
way, customers using the base NFS driver can benefit, and other drivers 
inheriting from this base NFS driver (e.g. IBMNAS_NFSDriver, NexentaNfsDriver) 
can benefit as well.

Please let me know your valuable opinion.
I can start a blueprint for the Juno release.

Thanks,
Nilesh


Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-10 Thread Luohao (brian)
Is it the same thing as the openstack-neat project?

http://openstack-neat.org/

I am curious about why Neat was not accepted previously.

-Hao

From: Oleg Gelbukh [mailto:ogelb...@mirantis.com]
Sent: Thursday, April 10, 2014 3:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Dynamic scheduling

Hello, Jay,

As a fork of nova-scheduler, Gantt most likely will handle initial placement. 
However, even nova-scheduler now supports some runtime operations (for example, 
scheduling of evacuated/migrated instances).

Given that runtime scheduling comes up on this list regularly, I expect such 
features will make their way into the Scheduler service eventually.

--
Best regards,
Oleg Gelbukh

On Wed, Apr 9, 2014 at 7:47 PM, Jay Lau jay.lau@gmail.com wrote:
@Oleg, I'm still not sure about the target of Gantt: is it for initial 
placement policy, run-time policy, or both? Can you help clarify?
@Henrique, I'm not sure if you know IBM PRS (Platform Resource Scheduler) [1]; 
we have finished the dynamic scheduler in our Icehouse version (PRS 2.2), and 
it has exactly the feature you described. We are planning a live demo of this 
feature at the Atlanta Summit. I'm also writing a document on run-time policy 
which will cover more run-time policies for OpenStack, but it is not finished 
yet (apologies for the slow progress). The related blueprint is [2], and you 
can also find some discussion in [3].

[1] http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&htmlfid=897/ENUS213-590&appname=USN
[2] https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
[3] http://markmail.org/~jaylau/OpenStack-DRS
Thanks.

2014-04-09 23:21 GMT+08:00 Oleg Gelbukh ogelb...@mirantis.com:

Henrique,

You should check out Gantt project [1], it could be exactly the place to 
implement such features. It is a generic cross-project Scheduler as a Service 
forked from Nova recently.

[1] https://github.com/openstack/gantt

--
Best regards,
Oleg Gelbukh
Mirantis Labs

On Wed, Apr 9, 2014 at 6:41 PM, Henrique Truta 
henriquecostatr...@gmail.com wrote:

Hello, everyone!


I am currently a graduate student and a member of a group of contributors to 
OpenStack. We believe that a dynamic scheduler could improve the efficiency of 
an OpenStack cloud, either by rebalancing nodes to maximize performance or by 
minimizing the number of active hosts to reduce energy costs. Therefore, we 
would like to propose a dynamic scheduling mechanism for Nova. The main idea is 
to use Ceilometer information (e.g. RAM, CPU, disk usage) through the 
ceilometer-client and dynamically decide whether an instance should be live 
migrated.


This might be done as a Nova periodic task, executed at a given interval, or as 
a new independent project. In either case, the current Nova scheduler will not 
be affected, since this new scheduler will be pluggable. We have searched and 
found no such initiative among the OpenStack blueprints. Outside the community, 
we found only a recent IBM announcement of a similar feature in one of its 
cloud products.


A possible flow is: in the new scheduler, we periodically make a call to Nova, 
get the instance list from a specific host and, for each instance, call the 
ceilometer-client (e.g. $ ceilometer statistics -m cpu_util -q 
resource=$INSTANCE_ID); then, according to parameters configured by the user, 
we analyze the meters and perform the appropriate migrations.
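As a toy illustration of the decision step only (the threshold and function 
name are invented, and a real scheduler would weigh more meters than cpu_util):

```python
# Toy decision step for the proposed dynamic scheduler: given per-instance
# cpu_util averages from Ceilometer for one host, pick instances to
# live-migrate, busiest first, until the host drops below a high-water mark.

def pick_migrations(cpu_util_by_instance, host_high_water=80.0):
    total = sum(cpu_util_by_instance.values())
    victims = []
    for inst, util in sorted(cpu_util_by_instance.items(),
                             key=lambda kv: kv[1], reverse=True):
        if total <= host_high_water:
            break
        victims.append(inst)
        total -= util
    return victims
```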


Do you have any comments or suggestions?

--
Ítalo Henrique Costa Truta





--
Thanks,
Jay



Re: [openstack-dev] [nova] SR-IOV and IOMMU check

2014-04-01 Thread Luohao (brian)
According to the recent PCI passthrough and SR-IOV design wiki below, my 
understanding is that the PCI filter residing on a compute node needs to be 
aware of VFIO to determine which devices can satisfy user requests. Feel free 
to correct me if this is wrong.

https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support


- Hao 
-----Original Message-----
From: Daniel P. Berrange [mailto:berra...@redhat.com] 
Sent: Tuesday, April 01, 2014 1:58 PM
To: Luohao (brian)
Cc: OpenStack Development Mailing List (not for usage questions); Jinbo (Justin)
Subject: Re: [openstack-dev] [nova] SR-IOV and IOMMU check

On Tue, Apr 01, 2014 at 04:59:34AM +0000, Luohao (brian) wrote:
 Now, VFIO hasn't been made generally supported by most enterprise 
 linux distributions, and as I know, the current pci passthrough 
 /SR-IOV implementation is still based on a historical approach.
 
 Probably we can consider the switch to VFIO framework in later releases.

Actually, libvirt will automatically use VFIO by default if it is available on 
the host OS. This is the case with any recent Fedora and will thus be the case 
with the forthcoming RHEL-7. So Nova wouldn't have to do anything to enable use 
of VFIO - it is supposed to just work when available.

Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|


Re: [openstack-dev] [nova] SR-IOV and IOMMU check

2014-03-31 Thread Luohao (brian)
VFIO has not yet been made generally available in most enterprise Linux 
distributions, and as far as I know, the current PCI passthrough/SR-IOV 
implementation is still based on the legacy approach.

Probably we can consider switching to the VFIO framework in later releases.

-----Original Message-----
From: Daniel P. Berrange [mailto:berra...@redhat.com] 
Sent: Monday, March 31, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Jinbo (Justin)
Subject: Re: [openstack-dev] [nova] SR-IOV and IOMMU check

On Fri, Mar 28, 2014 at 11:14:49PM -0400, Steve Gordon wrote:
 ----- Original Message -----
  This is the approach mentioned by linux-kvm.org
  
  http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM
  
  3. reboot and verify that your system has IOMMU support
  
  AMD Machine
  dmesg | grep AMD-Vi
   ...
   AMD-Vi: Enabling IOMMU at :00:00.2 cap 0x40
   AMD-Vi: Lazy IO/TLB flushing enabled
   AMD-Vi: Initialized for Passthrough Mode
   ...
  Intel Machine
  dmesg | grep -e DMAR -e IOMMU
   ...
   DMAR:DRHD base: 0x00feb03000 flags: 0x0
   IOMMU feb03000: ver 1:0 cap c9008020e30260 ecap 1000
   ...
 
 Right, but the question is whether grepping dmesg is an 
 acceptable/stable API to be relying on from the Nova level. Basically 
 what I'm saying is the reason there isn't a robust way to check this 
 from OpenStack is that there doesn't appear to be a robust way to check this 
 from the kernel?

Historically there was no good way to determine this from the kernel; dmesg 
logs were the best there was.  With new-style VFIO, however, we can now 
reliably determine the level of support and, even more importantly, which PCI 
devices must be handled together as a group.
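As an illustration of the grouping mentioned above: with VFIO, the kernel 
exposes each device's IOMMU group as a /sys/bus/pci/devices/<addr>/iommu_group 
symlink. The helper below is a sketch (the function name is made up, and 
filesystem access is injectable so the example is self-contained), though the 
symlink itself is real:

```python
# Sketch: group PCI devices by IOMMU group using the sysfs symlink that
# the VFIO-era kernel provides.  Devices sharing a group must be assigned
# to a guest together.

import os
from collections import defaultdict

def iommu_groups(pci_addrs, readlink=os.readlink,
                 sysfs="/sys/bus/pci/devices"):
    groups = defaultdict(list)
    for addr in pci_addrs:
        link = readlink("%s/%s/iommu_group" % (sysfs, addr))
        groups[os.path.basename(link)].append(addr)
    return dict(groups)
```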


Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] New stackforge project: Openstackdroid

2014-03-30 Thread Luohao (brian)
It’s a nice idea.

I am not very familiar with native Android app development, but I have some 
experience with mobile web apps based on jQuery Mobile and PhoneGap.

I am a little curious whether Openstackdroid could use a mobile web-app 
infrastructure so that it is portable across different mobile platforms?

-Hao

From: Ricardo Carrillo Cruz [mailto:ricardo.carrillo.c...@gmail.com]
Sent: March 31, 2014 0:01
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] New stackforge project: Openstackdroid

Hello guys

I'd like to let you know about my humble code contribution to Openstack, an 
Android application to access Openstack clouds:

http://git.openstack.org/cgit/stackforge/openstackdroid/
https://launchpad.net/openstackdroid

It's currently quite alpha: it can log in to Openstack clouds and read data 
with your user/pass/tenant ID; no write operations yet.
I encourage developers to grab the code, hack on it, and send suggestions, 
patches, or whatever you may think of :-) .

Kind regards


Re: [openstack-dev] [nova] SR-IOV and IOMMU check

2014-03-28 Thread Luohao (brian)
This is the approach mentioned by linux-kvm.org

http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM

3. reboot and verify that your system has IOMMU support

AMD Machine
dmesg | grep AMD-Vi
 ...
 AMD-Vi: Enabling IOMMU at :00:00.2 cap 0x40
 AMD-Vi: Lazy IO/TLB flushing enabled
 AMD-Vi: Initialized for Passthrough Mode
 ...
Intel Machine
dmesg | grep -e DMAR -e IOMMU
 ...
 DMAR:DRHD base: 0x00feb03000 flags: 0x0
 IOMMU feb03000: ver 1:0 cap c9008020e30260 ecap 1000
 ...
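A rough programmatic equivalent of the greps above (with the caveat raised 
elsewhere in this thread: dmesg wording is not a stable interface, so treat 
this as a heuristic sketch, not a robust check):

```python
# Heuristic: mirror the two grep commands from linux-kvm.org to decide
# whether the boot log shows IOMMU (VT-d / AMD-Vi) activity.

def iommu_enabled(dmesg_output):
    markers = ("AMD-Vi", "DMAR", "IOMMU")
    return any(m in line
               for line in dmesg_output.splitlines()
               for m in markers)
```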



-----Original Message-----
From: Steve Gordon [mailto:sgor...@redhat.com] 
Sent: Saturday, March 29, 2014 3:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] SR-IOV and IOMMU check

----- Original Message -----
 Hi, all
 
 Currently OpenStack can support SR-IOV device pass-through (at least 
 there are some patches for this), but the prerequisite is that both 
 IOMMU and SR-IOV are enabled correctly. It seems there is no robust 
 way to check this in OpenStack. I have implemented a way to do so and 
 hope it can be committed upstream; this can help find the issue 
 beforehand, instead of letting KVM report no IOMMU found only when 
 the VM is started. I didn't find an appropriate place to put this. Do 
 you think it is necessary, and where could it go? I welcome your 
 advice, and thank you in advance.

What's the mechanism you are using on the host side to determine that IOMMU is 
supported/enabled?

Thanks,

Steve



Re: [openstack-dev] Disaster Recovery for OpenStack - call for stakeholder

2014-03-13 Thread Luohao (brian)
1.  fsfreeze with VSS has been added to qemu upstream; see 
http://lists.gnu.org/archive/html/qemu-devel/2013-02/msg01963.html for usage.
2.  libvirt allows a client to send arbitrary commands to qemu-ga; see 
http://wiki.libvirt.org/page/Qemu_guest_agent
3.  Linux fsfreeze is not equivalent to Windows fsfreeze+VSS. Linux fsfreeze 
offers filesystem consistency only, while Windows VSS allows agents like SQL 
Server to register plugins that flush their caches to disk when a snapshot 
occurs.
4.  My understanding is that XenServer does not support fsfreeze+VSS today, 
because XenServer normally does not use the qemu block backend.
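To illustrate points 1 and 2: qemu-ga speaks a JSON protocol, and 
guest-fsfreeze-freeze / guest-fsfreeze-thaw are real agent commands. The sketch 
below only builds the JSON string; actually sending it through libvirt's agent 
passthrough is shown only as a comment (an assumption about the bindings, not 
tested here):

```python
# Sketch: build a qemu-ga command as the JSON string that libvirt's
# agent passthrough expects.  Only the JSON construction runs here.

import json

def ga_command(name, arguments=None):
    cmd = {"execute": name}
    if arguments is not None:
        cmd["arguments"] = arguments
    return json.dumps(cmd)

freeze = ga_command("guest-fsfreeze-freeze")
thaw = ga_command("guest-fsfreeze-thaw")
# With the libvirt Python bindings, sending would look roughly like:
#   libvirt_qemu.qemuAgentCommand(dom, freeze, timeout, 0)
```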

-----Original Message-----
From: Bruce Montague [mailto:bruce_monta...@symantec.com] 
Sent: Thursday, March 13, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for 
stakeholder

Hi, about OpenStack and VSS. Does anyone have experience with the qemu 
project's implementation of VSS support? They appear to have a within-guest 
agent, qemu-ga, that perhaps can work as a VSS requestor. Does it also work 
with KVM? Does qemu-ga work with libvirt (can a VSS quiesce be triggered via 
libvirt)? I think there was an effort for qemu-ga to use fsfreeze as an 
equivalent to VSS on Linux systems; was that done? If so, could an OpenStack 
API provide a generic quiesce request that would then get passed to libvirt? 
(Also, the XenServer VSS support seems different from qemu/KVM's; is this 
true? Can it also be accessed through libvirt?)

Thanks,

-bruce

-----Original Message-----
From: Alessandro Pilotti [mailto:apilo...@cloudbasesolutions.com]
Sent: Thursday, March 13, 2014 6:49 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for 
stakeholder

Those use cases are very important in enterprise scenarios requirements, but 
there's an important missing piece in the current OpenStack APIs: support for 
application consistent backups via Volume Shadow Copy (or other solutions) at 
the instance level, including differential / incremental backups.

VSS can be seamlessly added to the Nova Hyper-V driver (it's included with the 
free Hyper-V Server), with e.g. vSphere and XenServer supporting it as well 
(quiescing), and with the option for third-party vendors to add drivers for 
their solutions.

A generic Nova backup / restore API supporting those features is quite 
straightforward to design. The main question at this stage is if the OpenStack 
community wants to support those use cases or not. Cinder backup/restore 
support [1] and volume replication [2] are surely a great starting point in 
this direction.

Alessandro

[1] https://review.openstack.org/#/c/69351/
[2] https://review.openstack.org/#/c/64026/


 On 12/mar/2014, at 20:45, Bruce Montague bruce_monta...@symantec.com 
 wrote:


 Hi, regarding the call to create a list of disaster recovery (DR) use cases ( 
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/028859.html ), 
 the following list sketches some speculative OpenStack DR use cases. These 
 use cases do not reflect any specific product behavior and span a wide 
 spectrum. This list is not a proposal, it is intended primarily to solicit 
 additional discussion. The first basic use case, (1), is described in a bit 
 more detail than the others; many of the others are elaborations on this 
 basic theme.



 * (1) [Single VM]

 A single Windows VM with 4 volumes and VSS (Microsoft's Volume Shadow Copy 
 Service) installed runs a key application and integral database. VSS can 
 quiesce the app, database, filesystem, and I/O on demand and can be invoked 
 external to the guest.

   a. The VM's volumes, including the boot volume, are replicated to a remote 
 DR site (another OpenStack deployment).

   b. Some form of replicated VM or VM metadata exists at the remote site. 
 This VM/description includes the replicated volumes. Some systems might use 
 cold migration or some form of wide-area live VM migration to establish this 
 remote site VM/description.

   c. When specified by an SLA or policy, VSS is invoked, putting the VM's 
 volumes in an application-consistent state. This state is flushed all the way 
 through to the remote volumes. As each remote volume reaches its 
 application-consistent state, this is recognized in some fashion, perhaps by 
 an in-band signal, and a snapshot of the volume is made at the remote site. 
 Volume replication is re-enabled immediately following the snapshot. A backup 
 is then made of the snapshot on the remote site. At the completion of this 
 cycle, application-consistent volume snapshots and backups exist on the 
 remote site.

   d.  When a disaster or fire drill happens, the replication network 
 connection is cut. The remote-site VM, pre-created or defined so as to use the 
 replicated volumes, is then booted, using the latest application-consistent 
 state of the replicated volumes. The entire VM 

Re: [openstack-dev] [OSSN] Live migration instructions recommend unsecured libvirt remote access

2014-03-07 Thread Luohao (brian)
Nathan, 

I have another idea: allowing VM migration without needing remote access to the 
libvirt daemon between compute servers.

As I understand it, the VM migration data path can be independent of the 
libvirt daemon. For example, libvirt supports delegating VM migration to the 
hypervisor. When a VM migration is required, Nova can prepare an ephemeral 
migration service on the destination node, and then launch the connection from 
the source node to the destination node to perform the migration. All of this 
can be done with local libvirt calls on the different compute nodes.
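Purely to illustrate the shape of such a scheme (the URI scheme, port, and 
helper name here are assumptions for the sketch, not what Nova or libvirt 
actually use):

```python
# Hypothetical sketch: in a hypervisor-delegated migration, the source
# node only needs a destination URI for the migration data stream; no
# remote libvirtd connection from source to destination is required.

def migration_uris(dest_host, migrate_port=49152):
    return {
        # where the destination's ephemeral migration listener would accept data
        "migrate_uri": "tcp://%s:%d" % (dest_host, migrate_port),
        # each node talks only to its *local* libvirt daemon
        "local_libvirt": "qemu:///system",
    }
```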

-Hao
 
-----Original Message-----
From: Nathan Kinder [mailto:nkin...@redhat.com] 
Sent: March 7, 2014 3:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [OSSN] Live migration instructions recommend unsecured 
libvirt remote access

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Live migration instructions recommend unsecured libvirt remote access
---

### Summary ###
When using the KVM hypervisor with libvirt on OpenStack Compute nodes, live 
migration of instances from one Compute server to another requires that the 
libvirt daemon is configured for remote network connectivity.
The libvirt daemon configuration recommended in the OpenStack Configuration 
Reference manual configures libvirtd to listen for incoming TCP connections on 
all network interfaces without requiring any authentication or using any 
encryption.  This insecure configuration allows for anyone with network access 
to the libvirt daemon TCP port on OpenStack Compute nodes to control the 
hypervisor through the libvirt API.

### Affected Services / Software ###
Nova, Compute, KVM, libvirt, Grizzly, Havana, Icehouse

### Discussion ###
The default configuration of the libvirt daemon is to not allow remote access.  
Live migration of running instances between OpenStack Compute nodes requires 
libvirt daemon remote access between OpenStack Compute nodes.

The libvirt daemon should not be configured to allow unauthenticated remote 
access.  The libvirt daemon has a choice of four secure options for remote 
access over TCP.  These options are:

 - SSH tunnel to libvirtd's UNIX socket
 - libvirtd TCP socket, with GSSAPI/Kerberos for auth+data encryption
 - libvirtd TCP socket, with TLS for encryption and x.509 client
   certificates for authentication
 - libvirtd TCP socket, with TLS for encryption and Kerberos for
   authentication

It is not necessary for the libvirt daemon to listen for remote TCP connections 
on all interfaces.  Remote network connectivity to the libvirt daemon should be 
restricted as much as possible.  Remote access is only needed between the 
OpenStack Compute nodes, so the libvirt daemon only needs to listen for remote 
TCP connections on the interface that is used for this communication.  A 
firewall can be configured to lock down access to the TCP port that the libvirt 
daemon listens on, but this does not sufficiently protect access to the libvirt 
API.  Other processes on a remote OpenStack Compute node might have network 
access, but should not be authorized to remotely control the hypervisor on 
another OpenStack Compute node.
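For example, the standard listen_addr directive in /etc/libvirt/libvirtd.conf 
can bind the daemon to the address of the dedicated migration interface only 
(the address below is just a placeholder):

---- begin example libvirtd.conf snippet ----
listen_addr = "192.168.100.10"
---- end example libvirtd.conf snippet ----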

### Recommended Actions ###
If you are using the KVM hypervisor with libvirt on OpenStack Compute nodes, 
you should review your libvirt daemon configuration to ensure that it is not 
allowing unauthenticated remote access.

Remote access to the libvirt daemon via TCP is configured by the listen_tls, 
listen_tcp, and auth_tcp configuration directives.  By default, these 
directives are all commented out.  This results in remote access via TCP being 
disabled.

If you do not need remote libvirt daemon access, you should ensure that the 
following configuration directives are set as follows in the 
/etc/libvirt/libvirtd.conf configuration file.  Commenting out these directives 
will have the same effect, as these values match the internal
defaults:

---- begin example libvirtd.conf snippet ----
listen_tls = 1
listen_tcp = 0
auth_tcp = "sasl"
---- end example libvirtd.conf snippet ----

If you need to allow remote access to the libvirt daemon between OpenStack 
Compute nodes for live migration, you should ensure that authentication is 
required.  Additionally, you should consider enabling TLS to allow remote 
connections to be encrypted.

The following libvirt daemon configuration directives will allow for 
unencrypted remote connections that use SASL for authentication:

---- begin example libvirtd.conf snippet ----
listen_tls = 0
listen_tcp = 1
auth_tcp = "sasl"
---- end example libvirtd.conf snippet ----

If you want to require TLS encrypted remote connections, you will have to 
obtain X.509 certificates and configure the libvirt daemon to use them to use 
TLS.  Details on this configuration are in the libvirt daemon documentation.  
Once the certificates are configured, you should set the following libvirt 
daemon configuration directives:

---- begin example libvirtd.conf snippet ----
listen_tls = 1
listen_tcp