Re: [libvirt] [PATCH] qemuOpenVhostNet: Decrease vhostfdSize on open failure

2013-05-27 Thread Michal Privoznik
On 24.05.2013 16:42, Laine Stump wrote:
 On 05/24/2013 08:50 AM, Michal Privoznik wrote:
 Currently, if there's an error opening /dev/vhost-net (e.g. because
 it doesn't exist) but it's not required we proceed with vhostfd array
 filled with -1 and vhostfdSize unchanged. Later, when constructing
 the qemu command line only non-negative items within vhostfd array
 are taken into account. This means, vhostfdSize may be greater than
 the actual count of non-negative items in vhostfd array. This results
 in improper command line arguments being generated, e.g.:

 -netdev tap,fd=21,id=hostnet0,vhost=on,vhostfd=(null)
 ---
  src/qemu/qemu_command.c | 12 ++--
  1 file changed, 6 insertions(+), 6 deletions(-)

 diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
 index 434f5a7..d969f88 100644
 --- a/src/qemu/qemu_command.c
 +++ b/src/qemu/qemu_command.c
 @@ -486,6 +486,8 @@ qemuOpenVhostNet(virDomainDefPtr def,
                                  "but is unavailable"));
                 goto error;
             }
 +            i--;
 +            (*vhostfdSize)--;
 
 This will still go back through the loop again and try to open another.
 I would instead just set vhostfdSize = i (in case there were any
 successful opens) and break out of the loop.
 
 And again you'll need to decide what is an error and what gets just a
 warning - if someone asks for 3 queues and gets none, that should lead
 to continuing without any error or even warning. But if they request 2
 and only get one, is that an error, or do we warn and continue? or just
 silently continue? (I don't have the answer, I'm just asking the
 question :-)
 

Well, MQ is designed to be M:N (where M is the number of TAP devices =
queues, N is the number of vhost-nets). So I think we can continue with
any number of successful openings.
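The logic under discussion (shrink the requested vhostfd count to the number of successful opens and continue) can be sketched in Python. This is illustrative only; the real code is C in qemuOpenVhostNet, and `open_fn` here is a hypothetical stand-in for opening /dev/vhost-net:

```python
def open_vhost_fds(requested, open_fn):
    # Mimic the proposed logic: stop at the first failed open and keep
    # only the fds opened so far, since MQ works with any number of
    # vhost fds (the M:N design described above).
    fds = []
    for i in range(requested):
        fd = open_fn()
        if fd < 0:
            print("Unable to open vhost-net. Opened so far %d, requested %d"
                  % (i, requested))
            break
        fds.append(fd)
    return fds

# simulate /dev/vhost-net granting only 2 fds out of 3 requested
avail = iter([3, 4, -1])
print(open_vhost_fds(3, lambda: next(avail)))  # [3, 4]
```

The caller then proceeds with the shrunken list, so the command line never sees a `vhostfd=(null)` entry.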

  }
  }
  virDomainAuditNetDevice(def, net, "/dev/vhost-net", *vhostfdSize);
 @@ -6560,12 +6562,10 @@ qemuBuildInterfaceCommandLine(virCommandPtr cmd,
  }
  
      for (i = 0; i < vhostfdSize; i++) {
 -        if (vhostfd[i] >= 0) {
 -            virCommandTransferFD(cmd, vhostfd[i]);
 -            if (virAsprintf(&vhostfdName[i], "%d", vhostfd[i]) < 0) {
 -                virReportOOMError();
 -                goto cleanup;
 -            }
 +        virCommandTransferFD(cmd, vhostfd[i]);
 +        if (virAsprintf(&vhostfdName[i], "%d", vhostfd[i]) < 0) {
 +            virReportOOMError();
 +            goto cleanup;
          }
      }
  
 
 --
 libvir-list mailing list
 libvir-list@redhat.com
 https://www.redhat.com/mailman/listinfo/libvir-list
 



[libvirt] [PATCH v2] qemuOpenVhostNet: Decrease vhostfdSize on open failure

2013-05-27 Thread Michal Privoznik
Currently, if there's an error opening /dev/vhost-net (e.g. because
it doesn't exist) but it's not required we proceed with vhostfd array
filled with -1 and vhostfdSize unchanged. Later, when constructing
the qemu command line only non-negative items within vhostfd array
are taken into account. This means, vhostfdSize may be greater than
the actual count of non-negative items in vhostfd array. This results
in improper command line arguments being generated, e.g.:

-netdev tap,fd=21,id=hostnet0,vhost=on,vhostfd=(null)
---
 src/qemu/qemu_command.c | 14 --
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 0373626..c4a162a 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -486,6 +486,10 @@ qemuOpenVhostNet(virDomainDefPtr def,
                                 "but is unavailable"));
                goto error;
            }
+            VIR_WARN("Unable to open vhost-net. Opened so far %d, requested %d",
+                     i, *vhostfdSize);
+            *vhostfdSize = i;
+            break;
        }
    }
    virDomainAuditNetDevice(def, net, "/dev/vhost-net", *vhostfdSize);
@@ -6560,12 +6564,10 @@ qemuBuildInterfaceCommandLine(virCommandPtr cmd,
 }
 
     for (i = 0; i < vhostfdSize; i++) {
-        if (vhostfd[i] >= 0) {
-            virCommandTransferFD(cmd, vhostfd[i]);
-            if (virAsprintf(&vhostfdName[i], "%d", vhostfd[i]) < 0) {
-                virReportOOMError();
-                goto cleanup;
-            }
+        virCommandTransferFD(cmd, vhostfd[i]);
+        if (virAsprintf(&vhostfdName[i], "%d", vhostfd[i]) < 0) {
+            virReportOOMError();
+            goto cleanup;
         }
     }
 
-- 
1.8.2.1



[libvirt] [test-API][PATCH] Add 2 emulatorpin cases cover config and live flags

2013-05-27 Thread Wayne Sun
- use pinEmulator to pin domain emulator to host cpu
- 2 cases cover config and live flags
- cpulist with '^', '-' and ',' is supported to give multiple
  host cpus
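The cpulist syntax mentioned above (ranges with '-', exclusions with '^', items separated by ',') can be illustrated with a simplified parser. The real helper is utils.param_to_tuple in the test-API tree; this stand-alone sketch only approximates its behavior:

```python
def parse_cpulist(cpulist, maxcpu):
    # Simplified take on utils.param_to_tuple: ranges with '-',
    # exclusions with '^', items separated by ','; returns a tuple of
    # booleans of length maxcpu, or None on invalid input.
    cpumap = [False] * maxcpu
    for item in cpulist.split(','):
        item = item.strip()
        negate = item.startswith('^')
        if negate:
            item = item[1:]
        if '-' in item:
            start, end = (int(x) for x in item.split('-'))
            cpus = range(start, end + 1)
        else:
            cpus = [int(item)]
        for cpu in cpus:
            if cpu >= maxcpu:
                return None
            cpumap[cpu] = not negate
    return tuple(cpumap)

print(parse_cpulist('0-3,^2', 4))  # (True, True, False, True)
```

Items are processed left to right, so a later '^2' overrides an earlier '0-3'; the real helper may differ in edge cases.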

Related bug 916493 ("pinEmulator and emulatorPinInfo should be simple
with required params",
https://bugzilla.redhat.com/show_bug.cgi?id=916493)
is fixed, so the test can run successfully now.

Signed-off-by: Wayne Sun g...@redhat.com
---
 repos/setVcpus/emulatorpin_config.py | 97 +++
 repos/setVcpus/emulatorpin_live.py   | 98 
 2 files changed, 195 insertions(+)
 create mode 100644 repos/setVcpus/emulatorpin_config.py
 create mode 100644 repos/setVcpus/emulatorpin_live.py

diff --git a/repos/setVcpus/emulatorpin_config.py 
b/repos/setVcpus/emulatorpin_config.py
new file mode 100644
index 000..9b94f98
--- /dev/null
+++ b/repos/setVcpus/emulatorpin_config.py
@@ -0,0 +1,97 @@
+#!/usr/bin/env python
+# Test domain emulator pin with flag VIR_DOMAIN_AFFECT_CONFIG, check
+# domain config xml with emulatorpin configuration.
+
+import re
+from xml.dom import minidom
+
+import libvirt
+from libvirt import libvirtError
+
+from src import sharedmod
+from utils import utils
+
+required_params = ('guestname', 'cpulist',)
+optional_params = {}
+
+def emulatorpin_check(domobj, cpumap):
+    """check domain config xml with emulatorpin element"""
+
+    guestxml = domobj.XMLDesc(2)
+    logger.debug("domain %s xml :\n%s" % (domobj.name(), guestxml))
+
+    doc = minidom.parseString(guestxml)
+    emulatorpin = doc.getElementsByTagName('emulatorpin')
+    if not emulatorpin:
+        logger.error("no emulatorpin element in domain xml")
+        return 1
+
+    if not emulatorpin[0].hasAttribute('cpuset'):
+        logger.error("no cpuset attribute with emulatorpin in domain xml")
+        return 1
+    else:
+        emulator_attr = emulatorpin[0].getAttributeNode('cpuset')
+        cpulist = emulator_attr.nodeValue
+        cpumap_tmp = utils.param_to_tuple(cpulist, maxcpu)
+
+    if cpumap_tmp == cpumap:
+        logger.info("cpuset is as expected in domain xml")
+        return 0
+    else:
+        logger.error("cpuset is not as expected in domain xml")
+        return 1
+
+def emulatorpin_config(params):
+    """pin domain emulator to host cpu with config flag"""
+
+    global logger
+    logger = params['logger']
+    params.pop('logger')
+    guestname = params['guestname']
+    cpulist = params['cpulist']
+
+    logger.info("the name of virtual machine is %s" % guestname)
+    logger.info("the given cpulist is %s" % cpulist)
+
+    global maxcpu
+    maxcpu = utils.get_host_cpus()
+    logger.info("%s physical cpu on host" % maxcpu)
+
+    conn = sharedmod.libvirtobj['conn']
+
+    try:
+        domobj = conn.lookupByName(guestname)
+        cpumap = utils.param_to_tuple(cpulist, maxcpu)
+
+        if not cpumap:
+            logger.error("cpulist: Invalid format")
+            return 1
+
+        logger.debug("cpumap for emulator pin is:")
+        logger.debug(cpumap)
+
+        logger.info("pin domain emulator to host cpulist %s with flag: %s" %
+                    (cpulist, libvirt.VIR_DOMAIN_AFFECT_CONFIG))
+        domobj.pinEmulator(cpumap, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
+
+        logger.info("check emulator pin info")
+        ret = domobj.emulatorPinInfo(libvirt.VIR_DOMAIN_AFFECT_CONFIG)
+        logger.debug("emulator pin info is:")
+        logger.debug(ret)
+        if ret == cpumap:
+            logger.info("emulator pin info is expected")
+        else:
+            logger.error("emulator pin info is not expected")
+            return 1
+    except libvirtError, e:
+        logger.error("libvirt call failed: " + str(e))
+        return 1
+
+    logger.info("check domain emulatorpin configuration in xml")
+    ret = emulatorpin_check(domobj, cpumap)
+    if ret:
+        logger.error("domain emulator pin check failed")
+        return 1
+    else:
+        logger.info("domain emulator pin check succeed")
+        return 0
diff --git a/repos/setVcpus/emulatorpin_live.py 
b/repos/setVcpus/emulatorpin_live.py
new file mode 100644
index 000..08b7073
--- /dev/null
+++ b/repos/setVcpus/emulatorpin_live.py
@@ -0,0 +1,98 @@
+#!/usr/bin/env python
+# Test domain emulator pin with flag VIR_DOMAIN_AFFECT_LIVE, check
+# emulator process status under domain task list on host.
+
+import re
+
+import libvirt
+from libvirt import libvirtError
+
+from src import sharedmod
+from utils import utils
+
+required_params = ('guestname', 'cpulist',)
+optional_params = {}
+
+def emulatorpin_check(guestname, cpumap):
+    """check emulator process status of the running virtual machine
+       grep Cpus_allowed_list /proc/PID/status
+    """
+
+    tmp_str = ''
+    cmd = "cat /var/run/libvirt/qemu/%s.pid" % guestname
+    status, pid = utils.exec_cmd(cmd, shell=True)
+    if status:
+        logger.error("failed to get the pid of domain %s" % guestname)
+        return 1
+
+    cmd = grep Cpus_allowed_list 

Re: [libvirt] two hostdev devices problem

2013-05-27 Thread Dominik Mostowiec
Hi,
Maybe that's the problem?:
May 27 09:13:44 on-10-177-32-62 kernel: [140204.533733] ixgbe
:01:00.0 eth0: VF Reset msg received from vf 13
May 27 09:13:44 on-10-177-32-62 kernel: [140204.533970] ixgbe
:01:00.0: VF 13 has no MAC address assigned, you may have to
assign one manually
May 27 09:13:44 on-10-177-32-62 kernel: [140204.552220] ixgbe
:01:00.1 eth1: VF Reset msg received from vf 13
May 27 09:13:44 on-10-177-32-62 kernel: [140204.552228] ixgbe
:01:00.1: VF 13 has no MAC address assigned, you may have to
assign one manually
May 27 09:13:45 on-10-177-32-62 kernel: [140205.674551] ixgbe
:01:00.1 eth1: VF Reset msg received from vf 12
May 27 09:13:45 on-10-177-32-62 kernel: [140205.694783] pci-stub
:01:13.1: irq 288 for MSI/MSI-X
May 27 09:13:45 on-10-177-32-62 kernel: [140205.726747] pci-stub
:01:13.1: irq 288 for MSI/MSI-X
May 27 09:13:45 on-10-177-32-62 kernel: [140205.726766] pci-stub
:01:13.1: irq 289 for MSI/MSI-X
May 27 09:13:45 on-10-177-32-62 kernel: [140205.786692] pci-stub
:01:13.1: irq 288 for MSI/MSI-X
May 27 09:13:45 on-10-177-32-62 kernel: [140205.786702] pci-stub
:01:13.1: irq 289 for MSI/MSI-X
May 27 09:13:45 on-10-177-32-62 kernel: [140205.786710] pci-stub
:01:13.1: irq 290 for MSI/MSI-X
May 27 09:13:45 on-10-177-32-62 kernel: [140206.254300] ixgbe
:01:00.0 eth0: VF Reset msg received from vf 13
May 27 09:13:45 on-10-177-32-62 kernel: [140206.286346] pci-stub
:01:13.2: irq 407 for MSI/MSI-X
May 27 09:13:45 on-10-177-32-62 kernel: [140206.306280] pci-stub
:01:13.2: irq 407 for MSI/MSI-X
May 27 09:13:45 on-10-177-32-62 kernel: [140206.306293] pci-stub
:01:13.2: irq 408 for MSI/MSI-X
May 27 09:13:45 on-10-177-32-62 kernel: [140206.354379] pci-stub
:01:13.2: irq 407 for MSI/MSI-X
May 27 09:13:45 on-10-177-32-62 kernel: [140206.354393] pci-stub
:01:13.2: irq 408 for MSI/MSI-X
May 27 09:13:45 on-10-177-32-62 kernel: [140206.354403] pci-stub
:01:13.2: irq 409 for MSI/MSI-X
May 27 09:13:49 on-10-177-32-62 kernel: [140210.174615] ixgbe
:01:00.1 eth1: VF Reset msg received from vf 13
May 27 09:13:49 on-10-177-32-62 kernel: [140210.207755] pci-stub
:01:13.3: irq 410 for MSI/MSI-X
May 27 09:13:49 on-10-177-32-62 kernel: [140210.251553] pci-stub
:01:13.3: irq 410 for MSI/MSI-X
May 27 09:13:49 on-10-177-32-62 kernel: [140210.251568] pci-stub
:01:13.3: irq 411 for MSI/MSI-X
May 27 09:13:49 on-10-177-32-62 kernel: [140210.331491] pci-stub
:01:13.3: irq 410 for MSI/MSI-X
May 27 09:13:49 on-10-177-32-62 kernel: [140210.331506] pci-stub
:01:13.3: irq 411 for MSI/MSI-X
May 27 09:13:49 on-10-177-32-62 kernel: [140210.331518] pci-stub
:01:13.3: irq 412 for MSI/MSI-X


Regards
Dominik

2013/5/26 Dominik Mostowiec dominikmostow...@gmail.com:
 Hi,

 First - I just got confirmation from the libnl maintainer that the bug I
 mentioned causing problems with max_vfs > 50 *is* an issue in libnl
 3.2.16 - it is fixed in libnl 3.2.22. Try to upgrade to that version and
 it should solve at least some of your problems.

 Upgrade libnl to version 3.2.22 helps.
 Thanks.

 1) when you have max_vfs = 10, you can assign the devices and mac
 addresses are set properly, i.e. everything works.
 2) when you have max_vfs = 35, you can assign the devices, but the mac
 addresses are not set properly.
 3) when you have max_vfs = 63, you can't assign the devices because you
 get this error:

 internal error missing IFLA_VF_INFO in netlink response

 Is this correct?

 Yes.
 Before upgrade libnl :-)
 Upgrade helps only with the max_vfs limit.

 (BTW, it may get a bit confusing if you leave your networks named as
 vnet0 and vnet1 - although it's a different namespace, libvirt uses
 vnetN (where n is 0 - whatever) as the name for the tap device
 associated with each guest interface).

 You're right.
 I changed this.

 the output of ip link show dev $PF.
 Hmm,
 ip link show dev eth0
 Shows only:
 2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq
 master bond0 state UP qlen 1000
 link/ether b8:ca:3a:5b:a6:3a brd ff:ff:ff:ff:ff:ff

 I see VFs as network interfaces eth2-eth127.
 When a VF is attached to a VM it disappears from the list on the host.

 --
 Regards
 Dominik


 2013/5/24 Laine Stump la...@laine.org:
 On 05/24/2013 09:16 AM, Dominik Mostowiec wrote:

 Do you have this problem with mac addresses when max_vfs is set to an
 even lower number?


 First - I just got confirmation from the libnl maintainer that the bug I
 mentioned causing problems with max_vfs > 50 *is* an issue in libnl
 3.2.16 - it is fixed in libnl 3.2.22. Try to upgrade to that version and
 it should solve at least some of your problems.



 No,
 When max_vfs=10, macs in VM are OK.


 Interesting.

 So now I'm curious about something - can you try adding a vlan tag to
 your networks and see if the vlan tag is set properly when max_vfs is a
 low number (10 or 7):

 <network>
   <name>vnet0</name>
   <uuid>ec49896a-a0b5-4944-a81f-9f0cdf578871</uuid>
   <vlan>
     <tag 

[libvirt] [PATCH] qemu: Fix build without gnutls

2013-05-27 Thread Jiri Denemark
error label in qemuMigrationCookieGraphicsAlloc is now used
unconditionally thanks to VIR_STRDUP.
---

Pushed as a build-breaker.

 src/qemu/qemu_migration.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 73ced73..19b1236 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -324,9 +324,7 @@ qemuMigrationCookieGraphicsAlloc(virQEMUDriverPtr driver,
 
 no_memory:
 virReportOOMError();
-#ifdef WITH_GNUTLS
 error:
-#endif
 qemuMigrationCookieGraphicsFree(mig);
 virObjectUnref(cfg);
 return NULL;
-- 
1.8.2.1



Re: [libvirt] Issues with qemu-nbd over AF_UNIX and virDomainAttachDevice

2013-05-27 Thread Paolo Bonzini
On 26/05/2013 15:54, Deepak C Shetty wrote:
 3) In nbdxml, I tried removing port=.. it didn't give any error
 with regards to port being mandatory.. which is good.. but the attach
 device still gives the same error as above
 
 Is the `nbdxml` I am using for attaching a qemu-nbd exported drive,
 correct ?

What version of QEMU is this?

Can you search the logs for the QMP commands related to the hotplug?

Paolo



Re: [libvirt] [Qemu-devel] [PATCH v3 2/2] net: introduce command to query rx-filter information

2013-05-27 Thread Stefan Hajnoczi
On Sun, May 26, 2013 at 10:38:14AM +0300, Michael S. Tsirkin wrote:
 On Fri, May 24, 2013 at 04:00:42PM -0400, Luiz Capitulino wrote:
  On Fri, 24 May 2013 12:05:12 -0600
  Eric Blake ebl...@redhat.com wrote:
  
   On 05/24/2013 10:12 AM, Michael S. Tsirkin wrote:
   
 Event message contains the net client name, management might only want
 to query the single net client.
   
The client can do the filtering itself.
   
   
I'm not sure I buy the responsiveness argument.  Sure, the fastest I/O
is no I/O, but whether you read and parse 100 bytes or 1000 from a Unix
domain socket once in a great while shouldn't make a difference.
   
   And the time spent malloc'ing the larger message to send from qemu, as
   well as the time spent malloc'ing the libvirt side that parses the qemu
   string into C code for use, and the time spent strcmp'ing every entry to
   find the right one...
   
   It really IS more efficient to filter as low down in the stack as
   possible, once it is determined that filtering is desirable.
   
   Whether filtering makes a difference in performance is a different
   question - you may be right that always returning the entire list and
   making libvirt do its own filtering will still not add any more
   noticeable delay compared to libvirt doing a filtered query, if the
   bottleneck lies elsewhere (such as libvirt telling macvtap its new
   configration).
   
   
My main concern is to keep the external interface simple.  I'm rather
reluctant to have query commands grow options.
   
 In a case where we need the "give me everything" query anyway, the
 "give me this particular part" option is additional complexity.  Needs
 justification, say arguments involving throughput, latency or client
 complexity.
   
 Perhaps cases exist where we never want to ask for everything.  Then
 the "give me everything" query is useless, and the option should be
 mandatory.
   
   For this _particular_ interface, I'm not sure whether libvirt will ever
   use an unfiltered query -
  
  If having the argument is useful for libvirt, then it's fine to have it.
  
  But I'd be very reluctant to buy any performance argument w/o real
  numbers to back them up.
 
 Me too. I think it's more convenience than performance.

Agreed.  I suggested filtering on a NIC for usability rather than
performance reasons.

QMP should be easy to use.  Requiring every client to fish for the right
NIC in a bunch of output that gets discarded is not convenient.

Stefan
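The two interface shapes being debated can be illustrated with a toy Python sketch (not the QMP implementation; the dict fields are made up for the example). With no name this is the "give me everything" query; with a name the filtering happens at the source, so the client does not fish through output it will discard:

```python
def query_rx_filter(net_clients, name=None):
    # name=None: return everything, client filters for itself.
    # name given: filter server-side, the usability win Stefan argues for.
    if name is None:
        return list(net_clients)
    return [c for c in net_clients if c.get('name') == name]

clients = [{'name': 'net0', 'promiscuous': False},
           {'name': 'net1', 'promiscuous': True}]
print(query_rx_filter(clients, 'net1'))  # [{'name': 'net1', 'promiscuous': True}]
```

Either shape returns the same data for a single NIC; the argument is about where the selection logic lives, not about what is returned.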



[libvirt] [PATCH] libvirt supports Guest Panicked

2013-05-27 Thread Chen Fan
From: chenfan chen.fan.f...@cn.fujitsu.com

This patch implements support for the guest panicked event in the qemu driver
and changes the 'on_crash' default value to 'preserve'.
---
 examples/domain-events/events-c/event-test.c | 10 +++
 include/libvirt/libvirt.h.in | 16 +
 src/conf/domain_conf.c   | 14 ++--
 src/qemu/qemu_monitor.c  | 14 +++-
 src/qemu/qemu_monitor.h  |  5 ++
 src/qemu/qemu_monitor_json.c |  7 ++
 src/qemu/qemu_process.c  | 99 +++-
 tools/virsh-domain-monitor.c |  8 +++
 8 files changed, 166 insertions(+), 7 deletions(-)

diff --git a/examples/domain-events/events-c/event-test.c 
b/examples/domain-events/events-c/event-test.c
index eeff50f..1b425fb 100644
--- a/examples/domain-events/events-c/event-test.c
+++ b/examples/domain-events/events-c/event-test.c
@@ -93,6 +93,9 @@ const char *eventToString(int event) {
     case VIR_DOMAIN_EVENT_PMSUSPENDED:
         ret = "PMSuspended";
         break;
+    case VIR_DOMAIN_EVENT_CRASHED:
+        ret = "Crashed";
+        break;
     }
     return ret;
 }
@@ -209,6 +212,13 @@ static const char *eventDetailToString(int event, int detail) {
             break;
         }
         break;
+    case VIR_DOMAIN_EVENT_CRASHED:
+        switch ((virDomainEventCrashedDetailType) detail) {
+        case VIR_DOMAIN_EVENT_CRASHED_PANICKED:
+            ret = "Panicked";
+            break;
+        }
+        break;
     }
     return ret;
 }
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 1804c93..56c6c5c 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -155,6 +155,7 @@ typedef enum {
     VIR_DOMAIN_RUNNING_SAVE_CANCELED = 7,   /* returned from failed save process */
     VIR_DOMAIN_RUNNING_WAKEUP = 8,  /* returned from pmsuspended due to
                                        wakeup event */
+    VIR_DOMAIN_RUNNING_CRASHED = 9, /* resumed from crashed */
 
 #ifdef VIR_ENUM_SENTINELS
 VIR_DOMAIN_RUNNING_LAST
@@ -180,6 +181,7 @@ typedef enum {
     VIR_DOMAIN_PAUSED_FROM_SNAPSHOT = 7, /* paused after restoring from snapshot */
     VIR_DOMAIN_PAUSED_SHUTTING_DOWN = 8, /* paused during shutdown process */
     VIR_DOMAIN_PAUSED_SNAPSHOT = 9,  /* paused while creating a snapshot */
+    VIR_DOMAIN_PAUSED_GUEST_PANICKED = 10, /* paused due to a guest panicked event */
 
 #ifdef VIR_ENUM_SENTINELS
 VIR_DOMAIN_PAUSED_LAST
@@ -189,6 +191,7 @@ typedef enum {
 typedef enum {
 VIR_DOMAIN_SHUTDOWN_UNKNOWN = 0,/* the reason is unknown */
 VIR_DOMAIN_SHUTDOWN_USER = 1,   /* shutting down on user request */
+VIR_DOMAIN_SHUTDOWN_CRASHED = 2,/* domain crashed */
 
 #ifdef VIR_ENUM_SENTINELS
 VIR_DOMAIN_SHUTDOWN_LAST
@@ -212,6 +215,7 @@ typedef enum {
 
 typedef enum {
 VIR_DOMAIN_CRASHED_UNKNOWN = 0, /* crashed for unknown reason */
+VIR_DOMAIN_CRASHED_PANICKED = 1, /* domain panicked */
 
 #ifdef VIR_ENUM_SENTINELS
 VIR_DOMAIN_CRASHED_LAST
@@ -3319,6 +3323,7 @@ typedef enum {
 VIR_DOMAIN_EVENT_STOPPED = 5,
 VIR_DOMAIN_EVENT_SHUTDOWN = 6,
 VIR_DOMAIN_EVENT_PMSUSPENDED = 7,
+VIR_DOMAIN_EVENT_CRASHED = 8,
 
 #ifdef VIR_ENUM_SENTINELS
 VIR_DOMAIN_EVENT_LAST
@@ -3450,6 +3455,17 @@ typedef enum {
 #endif
 } virDomainEventPMSuspendedDetailType;
 
+/*
+ * Details about the 'crashed' lifecycle event
+ */
+typedef enum {
+VIR_DOMAIN_EVENT_CRASHED_PANICKED = 0, /* Guest was panicked */
+
+#ifdef VIR_ENUM_SENTINELS
+VIR_DOMAIN_EVENT_CRASHED_LAST
+#endif
+} virDomainEventCrashedDetailType;
+
 /**
  * virConnectDomainEventCallback:
  * @conn: virConnect connection
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index a9656af..3f0786e 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -642,7 +642,8 @@ VIR_ENUM_IMPL(virDomainRunningReason, VIR_DOMAIN_RUNNING_LAST,
               "unpaused",
               "migration canceled",
               "save canceled",
-              "wakeup")
+              "wakeup",
+              "from crashed")
 
 VIR_ENUM_IMPL(virDomainBlockedReason, VIR_DOMAIN_BLOCKED_LAST,
               "unknown")
@@ -657,11 +658,13 @@ VIR_ENUM_IMPL(virDomainPausedReason, VIR_DOMAIN_PAUSED_LAST,
               "watchdog",
               "from snapshot",
               "shutdown",
-              "snapshot")
+              "snapshot",
+              "guest panicked")
 
 VIR_ENUM_IMPL(virDomainShutdownReason, VIR_DOMAIN_SHUTDOWN_LAST,
               "unknown",
-              "user")
+              "user",
+              "crashed")
 
 VIR_ENUM_IMPL(virDomainShutoffReason, VIR_DOMAIN_SHUTOFF_LAST,
               "unknown",
@@ -674,7 +677,8 @@ VIR_ENUM_IMPL(virDomainShutoffReason, VIR_DOMAIN_SHUTOFF_LAST,
               "from snapshot")
 
 VIR_ENUM_IMPL(virDomainCrashedReason, VIR_DOMAIN_CRASHED_LAST,
-              

Re: [libvirt] Issues with qemu-nbd over AF_UNIX and virDomainAttachDevice

2013-05-27 Thread Deepak C Shetty

On 05/27/2013 02:00 PM, Paolo Bonzini wrote:

On 26/05/2013 15:54, Deepak C Shetty wrote:

3) In nbdxml, I tried removing port=.. it didn't give any error
with regards to port being mandatory.. which is good.. but the attach
device still gives the same error as above

Is the `nbdxml` I am using for attaching a qemu-nbd exported drive,
correct ?

What version of QEMU is this?

Can you search the logs for the QMP commands related to the hotplug?


Do you mean starting my domain using the -d option for qemu, which dumps the 
log in /tmp/qemu.log?
I am using a VM started from virt-manager; I don't see a way to pass 
-d to it from virt-manager... I can try using a hand-coded qemu cmdline tho'


I assume that when I am using python (import libvirt) and/or virsh it uses 
qemu-kvm

qemu-kvm version is 1.0.1

Do i need to try with latest qemu git ?

On irc you asked me to check if cold-plug works, but attach-device is 
for an active domain only... am I missing something here?


thanx,
deepak



Re: [libvirt] Issues with qemu-nbd over AF_UNIX and virDomainAttachDevice

2013-05-27 Thread Paolo Bonzini
On 27/05/2013 12:46, Deepak C Shetty wrote:
 On 05/27/2013 02:00 PM, Paolo Bonzini wrote:
  On 26/05/2013 15:54, Deepak C Shetty wrote:
  3) In nbdxml, I tried removing port=.. it didn't give any error
  with regards to port being mandatory.. which is good.. but the attach
  device still gives the same error as above
 
  Is the `nbdxml` I am using for attaching a qemu-nbd exported drive,
  correct ?
 What version of QEMU is this?

 Can you search the logs for the QMP commands related to the hotplug?
 
 Do you mean starting my domain using the -d option for qemu, which dumps the
 log in /tmp/qemu.log?

I mean libvirt.log.  You can start libvirtd with these environment
variables:

LIBVIRT_DEBUG=debug LIBVIRT_LOG_OUTPUTS=1:file:/tmp/libvirt.log


 I am using a VM started from virt-manager i don't see a way to pass
 -d to it from virt-manager... I can try using a hand-coded qemu cmdline
 tho'
 
 I assume when i am using python (import libvirt) and/or virsh.. it uses
 qemu-kvm
 qemu-kvm version is 1.0.1

Ok.

 Do i need to try with latest qemu git ?

No, I don't think so, but you can try (1.4.0 or 1.5.0 will do).

 On irc you asked me to check if cold-plug works, but attach-device is
 for an active domain only... am I missing something here?

You can try putting the disk element in the domain XML and starting the VM.
In this case the logs in /var/log/libvirt/qemu will be helpful, because
they contain the command line that is used to start QEMU.

Paolo



Re: [libvirt] [PATCH] libvirt supports Guest Panicked

2013-05-27 Thread Viktor Mihajlovski

On 05/27/2013 10:41 AM, Chen Fan wrote:

From: chenfan chen.fan.f...@cn.fujitsu.com

This patch implements support for the guest panicked event in the qemu driver
and changes the 'on_crash' default value to 'preserve'.

It's not good to change the default behavior; if someone wants
preserve, make them specify it in the XML.

---
  examples/domain-events/events-c/event-test.c | 10 +++
  include/libvirt/libvirt.h.in | 16 +
  src/conf/domain_conf.c   | 14 ++--
  src/qemu/qemu_monitor.c  | 14 +++-
  src/qemu/qemu_monitor.h  |  5 ++
  src/qemu/qemu_monitor_json.c |  7 ++
  src/qemu/qemu_process.c  | 99 +++-
  tools/virsh-domain-monitor.c |  8 +++
  8 files changed, 166 insertions(+), 7 deletions(-)


not an issue, since it's a fairly small patch, but one could imagine
splitting it up into a 3-part series: general libvirt, qemu and virsh.

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
@@ -10943,7 +10947,7 @@ virDomainDefParseXML(xmlDocPtr xml,
     if (virDomainEventActionParseXML(ctxt, "on_crash",
                                      "string(./on_crash[1])",
                                      &def->onCrash,
-                                     VIR_DOMAIN_LIFECYCLE_CRASH_DESTROY,
+                                     VIR_DOMAIN_LIFECYCLE_CRASH_PRESERVE,
                                      virDomainLifecycleCrashTypeFromString) < 0)
        goto error;


see above, it's not good to change defaults, plus make check fails
with this hunk applied.

Other than that looks good to me (Tested on s390x with destroy, reboot
and preserve). Looking forward to see this in libvirt.

--

Mit freundlichen Grüßen/Kind Regards
   Viktor Mihajlovski

IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Martina Köderitz
Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294



Re: [libvirt] [PATCH] libvirt supports Guest Panicked

2013-05-27 Thread Viktor Mihajlovski

On 05/27/2013 10:41 AM, Chen Fan wrote:

From: chenfan chen.fan.f...@cn.fujitsu.com
@@ -3185,6 +3194,9 @@ int qemuMonitorVMStatusToPausedReason(const char *status)
     case QEMU_MONITOR_VM_STATUS_WATCHDOG:
         return VIR_DOMAIN_PAUSED_WATCHDOG;
 
+    case QEMU_MONITOR_VM_STATUS_GUEST_PANICKED:
+        return VIR_DOMAIN_PAUSED_GUEST_PANICKED;
+
     /* unreachable from this point on */
     case QEMU_MONITOR_VM_STATUS_LAST:
         ;

I just crashed the guest while libvirtd wasn't running;
after a restart, virsh domstate --reason always reports
paused (guest panicked) while I would expect crashed (guest panicked).
Either QEMU is reporting a bogus state or the state computation
is flawed somewhere...




[libvirt] fix change-media bug on disk block type and support volume type

2013-05-27 Thread Guannan Ren

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=923053
When the cdrom is of block type, virsh change-media fails to insert the
source info because virsh uses <source block='/dev/sdb'/> while the
correct name of the attribute for block disks is 'dev'.

Correct XML:
<disk type='block' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/sdb'/>
  <target dev='vdb' bus='virtio'/>
  <readonly/>
</disk>

And, this patch supports cdrom with volume type for change-media command
For example:

'/var/lib/libvirt/images/boot.iso' is a volume path of 'boot.iso' volume
on 'default' pool and the cdrom device has no source.

Virsh command:
 virsh change-media rhel6qcow2 vdb /var/lib/libvirt/images/boot.iso --insert

The updated disk XML:
<disk type='volume' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source pool='default' volume='boot.iso'/>
  <target dev='vdb' bus='virtio'/>
  <readonly/>
</disk>
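The per-disk-type attribute choice described above can be sketched with Python's minidom (illustrative only; the actual patch builds the node in C with libxml2, and the fallback branch mirrors virsh's old behavior of naming the attribute after the disk type):

```python
from xml.dom import minidom

def make_source_node(doc, disk_type, source, pool=None, volume=None):
    # Build the <source> element the way the patch chooses attributes:
    # block disks use dev=..., volume disks use pool=/volume=, anything
    # else falls back to an attribute named after the disk type.
    node = doc.createElement('source')
    if disk_type == 'volume':
        node.setAttribute('pool', pool)
        node.setAttribute('volume', volume)
    elif disk_type == 'block':
        node.setAttribute('dev', source)
    else:
        node.setAttribute(disk_type, source)
    return node

doc = minidom.Document()
print(make_source_node(doc, 'block', '/dev/sdb').toxml())  # <source dev="/dev/sdb"/>
```

For a volume disk the same helper would emit `<source pool="default" volume="boot.iso"/>`, matching the updated XML shown above.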

Guannan Ren (3):
 [PATCH 1/3] qemu: throw original error when failing to lookup pool or volume
 [PATCH 2/3] qemu: support updating pool and volume info when disk is volume 
type
 [PATCH 3/3] virsh: fix change-media bug on disk block type and support volume 
type

 src/conf/domain_conf.c   |  2 +-
 src/conf/domain_conf.h   |  1 +
 src/libvirt_private.syms |  1 +
 src/qemu/qemu_conf.c | 11 ++-
 src/qemu/qemu_driver.c   |  5 +
 tools/virsh-domain.c | 57 
+++--
 6 files changed, 69 insertions(+), 8 deletions(-)



[libvirt] [PATCH 1/3] qemu: throw original error when failing to lookup pool or volume

2013-05-27 Thread Guannan Ren
---
 src/qemu/qemu_conf.c | 11 ++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index 094f9f7..d616d7a 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -1466,6 +1466,7 @@ qemuTranslateDiskSourcePool(virConnectPtr conn,
 virStoragePoolPtr pool = NULL;
 virStorageVolPtr vol = NULL;
 virStorageVolInfo info;
+virErrorPtr orig_err = NULL;
 int ret = -1;
 
     if (def->type != VIR_DOMAIN_DISK_TYPE_VOLUME)
@@ -1475,7 +1476,7 @@ qemuTranslateDiskSourcePool(virConnectPtr conn,
 return 0;
 
     if (!(pool = virStoragePoolLookupByName(conn, def->srcpool->pool)))
-return -1;
+goto cleanup;
 
     if (!(vol = virStorageVolLookupByName(pool, def->srcpool->volume)))
 goto cleanup;
@@ -1506,7 +1507,15 @@ qemuTranslateDiskSourcePool(virConnectPtr conn,
     def->srcpool->voltype = info.type;
 ret = 0;
 cleanup:
+    if (ret < 0 && !orig_err)
+        orig_err = virSaveLastError();
+
     virStoragePoolFree(pool);
     virStorageVolFree(vol);
+
+    if (orig_err) {
+        virSetError(orig_err);
+        virFreeError(orig_err);
+    }
 return ret;
 }
-- 
1.8.1.4
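The save/restore pattern in this patch (keep the first error so cleanup calls cannot overwrite it) can be mimicked in Python with a stand-in for libvirt's per-thread last error; the names and messages here are illustrative, not libvirt API:

```python
last_error = None  # stand-in for libvirt's per-thread "last error"

def set_error(msg):
    global last_error
    last_error = msg

def cleanup_pool():
    # cleanup paths (virStoragePoolFree etc.) can raise their own errors
    # and clobber the last error the caller actually cares about
    set_error("pool free failed")

def translate(pool_exists):
    # On failure, save the original error before cleanup and restore it
    # afterwards, mirroring virSaveLastError()/virSetError() in the patch.
    global last_error
    ret = -1
    orig_err = None
    if pool_exists:
        ret = 0
    else:
        set_error("no pool with matching name")
    # cleanup:
    if ret < 0 and orig_err is None:
        orig_err = last_error          # virSaveLastError()
    cleanup_pool()                     # may overwrite last_error
    if orig_err is not None:
        last_error = orig_err          # virSetError()
    return ret

print(translate(False), last_error)  # -1 no pool with matching name
```

Without the save/restore, the caller would see the cleanup error ("pool free failed") instead of the root cause, which is exactly the problem the patch title describes.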



[libvirt] [PATCH 3/3] virsh: fix change-media bug on disk block type and support volume type

2013-05-27 Thread Guannan Ren
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=923053
When the cdrom is of block type, virsh change-media fails to insert the
source info because virsh uses <source block='/dev/sdb'/> while the
correct name of the attribute for block disks is 'dev'.

Correct XML:
<disk type='block' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/sdb'/>
  <target dev='vdb' bus='virtio'/>
  <readonly/>
</disk>

And, this patch supports cdrom with volume type for change-media command
For example:
'/var/lib/libvirt/images/boot.iso' is a volume path of 'boot.iso' volume
on 'default' pool

Virsh command:
 virsh change-media rhel6qcow2 vdb /var/lib/libvirt/images/boot.iso

The updated disk XML:
<disk type='volume' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source pool='default' volume='boot.iso'/>
  <target dev='vdb' bus='virtio'/>
  <readonly/>
</disk>
---
 tools/virsh-domain.c | 57 ++--
 1 file changed, 51 insertions(+), 6 deletions(-)

diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 0402aef..e096270 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -9594,13 +9594,58 @@ typedef enum {
 VSH_PREPARE_DISK_XML_UPDATE,
 } vshPrepareDiskXMLType;
 
+static xmlNodePtr
+vshPrepareDiskXMLNode(virConnectPtr conn,
+  const char *disk_type,
+  const char *source)
+{
+xmlNodePtr node;
+node = xmlNewNode(NULL, BAD_CAST "source");
+
+if (STREQ(disk_type, "volume")) {
+virStoragePoolPtr pool;
+virStorageVolPtr vol;
+vol = virStorageVolLookupByPath(conn, source);
+
+if (vol == NULL) {
+vshError(NULL, _("failed to get vol '%s'"), source);
+goto error;
+}
+
+pool = virStoragePoolLookupByVolume(vol);
+if (pool == NULL) {
+vshError(NULL, "%s", _("failed to get parent pool"));
+virStorageVolFree(vol);
+goto error;
+}
+
+xmlNewProp(node, BAD_CAST "pool", BAD_CAST virStoragePoolGetName(pool));
+xmlNewProp(node, BAD_CAST "volume", BAD_CAST virStorageVolGetName(vol));
+
+virStoragePoolFree(pool);
+virStorageVolFree(vol);
+
+} else if (STREQ(disk_type, "block")) {
+xmlNewProp(node, BAD_CAST "dev", BAD_CAST source);
+} else {
+xmlNewProp(node, BAD_CAST disk_type, BAD_CAST source);
+}
+
+return node;
+
+error:
+xmlFreeNode(node);
+return NULL;
+}
+
 /* Helper function to prepare disk XML. Could be used for disk
 * detaching, media changing (ejecting, inserting, updating)
  * for changeable disk. Returns the processed XML as string on
  * success, or NULL on failure. Caller must free the result.
  */
 static char *
-vshPrepareDiskXML(xmlNodePtr disk_node,
+vshPrepareDiskXML(vshControl *ctl,
+  xmlNodePtr disk_node,
   const char *source,
   const char *path,
   int type)
@@ -9646,9 +9691,9 @@ vshPrepareDiskXML(xmlNodePtr disk_node,
 }
 
 if (source) {
-new_node = xmlNewNode(NULL, BAD_CAST source);
-xmlNewProp(new_node, (const xmlChar *)disk_type,
-   (const xmlChar *)source);
+new_node = vshPrepareDiskXMLNode(ctl->conn, disk_type, source);
+if (new_node == NULL)
+goto error;
 xmlAddChild(disk_node, new_node);
 } else if (type == VSH_PREPARE_DISK_XML_INSERT) {
 vshError(NULL, _("No source is specified for inserting media"));
@@ -9793,7 +9838,7 @@ cmdDetachDisk(vshControl *ctl, const vshCmd *cmd)
 if (!(disk_node = vshFindDisk(doc, target, VSH_FIND_DISK_NORMAL)))
 goto cleanup;
 
-if (!(disk_xml = vshPrepareDiskXML(disk_node, NULL, NULL,
+if (!(disk_xml = vshPrepareDiskXML(ctl, disk_node, NULL, NULL,
VSH_PREPARE_DISK_XML_NONE)))
 goto cleanup;
 
@@ -10012,7 +10057,7 @@ cmdChangeMedia(vshControl *ctl, const vshCmd *cmd)
 if (!(disk_node = vshFindDisk(doc, path, VSH_FIND_DISK_CHANGEABLE)))
 goto cleanup;
 
-if (!(disk_xml = vshPrepareDiskXML(disk_node, source, path, prepare_type)))
+if (!(disk_xml = vshPrepareDiskXML(ctl, disk_node, source, path, prepare_type)))
 goto cleanup;
 
 if (virDomainUpdateDeviceFlags(dom, disk_xml, flags) != 0) {
-- 
1.8.1.4



[libvirt] [PATCH 2/3] qemu: support updating pool and volume info when disk is volume type

2013-05-27 Thread Guannan Ren
---
 src/conf/domain_conf.c   | 2 +-
 src/conf/domain_conf.h   | 1 +
 src/libvirt_private.syms | 1 +
 src/qemu/qemu_driver.c   | 5 +
 4 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index a9656af..776c1ed 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1152,7 +1152,7 @@ void virDomainLeaseDefFree(virDomainLeaseDefPtr def)
 VIR_FREE(def);
 }
 
-static void
+void
 virDomainDiskSourcePoolDefFree(virDomainDiskSourcePoolDefPtr def)
 {
 if (!def)
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 3a71d6c..ce8e744 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -2125,6 +2125,7 @@ bool virDomainObjTaint(virDomainObjPtr obj,
 void virDomainResourceDefFree(virDomainResourceDefPtr resource);
 void virDomainGraphicsDefFree(virDomainGraphicsDefPtr def);
 void virDomainInputDefFree(virDomainInputDefPtr def);
+void virDomainDiskSourcePoolDefFree(virDomainDiskSourcePoolDefPtr def);
 void virDomainDiskDefFree(virDomainDiskDefPtr def);
 void virDomainLeaseDefFree(virDomainLeaseDefPtr def);
 void virDomainDiskHostDefFree(virDomainDiskHostDefPtr def);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 9d5f74b..1e7e7e2 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -161,6 +161,7 @@ virDomainDiskProtocolTransportTypeToString;
 virDomainDiskProtocolTypeToString;
 virDomainDiskRemove;
 virDomainDiskRemoveByName;
+virDomainDiskSourcePoolDefFree;
 virDomainDiskTypeFromString;
 virDomainDiskTypeToString;
 virDomainEmulatorPinAdd;
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 4a76f14..7ee1c47 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -6322,6 +6322,11 @@ qemuDomainUpdateDeviceConfig(virQEMUCapsPtr qemuCaps,
 }
 if (disk->format)
 orig->format = disk->format;
+if (disk->srcpool) {
+virDomainDiskSourcePoolDefFree(orig->srcpool);
+orig->srcpool = disk->srcpool;
+disk->srcpool = NULL;
+}
 disk->src = NULL;
 break;
 
-- 
1.8.1.4



Re: [libvirt] Issues with qemu-nbd over AF_UNIX and virDomainAttachDevice

2013-05-27 Thread Deepak C Shetty

On 05/27/2013 04:41 PM, Paolo Bonzini wrote:

Il 27/05/2013 12:46, Deepak C Shetty ha scritto:

On 05/27/2013 02:00 PM, Paolo Bonzini wrote:

Il 26/05/2013 15:54, Deepak C Shetty ha scritto:

3) In nbdxml, I tried removing port=.. it didn't give any error
with regards to port being mandatory.. which is good.. but the attach
device still gives the same error as above

Is the `nbdxml` I am using for attaching a qemu-nbd exported drive
correct ?

What version of QEMU is this?

Can you search the logs for the QMP commands related to the hotplug?

Do you mean starting my domain using -d option for qemu, which dumps the
log in /tmp/qemu.log ?

I mean libvirt.log.  You can start libvirtd with these environment
variables:

LIBVIRT_DEBUG=debug LIBVIRT_LOG_OUTPUTS=1:file:/tmp/libvirt.log


Thanks, this helped. Here is the summary

With latest libvirtd (1.0.5 built from git)...

<disk type='network'>
  <driver name="qemu" type="qcow2"/>
  <source protocol="nbd">
    <host name="deepakcs-lx" transport="unix" socket="/tmp/mysock2" />
  </source>
  <target dev="vdc" bus="virtio" />
</disk>

>>> dom.attachDevice(nbdxml)
libvirt: QEMU Driver error : operation failed: open disk image file failed
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/dpkshetty/work/newlibvirt2/libvirt/python/libvirt.py", line 419, in attachDevice
    if ret == -1: raise libvirtError('virDomainAttachDevice() failed', dom=self)

libvirt.libvirtError: operation failed: open disk image file failed

* from /tmp/libvirt.log
QEMU_MONITOR_IO_WRITE: mon=0x7f8308003d40
buf={"execute":"human-monitor-command","arguments":{"command-line":"drive_add dummy file=nbd:unix:/tmp/mysock2,if=none,id=drive-virtio-disk2,format=qcow2"},"id":"libvirt-390"}


QEMU_MONITOR_RECV_REPLY: mon=0x7f8308003d40
reply={"return": "could not open disk image nbd:unix:/tmp/mysock2: Invalid argument\r\n", "id": "libvirt-390"}


On command line...
qemu-kvm -drive file=nbd:unix:/tmp/mysock2,format=qcow2
qemu-kvm: -drive file=nbd:unix:/tmp/mysock2,format=qcow2: could not open 
disk image nbd:unix:/tmp/mysock2: Invalid argument


but

qemu-kvm -drive file=nbd:unix:/tmp/mysock2 - works!

So looks like format=qcow2 is causing issues!!!

Then tried...
(removing the <driver> element altogether)

<disk type='network'>
  <source protocol="nbd">
    <host name="deepakcs-lx" transport="unix" socket="/tmp/mysock2" />
  </source>
  <target dev="vdc" bus="virtio" />
</disk>

>>> dom.attachDevice(nbdxml)
0

>>> dom.detachDevice(nbdxml)
0



* from /tmp/libvirt.log
QEMU_MONITOR_IO_WRITE: mon=0x7f8308003d40
buf={"execute":"human-monitor-command","arguments":{"command-line":"drive_add dummy file=nbd:unix:/tmp/mysock2,if=none,id=drive-virtio-disk2"},"id":"libvirt-867"}


This works and I was able to successfully add/remove a disk exported via 
qemu-nbd to a running VM !







I am using a VM started from virt-manager i don't see a way to pass
-d to it from virt-manager... I can try using a hand-coded qemu cmdline
tho'

I assume when i am using python (import libvirt) and/or virsh.. it uses
qemu-kvm
qemu-kvm version is 1.0.1

Ok.


Do i need to try with latest qemu git ?

No, I don't think so, but you can try (1.4.0 or 1.5.0 will do).


I was using qemu-kvm version 1.0.1... for all of the ^^ above




On irc.. you asked me to check if cold-plug works.. but attach-device is
for active domain only... am i missing something here ?

You can try putting the disk item and start the VM.  In this case the
logs in /var/log/libvirt/qemu will be helpful, because they contain the
command-line that is used to start QEMU.


Tried putting the above nbdxml using "virsh edit <domname>" as an additional
disk and the domain booted fine.

It throws the same error if you add format=qcow2 under <driver>...

So looks like the right way to use NBD is *not* to specify format and 
let QEMU autosense it ?


thanx,
deepak




Paolo






[libvirt] Entering freeze for libvirt-1.0.6

2013-05-27 Thread Daniel Veillard
  As planned, I just tagged the rc1 release in git and pushed
tarball and rpms to the FTP repository:
  ftp://libvirt.org/

So focus should be given to bug fixes this week, and if everything
goes well I will make the release on Monday 3, a week from now.
I gave a bit of testing to the rc1 rpms; they seem to work okay for KVM on
Fedora, but further testing would be a good idea, on other
distros/platforms and also other hypervisors, especially with VirtualBox
as the driver was modified to now rely on the daemon for support of
that hypervisor.

  thanks in advance,

Daniel

-- 
Daniel Veillard  | Open Source and Standards, Red Hat
veill...@redhat.com  | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
http://veillard.com/ | virtualization library  http://libvirt.org/



Re: [libvirt] Issues with qemu-nbd over AF_UNIX and virDomainAttachDevice

2013-05-27 Thread Paolo Bonzini
Il 27/05/2013 15:41, Deepak C Shetty ha scritto:

 
 Tried putting the above nbdxml using "virsh edit <domname>" as an additional
 disk and the domain booted fine.
 It throws the same error if you add format=qcow2 under <driver>...
 
 So looks like the right way to use NBD is *not* to specify format and
 let QEMU autosense it ?

No, the format for NBD will in general be raw.  qemu-nbd takes care of
qcow2.  This is because NBD lets you make the VM description agnostic of
the format, which in general is desirable.  Hence, raw is what you
should specify most of the time; leaving format probing to QEMU will
usually work but it is highly insecure.

Older versions of qemu-nbd will always autosense the format; but because
autosensing is insecure, newer versions (1.4.1 and newer) have a -f
option to specify the format manually.

In the newer versions that have -f, the same option can be used if you
want to specify the format option in the device XML; to do so, specify
-f raw and add back the driver element to the XML.  This however is
generally not what you want to do.
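For reference, a hedged sketch of what that would look like: a network disk that pins the guest-visible format to raw, with qemu-nbd (started with `-f qcow2`) handling the qcow2 layer on the server side. The socket path is the hypothetical one from this thread:

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='nbd'>
    <host transport='unix' socket='/tmp/mysock2'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
```

As the message above notes, relying on the default raw format (no `<driver>` element needed with newer qemu-nbd) is usually what you want.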

Paolo



[libvirt] [PATCH 00/11] vCPU hotplug mega-series

2013-05-27 Thread Peter Krempa
This patch series consists of multiple parts:
-patch 1,2: Two trivial cleanups
-patch 3: Improve and refactor vCPU data parsing
-patch 4: Add agent monitor helpers for cpu-hotplug stuff
-patch 5,6,7: Implement universal guest vCPU mapping function
-patch 8: Implement new qemu monitor command for cpu hotplug
-patch 9,10,11: Implement agent based cpu state manipulation API/command

Tip: The new "virsh vcpumap guest --active" command may be used to verify that the
agent actually offlined the vCPU in the guest.

Peter Krempa (11):
  virsh-domain-monitor: Remove ATTRIBUTE_UNUSED from an argument
  qemu: Use bool instead of int in qemuMonitorSetCPU APIs
  qemu: Extract more information about vCPUs and threads
  qemu_agent: Introduce helpers for agent based CPU hot(un)plug
  lib: Add API to map virtual cpus of a guest
  virsh-domain-monitor: Implement command to map guest vCPUs
  qemu: Implement virDomainGetVCPUMap for the qemu driver
  qemu: Implement new QMP command for cpu hotplug
  lib: Add API to modify vCPU state from the guest using the guest agent
  virsh-domain: Implement command for virDomainSetGuestVcpu
  qemu: Implement virDomainSetGuestVcpu in qemu driver

 daemon/remote.c |  54 
 include/libvirt/libvirt.h.in|  25 
 python/generator.py |   1 +
 python/libvirt-override-api.xml |   7 +
 python/libvirt-override.c   |  66 ++
 src/driver.h|  14 ++
 src/libvirt.c   | 129 +++
 src/libvirt_public.syms |   7 +
 src/qemu/qemu_agent.c   | 148 +
 src/qemu/qemu_agent.h   |  12 ++
 src/qemu/qemu_driver.c  | 279 +---
 src/qemu/qemu_monitor.c |  11 +-
 src/qemu/qemu_monitor.h |  13 +-
 src/qemu/qemu_monitor_json.c|  86 ++---
 src/qemu/qemu_monitor_json.h|   4 +-
 src/qemu/qemu_monitor_text.c|  94 --
 src/qemu/qemu_monitor_text.h|   4 +-
 src/qemu/qemu_process.c |  63 ++---
 src/remote/remote_driver.c  |  48 +++
 src/remote/remote_protocol.x|  31 -
 src/remote_protocol-structs |  20 +++
 tools/virsh-domain-monitor.c| 112 +++-
 tools/virsh-domain.c|  77 +++
 tools/virsh.pod |  29 +
 24 files changed, 1224 insertions(+), 110 deletions(-)

-- 
1.8.2.1



[libvirt] [PATCH 05/11] lib: Add API to map virtual cpus of a guest

2013-05-27 Thread Peter Krempa
QEMU recently added upstream support for cpu hotplug that allows
plugging arbitrary cpus. Additionally, guest-agent-based cpu state
modification from the guest's point of view was added recently.

This API will help monitor the state of vCPUs using the two
approaches, as support infrastructure for the modification APIs.
---
 daemon/remote.c | 54 ++
 include/libvirt/libvirt.h.in| 21 
 python/generator.py |  1 +
 python/libvirt-override-api.xml |  7 
 python/libvirt-override.c   | 66 
 src/driver.h|  6 
 src/libvirt.c   | 74 +
 src/libvirt_public.syms |  6 
 src/remote/remote_driver.c  | 47 ++
 src/remote/remote_protocol.x| 18 +-
 src/remote_protocol-structs | 13 
 11 files changed, 312 insertions(+), 1 deletion(-)

diff --git a/daemon/remote.c b/daemon/remote.c
index 0e253bf..bb0640a 100644
--- a/daemon/remote.c
+++ b/daemon/remote.c
@@ -4583,6 +4583,60 @@ cleanup:
 return rv;
 }

+
+static int
+remoteDispatchDomainGetVcpuMap(virNetServerPtr server ATTRIBUTE_UNUSED,
+   virNetServerClientPtr client ATTRIBUTE_UNUSED,
+   virNetMessagePtr msg ATTRIBUTE_UNUSED,
+   virNetMessageErrorPtr rerr,
+   remote_domain_get_vcpu_map_args *args,
+   remote_domain_get_vcpu_map_ret *ret)
+{
+unsigned char *cpumap = NULL;
+unsigned int flags;
+int cpunum;
+int rv = -1;
+virDomainPtr dom = NULL;
+struct daemonClientPrivate *priv =
+virNetServerClientGetPrivateData(client);
+
+if (!priv->conn) {
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("connection not open"));
+goto cleanup;
+}
+
+if (!(dom = get_nonnull_domain(priv->conn, args->dom)))
+goto cleanup;
+
+flags = args->flags;
+
+cpunum = virDomainGetVCPUMap(dom,
+ args->need_map ? &cpumap : NULL,
+ flags);
+if (cpunum < 0)
+goto cleanup;
+
+/* 'serialize' return cpumap */
+if (args->need_map) {
+ret->cpumap.cpumap_len = VIR_CPU_MAPLEN(cpunum);
+ret->cpumap.cpumap_val = (char *) cpumap;
+cpumap = NULL;
+}
+
+ret->ret = cpunum;
+
+rv = 0;
+
+cleanup:
+if (rv < 0)
+virNetMessageSaveError(rerr);
+if (dom)
+virDomainFree(dom);
+VIR_FREE(cpumap);
+return rv;
+}
+
+
 static int
 lxcDispatchDomainOpenNamespace(virNetServerPtr server ATTRIBUTE_UNUSED,
virNetServerClientPtr client ATTRIBUTE_UNUSED,
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 1804c93..46d499c 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -4858,6 +4858,27 @@ int virNWFilterGetUUIDString(virNWFilterPtr nwfilter,
   char *buf);
 char *  virNWFilterGetXMLDesc(virNWFilterPtr nwfilter,
   unsigned int flags);
+
+/**
+ * virDomainGetVCPUMapFlags
+ *
+ * Since 1.0.6
+ */
+typedef enum {
+VIR_DOMAIN_VCPU_MAP_HYPERVISOR = (1 << 0), /* request data from hypervisor */
+VIR_DOMAIN_VCPU_MAP_AGENT = (1 << 1), /* request data from guest agent */
+
+VIR_DOMAIN_VCPU_MAP_POSSIBLE = (1 << 2), /* map all possible vcpus */
+VIR_DOMAIN_VCPU_MAP_ONLINE = (1 << 3), /* map all online vcpus */
+VIR_DOMAIN_VCPU_MAP_OFFLINE = (1 << 4), /* map all offline vcpus */
+VIR_DOMAIN_VCPU_MAP_OFFLINABLE = (1 << 5), /* map all vcpus that can be offlined */
+VIR_DOMAIN_VCPU_MAP_ACTIVE = (1 << 6), /* map cpus that are in use by the guest */
+} virDomainGetVCPUMapFlags;
+
+int virDomainGetVCPUMap(virDomainPtr dom,
+unsigned char **cpumap,
+unsigned int flags);
+
 /**
  * virDomainConsoleFlags
  *
diff --git a/python/generator.py b/python/generator.py
index 8c380bb..4884c29 100755
--- a/python/generator.py
+++ b/python/generator.py
@@ -458,6 +458,7 @@ skip_impl = (
 'virNodeGetMemoryParameters',
 'virNodeSetMemoryParameters',
 'virNodeGetCPUMap',
+'virDomainGetVCPUMap',
 )

 lxc_skip_impl = (
diff --git a/python/libvirt-override-api.xml b/python/libvirt-override-api.xml
index 155ab36..7725800 100644
--- a/python/libvirt-override-api.xml
+++ b/python/libvirt-override-api.xml
@@ -584,5 +584,12 @@
  <arg name='conn' type='virConnectPtr' info='pointer to the hypervisor connection'/>
  <arg name='flags' type='int' info='unused, pass 0'/>
</function>
+<function name='virDomainGetVCPUMap' file='python'>
+  <info>Get node CPU information</info>
+  <return type='char *' info='(cpunum, cpumap) on success, None on error'/>
+

[libvirt] [PATCH 07/11] qemu: Implement virDomainGetVCPUMap for the qemu driver

2013-05-27 Thread Peter Krempa
Use the agent cpu state code and the upgraded hypervisor vcpu state
retrieval code to implement virDomainGetVCPUMap() api.
---
 src/qemu/qemu_driver.c | 179 +
 1 file changed, 179 insertions(+)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 0041d8f..70acdbb 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -15052,6 +15052,184 @@ qemuNodeSuspendForDuration(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 return nodeSuspendForDuration(target, duration, flags);
 }

+#define MATCH(FLAG) (flags & (FLAG))
+static int
+qemuDomainGetVCPUMap(virDomainPtr dom,
+ unsigned char **cpumap,
+ unsigned int flags)
+{
+virQEMUDriverPtr driver = dom->conn->privateData;
+virDomainObjPtr vm;
+qemuDomainObjPrivatePtr priv;
+
+qemuAgentCPUInfoPtr agentinfo = NULL;
+qemuMonitorCPUInfoPtr vcpuinfo = NULL;
+int ninfo = -1;
+
+virBitmapPtr cpus = NULL;
+int i;
+int ret = -1;
+int dummy;
+
+virCheckFlags(VIR_DOMAIN_VCPU_MAP_HYPERVISOR |
+  VIR_DOMAIN_VCPU_MAP_AGENT |
+  VIR_DOMAIN_VCPU_MAP_POSSIBLE |
+  VIR_DOMAIN_VCPU_MAP_ONLINE |
+  VIR_DOMAIN_VCPU_MAP_OFFLINE |
+  VIR_DOMAIN_VCPU_MAP_OFFLINABLE |
+  VIR_DOMAIN_VCPU_MAP_ACTIVE, -1);
+
+
+if (!(vm = qemuDomObjFromDomain(dom)))
+return -1;
+
+priv = vm->privateData;
+
+/* request data from the guest */
+if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
+goto cleanup;
+
+if (!virDomainObjIsActive(vm)) {
+virReportError(VIR_ERR_OPERATION_INVALID, "%s",
+   _("domain is not running"));
+goto endjob;
+}
+
+
+if (MATCH(VIR_DOMAIN_VCPU_MAP_AGENT)) {
+if (!priv->agent) {
+virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+   _("guest agent is not configured"));
+goto endjob;
+}
+qemuDomainObjEnterAgent(vm);
+ninfo = qemuAgentGetVCPUs(priv->agent, &agentinfo);
+qemuDomainObjExitAgent(vm);
+} else {
+qemuDomainObjEnterMonitor(driver, vm);
+ninfo = qemuMonitorGetCPUInfo(priv->mon, &vcpuinfo);
+qemuDomainObjExitMonitor(driver, vm);
+}
+
+endjob:
+if (qemuDomainObjEndJob(driver, vm) == 0)
+vm = NULL;
+
+if (ninfo < 0)
+goto cleanup;
+
+
+if (MATCH(VIR_DOMAIN_VCPU_MAP_AGENT)) {
+unsigned int maxcpu = 0;
+
+if (MATCH(VIR_DOMAIN_VCPU_MAP_ACTIVE)) {
+virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+   _("qemu guest agent doesn't report active vCPUs"));
+goto cleanup;
+}
+
+/* count cpus */
+for (i = 0; i < ninfo; i++) {
+if (agentinfo[i].id > maxcpu)
+maxcpu = agentinfo[i].id;
+}
+
+/* allocate the returned array, vCPUs are indexed from 0 */
+if (!(cpus = virBitmapNew(maxcpu + 1))) {
+virReportOOMError();
+goto cleanup;
+}
+
+/* VIR_DOMAIN_VCPU_MAP_POSSIBLE */
+for (i = 0; i < ninfo; i++)
+ignore_value(virBitmapSetBit(cpus, agentinfo[i].id));
+
+if (MATCH(VIR_DOMAIN_VCPU_MAP_ONLINE)) {
+ for (i = 0; i < ninfo; i++) {
+ if (!agentinfo[i].online)
+ignore_value(virBitmapClearBit(cpus, agentinfo[i].id));
+ }
+}
+
+if (MATCH(VIR_DOMAIN_VCPU_MAP_OFFLINE)) {
+ for (i = 0; i < ninfo; i++) {
+ if (agentinfo[i].online)
+ignore_value(virBitmapClearBit(cpus, agentinfo[i].id));
+ }
+}
+
+if (MATCH(VIR_DOMAIN_VCPU_MAP_OFFLINABLE)) {
+ for (i = 0; i < ninfo; i++) {
+ if (!agentinfo[i].offlinable)
+ignore_value(virBitmapClearBit(cpus, agentinfo[i].id));
+ }
+}
+} else {
+if (MATCH(VIR_DOMAIN_VCPU_MAP_OFFLINABLE)) {
+virReportError(VIR_ERR_INVALID_ARG, "%s",
+   _("qemu driver doesn't support reporting of "
+ "offlinable vCPUs of the hypervisor"));
+goto cleanup;
+}
+
+/* hypervisor cpu stats */
+if (!(cpus = virBitmapNew(vm->def->maxvcpus))) {
+virReportOOMError();
+goto cleanup;
+}
+
+/* map active cpus */
+if (MATCH(VIR_DOMAIN_VCPU_MAP_ACTIVE)) {
+/* offline vcpus can't be active */
+if (MATCH(VIR_DOMAIN_VCPU_MAP_OFFLINE))
+goto done;
+
+for (i = 0; i < ninfo; i++) {
+if (vcpuinfo[i].active)
+ignore_value(virBitmapSetBit(cpus, vcpuinfo[i].id));
+}
+
+goto done;
+}
+
+/* for native hotplug, all configured vCPUs are possible for hotplug */
+if 

[libvirt] [PATCH 01/11] virsh-domain-monitor: Remove ATTRIBUTE_UNUSED from an argument

2013-05-27 Thread Peter Krempa
The cmd argument in cmdList is now used. Unmark it as unused.
---
 tools/virsh-domain-monitor.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/virsh-domain-monitor.c b/tools/virsh-domain-monitor.c
index 5ed89d1..936fa8e 100644
--- a/tools/virsh-domain-monitor.c
+++ b/tools/virsh-domain-monitor.c
@@ -1689,7 +1689,7 @@ static const vshCmdOptDef opts_list[] = {
 if (vshCommandOptBool(cmd, NAME))   \
 flags |= (FLAG)
 static bool
-cmdList(vshControl *ctl, const vshCmd *cmd ATTRIBUTE_UNUSED)
+cmdList(vshControl *ctl, const vshCmd *cmd)
 {
 bool managed = vshCommandOptBool(cmd, "managed-save");
 bool optTitle = vshCommandOptBool(cmd, "title");
-- 
1.8.2.1



[libvirt] [PATCH 08/11] qemu: Implement new QMP command for cpu hotplug

2013-05-27 Thread Peter Krempa
This patch implements support for the cpu-add QMP command that plugs
CPUs into a live guest. The cpu-add command was introduced in QEMU
1.5. For the hotplug to work, machine type pc-i440fx-1.5 is required.
---
 src/qemu/qemu_monitor_json.c | 37 +++--
 1 file changed, 35 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 4a69fec..a415732 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -2089,9 +2089,42 @@ cleanup:
 int qemuMonitorJSONSetCPU(qemuMonitorPtr mon,
   int cpu, bool online)
 {
-/* XXX Update to use QMP, if QMP ever adds support for cpu hotplug */
+int ret = -1;
+virJSONValuePtr cmd = NULL;
+virJSONValuePtr reply = NULL;
+
+if (online) {
+cmd = qemuMonitorJSONMakeCommand("cpu-add",
+ "i:id", cpu,
+ NULL);
+} else {
+/* offlining is not yet implemented in qmp */
+goto fallback;
+}
+if (!cmd)
+goto cleanup;
+
+if ((ret = qemuMonitorJSONCommand(mon, cmd, reply)) < 0)
+goto cleanup;
+
+if (qemuMonitorJSONHasError(reply, "CommandNotFound"))
+goto fallback;
+else
+ret = qemuMonitorJSONCheckError(cmd, reply);
+
+/* this function has non-standard return values, so adapt it */
+if (ret == 0)
+ret = 1;
+
+cleanup:
+virJSONValueFree(cmd);
+virJSONValueFree(reply);
+return ret;
+
+fallback:
 VIR_DEBUG("no QMP support for cpu_set, trying HMP");
-return qemuMonitorTextSetCPU(mon, cpu, online);
+ret = qemuMonitorTextSetCPU(mon, cpu, online);
+goto cleanup;
 }
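For reference, assuming QEMU 1.5 QMP semantics, the exchange this generates would look roughly like the following (the CPU id and the libvirt-assigned message id are hypothetical); a CommandNotFound error on older QEMUs triggers the HMP fallback instead:

```
-> { "execute": "cpu-add", "arguments": { "id": 1 }, "id": "libvirt-1" }
<- { "return": {} }
```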


-- 
1.8.2.1



[libvirt] [PATCH 03/11] qemu: Extract more information about vCPUs and threads

2013-05-27 Thread Peter Krempa
The qemu monitor provides more information about the vCPUs of a guest than
we currently need. This patch upgrades the extraction function to
easily extract new data about the vCPUs and fixes the code to cope with the
new structure. The information extracted here will later be used for
mapping the vCPUs of a guest.

This patch also refactors the function used to parse data from the text
monitor.
---
 src/qemu/qemu_driver.c   | 31 ---
 src/qemu/qemu_monitor.c  |  9 +++--
 src/qemu/qemu_monitor.h  | 11 +-
 src/qemu/qemu_monitor_json.c | 47 +-
 src/qemu/qemu_monitor_json.h |  2 +-
 src/qemu/qemu_monitor_text.c | 92 +---
 src/qemu/qemu_monitor_text.h |  2 +-
 src/qemu/qemu_process.c  | 63 --
 8 files changed, 159 insertions(+), 98 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 4a4f8d3..0041d8f 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -3522,8 +3522,8 @@ static int qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
 int ret = -1;
 int oldvcpus = vm-def-vcpus;
 int vcpus = oldvcpus;
-pid_t *cpupids = NULL;
-int ncpupids;
+qemuMonitorCPUInfoPtr cpuinfo = NULL;
+int ncpuinfo;
 virCgroupPtr cgroup_vcpu = NULL;

 qemuDomainObjEnterMonitor(driver, vm);
@@ -3564,13 +3564,13 @@ static int qemuDomainHotplugVcpus(virQEMUDriverPtr 
driver,
  * or don't have the info cpus command (and they don't support multiple
  * CPUs anyways), so errors in the re-detection will not be treated
  * fatal */
-if ((ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids)) <= 0) {
+if ((ncpuinfo = qemuMonitorGetCPUInfo(priv->mon, &cpuinfo)) <= 0) {
 virResetLastError();
 goto cleanup;
 }

 /* check if hotplug has failed */
-if (vcpus < oldvcpus && ncpupids == oldvcpus) {
+if (vcpus < oldvcpus && ncpuinfo == oldvcpus) {
 virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
_("qemu didn't unplug the vCPUs properly"));
 vcpus = oldvcpus;
@@ -3578,11 +3578,11 @@ static int qemuDomainHotplugVcpus(virQEMUDriverPtr 
driver,
 goto cleanup;
 }

-if (ncpupids != vcpus) {
+if (ncpuinfo != vcpus) {
 virReportError(VIR_ERR_INTERNAL_ERROR,
_("got wrong number of vCPU pids from QEMU monitor. "
  "got %d, wanted %d"),
-   ncpupids, vcpus);
+   ncpuinfo, vcpus);
 ret = -1;
 goto cleanup;
 }
@@ -3602,11 +3602,11 @@ static int qemuDomainHotplugVcpus(virQEMUDriverPtr 
driver,
 }

 /* Add vcpu thread to the cgroup */
-rv = virCgroupAddTask(cgroup_vcpu, cpupids[i]);
+rv = virCgroupAddTask(cgroup_vcpu, cpuinfo[i].thread_id);
 if (rv  0) {
 virReportSystemError(-rv,
  _("unable to add vcpu %d task %d to cgroup"),
- i, cpupids[i]);
+ i, cpuinfo[i].thread_id);
 virCgroupRemove(cgroup_vcpu);
 goto cleanup;
 }
@@ -3646,7 +3646,7 @@ static int qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
 goto cleanup;
 }
 } else {
-if (virProcessSetAffinity(cpupids[i],
+if (virProcessSetAffinity(cpuinfo[i].thread_id,
   vcpupin-cpumask)  0) {
 virReportError(VIR_ERR_SYSTEM_ERROR,
_("failed to set cpu affinity for vcpu %d"),
@@ -3687,15 +3687,18 @@ static int qemuDomainHotplugVcpus(virQEMUDriverPtr 
driver,
 }
 }

-priv->nvcpupids = ncpupids;
-VIR_FREE(priv->vcpupids);
-priv->vcpupids = cpupids;
-cpupids = NULL;
+if (VIR_REALLOC_N(priv->vcpupids, ncpuinfo) < 0) {
+virReportOOMError();
+goto cleanup;
+}
+
+/* copy the thread id's */
+for (i = 0; i < ncpuinfo; i++)
+priv->vcpupids[i] = cpuinfo[i].thread_id;

 cleanup:
 qemuDomainObjExitMonitor(driver, vm);
 vm->def->vcpus = vcpus;
-VIR_FREE(cpupids);
 virDomainAuditVcpu(vm, oldvcpus, nvcpus, update, rc == 1);
 if (cgroup_vcpu)
 virCgroupFree(cgroup_vcpu);
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index af85c29..783af2e 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -1281,8 +1281,9 @@ int qemuMonitorSystemReset(qemuMonitorPtr mon)
 }


-int qemuMonitorGetCPUInfo(qemuMonitorPtr mon,
-  int **pids)
+int
+qemuMonitorGetCPUInfo(qemuMonitorPtr mon,
+  qemuMonitorCPUInfoPtr *info)
 {
 int ret;
 VIR_DEBUG(mon=%p, mon);
@@ -1294,9 +1295,9 @@ int qemuMonitorGetCPUInfo(qemuMonitorPtr mon,
   

[libvirt] [PATCH 11/11] qemu: Implement virDomainSetGuestVcpu in qemu driver

2013-05-27 Thread Peter Krempa
Use the helper added earlier to implement this new API.
---
 src/qemu/qemu_driver.c | 65 ++
 1 file changed, 65 insertions(+)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 70acdbb..809807f 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -15231,6 +15231,70 @@ cleanup:
 }
 #undef MATCH

+
+static int
+qemuDomainSetGuestVcpu(virDomainPtr dom,
+   unsigned int id,
+   unsigned int online,
+   unsigned int flags)
+{
+virQEMUDriverPtr driver = dom->conn->privateData;
+virDomainObjPtr vm;
+qemuDomainObjPrivatePtr priv;
+int ret = -1;
+
+qemuAgentCPUInfo agentinfo = { .id = id,
+   .online = !!online };
+
+virCheckFlags(0, -1);
+
+if (!(vm = qemuDomObjFromDomain(dom)))
+return -1;
+
+priv = vm->privateData;
+
+/* request data from the guest */
+if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
+goto cleanup;
+
+if (!virDomainObjIsActive(vm)) {
+virReportError(VIR_ERR_OPERATION_INVALID, "%s",
+   _("domain is not running"));
+goto endjob;
+}
+
+if (!priv->agent) {
+virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+   _("guest agent is not configured"));
+goto endjob;
+}
+
+qemuDomainObjEnterAgent(vm);
+ret = qemuAgentSetVCPUs(priv->agent, &agentinfo, 1);
+qemuDomainObjExitAgent(vm);
+
+endjob:
+if (qemuDomainObjEndJob(driver, vm) == 0)
+vm = NULL;
+
+if (ret < 0)
+goto cleanup;
+
+if (ret != 0) {
+virReportError(VIR_ERR_OPERATION_FAILED,
+   _("failed to set state of vCPU '%u'"), id);
+goto cleanup;
+}
+
+ret = 0;
+
+cleanup:
+if (vm)
+virObjectUnlock(vm);
+return ret;
+}
+
+
 static virDriver qemuDriver = {
 .no = VIR_DRV_QEMU,
 .name = QEMU_DRIVER_NAME,
@@ -15411,6 +15475,7 @@ static virDriver qemuDriver = {
 .domainFSTrim = qemuDomainFSTrim, /* 1.0.1 */
 .domainOpenChannel = qemuDomainOpenChannel, /* 1.0.2 */
 .domainGetVCPUMap = qemuDomainGetVCPUMap, /* 1.0.7 */
+.domainSetGuestVcpu = qemuDomainSetGuestVcpu, /* 1.0.7 */
 };


-- 
1.8.2.1



[libvirt] [PATCH 06/11] virsh-domain-monitor: Implement command to map guest vCPUs

2013-05-27 Thread Peter Krempa
Add support for the virDomainGetVCPUMap API. The code is based on node
vcpu mapping.
---
 tools/virsh-domain-monitor.c | 110 +++
 tools/virsh.pod  |  23 +
 2 files changed, 133 insertions(+)

diff --git a/tools/virsh-domain-monitor.c b/tools/virsh-domain-monitor.c
index 936fa8e..8814705 100644
--- a/tools/virsh-domain-monitor.c
+++ b/tools/virsh-domain-monitor.c
@@ -1806,6 +1806,111 @@ cleanup:
 }
 #undef FILTER

+static const vshCmdInfo info_VCPUMap[] = {
+{.name = "help",
+ .data = N_("request map of virtual processors of a domain")
+},
+{.name = "desc",
+ .data = N_("Returns the map of virtual processors of a domain")
+},
+{.name = NULL}
+};
+
+static const vshCmdOptDef opts_VCPUMap[] = {
+{.name = "domain",
+ .type = VSH_OT_DATA,
+ .flags = VSH_OFLAG_REQ,
+ .help = N_("domain name, id or uuid")
+},
+{.name = "agent",
+ .type = VSH_OT_BOOL,
+ .help = N_("use guest agent")
+},
+{.name = "hypervisor",
+ .type = VSH_OT_BOOL,
+ .help = N_("show hypervisor data")
+},
+{.name = "online",
+ .type = VSH_OT_BOOL,
+ .help = N_("show online cpus")
+},
+{.name = "offlinable",
+ .type = VSH_OT_BOOL,
+ .help = N_("show offlinable vCPUs")
+},
+{.name = "offline",
+ .type = VSH_OT_BOOL,
+ .help = N_("show offline cpus")
+},
+{.name = "possible",
+ .type = VSH_OT_BOOL,
+ .help = N_("show all possible vCPUs")
+},
+{.name = "active",
+ .type = VSH_OT_BOOL,
+ .help = N_("show currently active vCPUs")
+},
+{.name = NULL}
+};
+
+static bool
+cmdVCPUMap(vshControl *ctl, const vshCmd *cmd)
+{
+int cpu, cpunum;
+unsigned char *cpumap = NULL;
+bool ret = false;
+virDomainPtr dom = NULL;
+unsigned int flags = 0;
+bool agent = vshCommandOptBool(cmd, "agent");
+bool hypervisor = vshCommandOptBool(cmd, "hypervisor");
+bool online = vshCommandOptBool(cmd, "online");
+bool offline = vshCommandOptBool(cmd, "offline");
+
+VSH_EXCLUSIVE_OPTIONS_VAR(agent, hypervisor);
+VSH_EXCLUSIVE_OPTIONS_VAR(online, offline);
+
+if (!(dom = vshCommandOptDomain(ctl, cmd, NULL)))
+return false;
+
+if (agent)
+flags |= VIR_DOMAIN_VCPU_MAP_AGENT;
+
+if (hypervisor)
+flags |= VIR_DOMAIN_VCPU_MAP_HYPERVISOR;
+
+if (online)
+flags |= VIR_DOMAIN_VCPU_MAP_ONLINE;
+
+if (offline)
+flags |= VIR_DOMAIN_VCPU_MAP_OFFLINE;
+
+if (vshCommandOptBool(cmd, "offlinable"))
+flags |= VIR_DOMAIN_VCPU_MAP_OFFLINABLE;
+
+if (vshCommandOptBool(cmd, "possible"))
+flags |= VIR_DOMAIN_VCPU_MAP_POSSIBLE;
+
+if (vshCommandOptBool(cmd, "active"))
+flags |= VIR_DOMAIN_VCPU_MAP_ACTIVE;
+
+if ((cpunum = virDomainGetVCPUMap(dom, &cpumap, flags)) < 0) {
+vshError(ctl, "%s", _("Unable to get cpu map"));
+goto cleanup;
+}
+
+vshPrint(ctl, "%-15s ", _("CPU map:"));
+for (cpu = 0; cpu < cpunum; cpu++)
+vshPrint(ctl, "%c", VIR_CPU_USED(cpumap, cpu) ? 'y' : '-');
+vshPrint(ctl, "\n");
+
+ret = true;
+
+cleanup:
+virDomainFree(dom);
+VIR_FREE(cpumap);
+return ret;
+}
+
 const vshCmdDef domMonitoringCmds[] = {
 {.name = domblkerror,
  .handler = cmdDomBlkError,
@@ -1879,5 +1984,10 @@ const vshCmdDef domMonitoringCmds[] = {
  .info = info_list,
  .flags = 0
 },
+{.name = "vcpumap",
+ .handler = cmdVCPUMap,
+ .opts = opts_VCPUMap,
+ .info = info_VCPUMap,
+},
 {.name = NULL}
 };
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 11984bc..8037a0d 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -1797,6 +1797,29 @@ If no flag is specified, behavior is different depending on hypervisor.
 Output the IP address and port number for the VNC display. If the information
 is not available the processes will provide an exit code of 1.

+=item B<vcpumap> I<domain> [I<--hypervisor> | I<--agent>]
+[I<--active> | I<--online> | I<--offline> | I<--offlinable> | I<--possible>]
+
+Query a map of vCPUs of a domain.
+
+When I<--agent> is specified the guest agent is queried for the state of the
+vCPUs from the point of view of the guest. Otherwise I<--hypervisor> is
+assumed.
+
+If I<--possible> is specified all CPU IDs referenced by the guest are listed.
+The I<--offlinable> flag limits the view to vCPUs that can be disabled.
+I<--online> and I<--offline> limit the output according to the state of the
+vCPU. With I<--active> specified only vCPUs that are active are listed.
+The flags may be combined to filter the results further.
+
+With no flag specified, libvirt assumes that I<--online> and I<--hypervisor>
+are requested.
+
+I<--agent> and I<--hypervisor>, as well as I<--online> and I<--offline>, are
+mutually exclusive options.
+
+B<Note:> Some flags may not be supported by both query modes.
+
 =back

 =head1 DEVICE COMMANDS
-- 
1.8.2.1



[libvirt] [PATCH 10/11] virsh-domain: Implement command for virDomainSetGuestVcpu

2013-05-27 Thread Peter Krempa
Add a virsh command called setguestvcpu to exercise the
virDomainSetGuestVcpu API.
---
 tools/virsh-domain.c | 77 
 tools/virsh.pod  |  6 
 2 files changed, 83 insertions(+)

diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 0402aef..6853613 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -5866,6 +5866,77 @@ cmdSetvcpus(vshControl *ctl, const vshCmd *cmd)
 }

 /*
+ * setguestvcpu command
+ */
+static const vshCmdInfo info_setguestvcpu[] = {
+{.name = "help",
+ .data = N_("modify the state of a guest's virtual CPU")
+},
+{.name = "desc",
+ .data = N_("Enable or disable virtual CPUs in a guest using the guest agent.")
+},
+{.name = NULL}
+};
+
+static const vshCmdOptDef opts_setguestvcpu[] = {
+{.name = "domain",
+ .type = VSH_OT_DATA,
+ .flags = VSH_OFLAG_REQ,
+ .help = N_("domain name, id or uuid")
+},
+{.name = "id",
+ .type = VSH_OT_INT,
+ .flags = VSH_OFLAG_REQ,
+ .help = N_("id of the virtual CPU")
+},
+{.name = "online",
+ .type = VSH_OT_BOOL,
+ .help = N_("enable the vCPU")
+},
+{.name = "offline",
+ .type = VSH_OT_BOOL,
+ .help = N_("disable the vCPU")
+},
+{.name = NULL}
+};
+
+static bool
+cmdSetguestvcpu(vshControl *ctl, const vshCmd *cmd)
+{
+virDomainPtr dom;
+unsigned int id;
+bool online = vshCommandOptBool(cmd, "online");
+bool offline = vshCommandOptBool(cmd, "offline");
+bool ret = false;
+
+VSH_EXCLUSIVE_OPTIONS_VAR(online, offline);
+
+if (!online && !offline) {
+vshError(ctl, "%s",
+ _("need to specify either --online or --offline"));
+return false;
+}
+
+if (vshCommandOptUInt(cmd, "id", &id) < 0) {
+vshError(ctl, "%s", _("Invalid or missing cpu ID"));
+return false;
+}
+
+if (!(dom = vshCommandOptDomain(ctl, cmd, NULL)))
+return false;
+
+if (virDomainSetGuestVcpu(dom, id, online, 0) < 0)
+goto cleanup;
+
+ret = true;
+
+cleanup:
+virDomainFree(dom);
+return ret;
+}
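A hypothetical virsh session exercising the new command together with the vcpumap command added earlier in this series (the domain name "demo" and the vCPU id are placeholders, and a configured guest agent is assumed):

```
virsh setguestvcpu demo 1 --offline
virsh setguestvcpu demo 1 --online
virsh vcpumap demo --agent --online
```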
+
+
+/*
  * cpu-compare command
  */
 static const vshCmdInfo info_cpu_compare[] = {
@@ -10514,6 +10585,12 @@ const vshCmdDef domManagementCmds[] = {
  .info = info_setvcpus,
  .flags = 0
 },
+{.name = "setguestvcpu",
+ .handler = cmdSetguestvcpu,
+ .opts = opts_setguestvcpu,
+ .info = info_setguestvcpu,
+ .flags = 0
+},
 {.name = shutdown,
  .handler = cmdShutdown,
  .opts = opts_shutdown,
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 8037a0d..b34a47d 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -1749,6 +1749,12 @@ state of the domain (corresponding to I<--live> if running, or
 I<--config> if inactive); these three flags are mutually exclusive.
 Thus, this command always takes exactly zero or two flags.
 
+=item B<setguestvcpu> I<domain> I<cpu id> [{I<--online> | I<--offline>}]
+
+Set the online state of a vCPU in the guest. This command requires the
+guest agent to be configured. The state may be either I<--online> or
+I<--offline>. The flags are mutually exclusive.
+
+
 =item Bvcpuinfo Idomain

 Returns basic information about the domain virtual CPUs, like the number of
-- 
1.8.2.1



[libvirt] [PATCH 09/11] lib: Add API to modify vCPU state from the guest using the guest agent

2013-05-27 Thread Peter Krempa
This patch introduces the virDomainSetGuestVcpu API, which will be used
to work with vCPU state from the point of view of the guest using the
guest agent.
---
 include/libvirt/libvirt.h.in |  4 
 src/driver.h |  8 +++
 src/libvirt.c| 55 
 src/libvirt_public.syms  |  1 +
 src/remote/remote_driver.c   |  1 +
 src/remote/remote_protocol.x | 15 +++-
 src/remote_protocol-structs  |  7 ++
 7 files changed, 90 insertions(+), 1 deletion(-)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 46d499c..c4c8224 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2127,6 +2127,10 @@ int virDomainSetVcpus (virDomainPtr domain,
 int virDomainSetVcpusFlags  (virDomainPtr domain,
  unsigned int nvcpus,
  unsigned int flags);
+int virDomainSetGuestVcpu   (virDomainPtr domain,
+ unsigned int id,
+ unsigned int online,
+ unsigned int flags);
 int virDomainGetVcpusFlags  (virDomainPtr domain,
  unsigned int flags);

diff --git a/src/driver.h b/src/driver.h
index 0caa2d6..eca5f1d 100644
--- a/src/driver.h
+++ b/src/driver.h
@@ -1052,6 +1052,13 @@ typedef int
   unsigned char **cpumap,
   unsigned int flags);

+typedef int
+(*virDrvDomainSetGuestVcpu)(virDomainPtr dom,
+unsigned int id,
+unsigned int online,
+unsigned int flags);
+
+
 typedef struct _virDriver virDriver;
 typedef virDriver *virDriverPtr;

@@ -1254,6 +1261,7 @@ struct _virDriver {
 virDrvDomainSendProcessSignal domainSendProcessSignal;
 virDrvDomainLxcOpenNamespace domainLxcOpenNamespace;
 virDrvDomainGetVCPUMap domainGetVCPUMap;
+virDrvDomainSetGuestVcpu domainSetGuestVcpu;
 };


diff --git a/src/libvirt.c b/src/libvirt.c
index 3f51e83..9c3bbe6 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -8947,6 +8947,61 @@ error:
 return -1;
 }

+
+/**
+ * virDomainSetGuestVcpu:
+ * @domain: pointer to domain object, or NULL for Domain0
+ * @id: vcpu ID in the guest
+ * @online: desired state of the vcpu
+ * @flags: currently unused, callers should pass 0
+ *
+ * Dynamically change the state of a virtual CPU used by the domain by
+ * using the guest agent. The vCPU id used is from the point of view of
+ * the guest.
+ *
+ * Returns 0 in case of success, -1 in case of failure.
+ */
+
+int
+virDomainSetGuestVcpu(virDomainPtr domain,
+  unsigned int id,
+  unsigned int online,
+  unsigned int flags)
+{
+virConnectPtr conn;
+
+VIR_DOMAIN_DEBUG(domain, "id=%u, online=%u, flags=%x", id, online, flags);
+
+virResetLastError();
+
+if (!VIR_IS_CONNECTED_DOMAIN(domain)) {
+virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+virDispatchError(NULL);
+return -1;
+}
+if (domain->conn->flags & VIR_CONNECT_RO) {
+virLibDomainError(VIR_ERR_OPERATION_DENIED, __FUNCTION__);
+goto error;
+}
+
+conn = domain->conn;
+
+if (conn->driver->domainSetGuestVcpu) {
+int ret;
+ret = conn->driver->domainSetGuestVcpu(domain, id, online, flags);
+if (ret < 0)
+goto error;
+return ret;
+}
+
+virLibConnError(VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+virDispatchError(domain->conn);
+return -1;
+}
+
+
 /**
  * virDomainGetVcpusFlags:
  * @domain: pointer to domain object, or NULL for Domain0
diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index 04465be..bbb7c77 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -624,6 +624,7 @@ LIBVIRT_1.0.6 {
 LIBVIRT_1.0.7 {
 global:
 virDomainGetVCPUMap;
+virDomainSetGuestVcpu;
 } LIBVIRT_1.0.6;


diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index 99fa3c1..4dca3eb 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -6395,6 +6395,7 @@ static virDriver remote_driver = {
 .domainFSTrim = remoteDomainFSTrim, /* 1.0.1 */
 .domainLxcOpenNamespace = remoteDomainLxcOpenNamespace, /* 1.0.2 */
 .domainGetVCPUMap = remoteDomainGetVCPUMap, /* 1.0.7 */
+.domainSetGuestVcpu = remoteDomainSetGuestVcpu, /* 1.0.7 */
 };

 static virNetworkDriver network_driver = {
diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x
index 7db6be3..bcc578f 100644
--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -2748,6 +2748,13 @@ struct remote_domain_get_vcpu_map_ret {
 int ret;
 };


[libvirt] [PATCH 02/11] qemu: Use bool instead of int in qemuMonitorSetCPU APIs

2013-05-27 Thread Peter Krempa
The 'online' parameter has only two possible values. Use a bool for it.
---
 src/qemu/qemu_driver.c   | 4 ++--
 src/qemu/qemu_monitor.c  | 2 +-
 src/qemu/qemu_monitor.h  | 2 +-
 src/qemu/qemu_monitor_json.c | 2 +-
 src/qemu/qemu_monitor_json.h | 2 +-
 src/qemu/qemu_monitor_text.c | 2 +-
 src/qemu/qemu_monitor_text.h | 2 +-
 7 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 4a76f14..4a4f8d3 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -3534,7 +3534,7 @@ static int qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
 if (nvcpus > vcpus) {
 for (i = vcpus; i < nvcpus; i++) {
 /* Online new CPU */
-rc = qemuMonitorSetCPU(priv->mon, i, 1);
+rc = qemuMonitorSetCPU(priv->mon, i, true);
 if (rc == 0)
 goto unsupported;
 if (rc < 0)
@@ -3545,7 +3545,7 @@ static int qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
 } else {
 for (i = vcpus - 1; i >= nvcpus; i--) {
 /* Offline old CPU */
-rc = qemuMonitorSetCPU(priv->mon, i, 0);
+rc = qemuMonitorSetCPU(priv->mon, i, false);
 if (rc == 0)
 goto unsupported;
 if (rc < 0)
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 4e35f79..af85c29 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -1669,7 +1669,7 @@ int qemuMonitorSetBalloon(qemuMonitorPtr mon,
 }


-int qemuMonitorSetCPU(qemuMonitorPtr mon, int cpu, int online)
+int qemuMonitorSetCPU(qemuMonitorPtr mon, int cpu, bool online)
 {
 int ret;
 VIR_DEBUG("mon=%p cpu=%d online=%d", mon, cpu, online);
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index a607712..3d9afa3 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -299,7 +299,7 @@ int qemuMonitorExpirePassword(qemuMonitorPtr mon,
   const char *expire_time);
 int qemuMonitorSetBalloon(qemuMonitorPtr mon,
   unsigned long newmem);
-int qemuMonitorSetCPU(qemuMonitorPtr mon, int cpu, int online);
+int qemuMonitorSetCPU(qemuMonitorPtr mon, int cpu, bool online);


 /* XXX should we pass the virDomainDiskDefPtr instead
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 2b73884..26bf09b 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -2076,7 +2076,7 @@ cleanup:
  * or -1 on failure
  */
 int qemuMonitorJSONSetCPU(qemuMonitorPtr mon,
-  int cpu, int online)
+  int cpu, bool online)
 {
 /* XXX Update to use QMP, if QMP ever adds support for cpu hotplug */
 VIR_DEBUG("no QMP support for cpu_set, trying HMP");
diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h
index 74e2476..d79b86b 100644
--- a/src/qemu/qemu_monitor_json.h
+++ b/src/qemu/qemu_monitor_json.h
@@ -94,7 +94,7 @@ int qemuMonitorJSONExpirePassword(qemuMonitorPtr mon,
   const char *expire_time);
 int qemuMonitorJSONSetBalloon(qemuMonitorPtr mon,
   unsigned long newmem);
-int qemuMonitorJSONSetCPU(qemuMonitorPtr mon, int cpu, int online);
+int qemuMonitorJSONSetCPU(qemuMonitorPtr mon, int cpu, bool online);

 int qemuMonitorJSONEjectMedia(qemuMonitorPtr mon,
   const char *dev_name,
diff --git a/src/qemu/qemu_monitor_text.c b/src/qemu/qemu_monitor_text.c
index d4ee93d..aa4145e 100644
--- a/src/qemu/qemu_monitor_text.c
+++ b/src/qemu/qemu_monitor_text.c
@@ -1231,7 +1231,7 @@ int qemuMonitorTextSetBalloon(qemuMonitorPtr mon,
  * Returns: 0 if CPU hotplug not supported, +1 if CPU hotplug worked
  * or -1 on failure
  */
-int qemuMonitorTextSetCPU(qemuMonitorPtr mon, int cpu, int online)
+int qemuMonitorTextSetCPU(qemuMonitorPtr mon, int cpu, bool online)
 {
 char *cmd;
 char *reply = NULL;
diff --git a/src/qemu/qemu_monitor_text.h b/src/qemu/qemu_monitor_text.h
index fb8e904..5218a8b 100644
--- a/src/qemu/qemu_monitor_text.h
+++ b/src/qemu/qemu_monitor_text.h
@@ -90,7 +90,7 @@ int qemuMonitorTextExpirePassword(qemuMonitorPtr mon,
   const char *expire_time);
 int qemuMonitorTextSetBalloon(qemuMonitorPtr mon,
   unsigned long newmem);
-int qemuMonitorTextSetCPU(qemuMonitorPtr mon, int cpu, int online);
+int qemuMonitorTextSetCPU(qemuMonitorPtr mon, int cpu, bool online);

 int qemuMonitorTextEjectMedia(qemuMonitorPtr mon,
   const char *dev_name,
-- 
1.8.2.1



[libvirt] [PATCH 04/11] qemu_agent: Introduce helpers for agent based CPU hot(un)plug

2013-05-27 Thread Peter Krempa
The qemu guest agent allows onlining and offlining CPUs from the
perspective of the guest. This patch adds helpers that call the
'guest-get-vcpus' and 'guest-set-vcpus' guest agent functions and
convert the data for internal libvirt usage.
---
 src/qemu/qemu_agent.c | 148 ++
 src/qemu/qemu_agent.h |  12 
 2 files changed, 160 insertions(+)

diff --git a/src/qemu/qemu_agent.c b/src/qemu/qemu_agent.c
index c7a9681..b094763 100644
--- a/src/qemu/qemu_agent.c
+++ b/src/qemu/qemu_agent.c
@@ -1456,3 +1456,151 @@ qemuAgentFSTrim(qemuAgentPtr mon,
 virJSONValueFree(reply);
 return ret;
 }
+
+int
+qemuAgentGetVCPUs(qemuAgentPtr mon,
+  qemuAgentCPUInfoPtr *info)
+{
+int ret = -1;
+int i;
+virJSONValuePtr cmd;
+virJSONValuePtr reply = NULL;
+virJSONValuePtr data = NULL;
+int ndata;
+
+if (!(cmd = qemuAgentMakeCommand("guest-get-vcpus", NULL)))
+return -1;
+
+if (qemuAgentCommand(mon, cmd, &reply,
+ VIR_DOMAIN_QEMU_AGENT_COMMAND_BLOCK) < 0 ||
+qemuAgentCheckError(cmd, reply) < 0)
+goto cleanup;
+
+if (!(data = virJSONValueObjectGet(reply, "return"))) {
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("guest-get-vcpus reply was missing return data"));
+goto cleanup;
+}
+
+if (data->type != VIR_JSON_TYPE_ARRAY) {
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("guest-get-vcpus return information was not an array"));
+goto cleanup;
+}
+
+ndata = virJSONValueArraySize(data);
+
+if (VIR_ALLOC_N(*info, ndata) < 0) {
+virReportOOMError();
+goto cleanup;
+}
+
+for (i = 0; i < ndata; i++) {
+virJSONValuePtr entry = virJSONValueArrayGet(data, i);
+qemuAgentCPUInfoPtr in = *info + i;
+
+if (!entry) {
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("array element missing in guest-get-vcpus return value"));
+goto cleanup;
+}
+
+if (virJSONValueObjectGetNumberUint(entry, "logical-id", &in->id) < 0) {
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("'logical-id' missing in reply of guest-get-vcpus"));
+goto cleanup;
+}
+
+if (virJSONValueObjectGetBoolean(entry, "online", &in->online) < 0) {
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("'online' missing in reply of guest-get-vcpus"));
+goto cleanup;
+}
+
+if (virJSONValueObjectGetBoolean(entry, "can-offline",
+ &in->offlinable) < 0) {
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("'can-offline' missing in reply of guest-get-vcpus"));
+goto cleanup;
+}
+}
+
+ret = ndata;
+
+cleanup:
+virJSONValueFree(cmd);
+virJSONValueFree(reply);
+virJSONValueFree(data);
+return ret;
+}
+
+/**
+ * Set the VCPU state using guest agent.
+ *
+ * Returns -1 on error, ninfo in case everything was successful and less than
+ * ninfo on a partial failure.
+ */
+int
+qemuAgentSetVCPUs(qemuAgentPtr mon,
+  qemuAgentCPUInfoPtr info,
+  size_t ninfo)
+{
+int ret = -1;
+virJSONValuePtr cmd = NULL;
+virJSONValuePtr reply = NULL;
+virJSONValuePtr cpus = NULL;
+virJSONValuePtr cpu = NULL;
+size_t i;
+
+/* create the key data array */
+if (!(cpus = virJSONValueNewArray()))
+goto no_memory;
+
+for (i = 0; i < ninfo; i++) {
+qemuAgentCPUInfoPtr in = &info[i];
+
+/* create single cpu object */
+if (!(cpu = virJSONValueNewObject()))
+goto no_memory;
+
+if (virJSONValueObjectAppendNumberInt(cpu, "logical-id", in->id) < 0)
+goto no_memory;
+
+if (virJSONValueObjectAppendBoolean(cpu, "online", in->online) < 0)
+goto no_memory;
+
+if (virJSONValueArrayAppend(cpus, cpu) < 0)
+goto no_memory;
+
+cpu = NULL;
+}
+
+if (!(cmd = qemuAgentMakeCommand("guest-set-vcpus",
+ "a:vcpus", cpus,
+ NULL)))
+goto cleanup;
+
+cpus = NULL;
+
+if (qemuAgentCommand(mon, cmd, &reply,
+ VIR_DOMAIN_QEMU_AGENT_COMMAND_BLOCK) < 0 ||
+qemuAgentCheckError(cmd, reply) < 0)
+goto cleanup;
+
+if (virJSONValueObjectGetNumberInt(reply, "return", &ret) < 0) {
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("malformed return value"));
+}
+
+cleanup:
+virJSONValueFree(cmd);
+virJSONValueFree(reply);
+virJSONValueFree(cpu);
+virJSONValueFree(cpus);
+return ret;
+
+no_memory:
+virReportOOMError();
+goto cleanup;
+}
diff --git a/src/qemu/qemu_agent.h b/src/qemu/qemu_agent.h
index ba04e61..cf70653 
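For orientation, the guest agent exchanges that qemuAgentGetVCPUs() and qemuAgentSetVCPUs() parse and build look roughly like the following — first the query, then a sample reply of the shape parsed above, then a set request (a sketch of the QEMU guest agent JSON protocol; the values are examples):

```json
{"execute": "guest-get-vcpus"}

{"return": [{"logical-id": 0, "online": true, "can-offline": false},
            {"logical-id": 1, "online": false, "can-offline": true}]}

{"execute": "guest-set-vcpus",
 "arguments": {"vcpus": [{"logical-id": 1, "online": true}]}}
```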

[libvirt] [libvirt-tck PATCH] 121-block-info.t: omit network

2013-05-27 Thread Guido Günther
qemu:///session doesn't have a default network so we fail with:

 ./scripts/domain/121-block-info.t .. 1/29 # Defining transient storage pool
 # Generic guest with pervious created vol
 ./scripts/domain/121-block-info.t .. 12/29
 #   Failed test 'Create domain'
 #   at /var/lib/jenkins/jobs/libvirt-tck-build/workspace/lib//Sys/Virt/TCK.pm 
line 803.
 # expected Sys::Virt::Domain object
 # found 'libvirt error code: 43, message: Network not found: no network with 
matching name 'default'
 # '

The network isn't needed so simply leave it out.
---
 scripts/domain/121-block-info.t |1 -
 1 file changed, 1 deletion(-)

diff --git a/scripts/domain/121-block-info.t b/scripts/domain/121-block-info.t
index a25d075..dad00c6 100644
--- a/scripts/domain/121-block-info.t
+++ b/scripts/domain/121-block-info.t
 $guest->rmdisk();
 $guest->disk(format => { name => "qemu", type => $disktype }, type => "file", src => $path, dst => $dst);
 $guest->disk(format => { name => "qemu", type => $disktype }, type => "file", src => $path2, dst => $dst2);
 $guest->disk(format => { name => "qemu", type => "qcow2" }, type => "file", src => $path3, dst => $dst3);
-$guest->interface(type => "network", source => "default", model => "virtio", mac => "52:54:00:22:22:22");
 
 $xml = $guest-as_xml;
 my $dom;
-- 
1.7.10.4



[libvirt] [libvirt-tck PATCH] 121-block-info.t: allow for greater capacity/allocation

2013-05-27 Thread Guido Günther
Don't be too picky about the actual size and allocation. The capacity
needs to be correct, but it's OK for allocation and physical size to be
bigger. Debian Wheezy's qemu adds 4096 bytes.
---
 scripts/domain/121-block-info.t |8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/scripts/domain/121-block-info.t b/scripts/domain/121-block-info.t
index dad00c6..4c3fafc 100644
--- a/scripts/domain/121-block-info.t
+++ b/scripts/domain/121-block-info.t
@@ -103,8 +103,8 @@ is($dom->get_block_info($dst2,0)->{physical}, 1024*1024, "Get disk physical info");
 
 
 is($dom->get_block_info($dst,0)->{capacity}, 1024*1024*50, "Get disk capacity info");
-is($dom->get_block_info($dst,0)->{allocation}, 1024*1024*50, "Get disk allocation info");
-is($dom->get_block_info($dst,0)->{physical}, 1024*1024*50, "Get disk physical info");
+ok($dom->get_block_info($dst,0)->{allocation} >= 1024*1024*50, "Get disk allocation info");
+ok($dom->get_block_info($dst,0)->{physical} >= 1024*1024*50, "Get disk physical info");
 
 diag "Test block_resize";
 lives_ok(sub {$dom->block_resize($dst, 512*50)}, "resize to 512*50 KB");
@@ -112,8 +112,8 @@ $st = stat($path);
 is($st->size, 512*1024*50, "size is 25M");
 
 is($dom->get_block_info($dst,0)->{capacity}, 1024*512*50, "Get disk capacity info");
-is($dom->get_block_info($dst,0)->{allocation}, 1024*512*50, "Get disk allocation info");
-is($dom->get_block_info($dst,0)->{physical}, 1024*512*50, "Get disk physical info");
+ok($dom->get_block_info($dst,0)->{allocation} >= 1024*512*50, "Get disk allocation info");
+ok($dom->get_block_info($dst,0)->{physical} >= 1024*512*50, "Get disk physical info");
 
 lives_ok(sub {$dom->block_resize($dst, 1024*50)}, "resize to 1024*50 KB");
 $st = stat($path);
-- 
1.7.10.4



Re: [libvirt] [Libvirt-announce] Entering freeze for libvirt-1.0.6

2013-05-27 Thread Justin Clift
On 27/05/2013, at 11:44 PM, Daniel Veillard wrote:
  As planned, I just tagged the rc1 release in git and pushed
 tarball and rpms to the FTP repository:
  ftp://libvirt.org/
 
 So focus should be given to bug fixes this week, and if everything
 goes well I will make the release on Monday 3, a week from now.
 I gave a bit of testing to the rc1 rpms, seems to work okay for KVM on
 Fedora but further testing would be a good idea, on other
 distros/platforms and also other hypervisors, especially with VirtualBox
 as the driver was modified to now rely on the daemon for support of
 that hypervisor.


Seems to compile fine on OSX.  Haven't tried to run it.

Sounds like VirtualBox won't work on OSX any more, as the libvirt
daemon doesn't compile there (last I heard).  It's possible that's
changed and I'm just wrong though. ;)

+ Justin

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift



[libvirt] [PATCH 1/2] util/viriptables: add/remove rules that short-circuit masquerading

2013-05-27 Thread Laszlo Ersek
The functions
- iptablesAddForwardDontMasquerade(),
- iptablesRemoveForwardDontMasquerade()
handle exceptions in the masquerading implemented in the POSTROUTING chain
of the nat table. Such exceptions should be added as chronologically
latest, logically top-most rules.

The bridge driver will call these functions beginning with the next patch:
some special destination IP addresses always refer to the local
subnetwork, even though they don't match any practical subnetwork's
netmask. Packets from virbrN targeting such IP addresses are never routed
outwards, but the current rules treat them as non-virbrN-destined packets
and masquerade them. This causes problems for some receivers on virbrN.

Signed-off-by: Laszlo Ersek ler...@redhat.com
---
 src/util/viriptables.h   |   10 +
 src/util/viriptables.c   |   93 ++
 src/libvirt_private.syms |2 +
 3 files changed, 105 insertions(+), 0 deletions(-)

diff --git a/src/util/viriptables.h b/src/util/viriptables.h
index b7ce59b..200dbbc 100644
--- a/src/util/viriptables.h
+++ b/src/util/viriptables.h
@@ -117,6 +117,16 @@ int  iptablesRemoveForwardMasquerade (iptablesContext *ctx,
   virSocketAddrRangePtr addr,
   virPortRangePtr port,
   const char *protocol);
+int  iptablesAddForwardDontMasquerade(iptablesContext *ctx,
+  virSocketAddr *netaddr,
+  unsigned int prefix,
+  const char *physdev,
+  const char *destaddr);
+int  iptablesRemoveForwardDontMasquerade(iptablesContext *ctx,
+  virSocketAddr *netaddr,
+  unsigned int prefix,
+  const char *physdev,
+  const char *destaddr);
 int  iptablesAddOutputFixUdpChecksum (iptablesContext *ctx,
   const char *iface,
   int port);
diff --git a/src/util/viriptables.c b/src/util/viriptables.c
index 16fbe9c..c6a9f6b 100644
--- a/src/util/viriptables.c
+++ b/src/util/viriptables.c
@@ -961,6 +961,99 @@ iptablesRemoveForwardMasquerade(iptablesContext *ctx,
 }
 
 
+/* Don't masquerade traffic coming from the network associated with the bridge
+ * if said traffic targets @destaddr.
+ */
+static int
+iptablesForwardDontMasquerade(iptablesContext *ctx,
+  virSocketAddr *netaddr,
+  unsigned int prefix,
+  const char *physdev,
+  const char *destaddr,
+  int action)
+{
+int ret = -1;
+char *networkstr = NULL;
+virCommandPtr cmd = NULL;
+
+if (!(networkstr = iptablesFormatNetwork(netaddr, prefix)))
+return -1;
+
+if (!VIR_SOCKET_ADDR_IS_FAMILY(netaddr, AF_INET)) {
+/* Higher level code *should* guarantee it's impossible to get here. */
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   _("Attempted to NAT '%s'. NAT is only supported for IPv4."),
+   networkstr);
+goto cleanup;
+}
+
+cmd = iptablesCommandNew(ctx->nat_postrouting, AF_INET, action);
+
+if (physdev && physdev[0])
+virCommandAddArgList(cmd, "--out-interface", physdev, NULL);
+
+virCommandAddArgList(cmd, "--source", networkstr,
+ "--destination", destaddr, "--jump", "RETURN", NULL);
+ret = virCommandRun(cmd, NULL);
+cleanup:
+virCommandFree(cmd);
+VIR_FREE(networkstr);
+return ret;
+}
+
+/**
+ * iptablesAddForwardDontMasquerade:
+ * @ctx: pointer to the IP table context
+ * @netaddr: the source network name
+ * @prefix: prefix (# of 1 bits) of netmask to apply to @netaddr
+ * @physdev: the physical output device or NULL
+ * @destaddr: the destination network not to masquerade for
+ *
+ * Add rules to the IP table context to avoid masquerading from
+ * @netaddr/@prefix to @destaddr on @physdev. @destaddr must be in a format
+ * directly consumable by iptables, it must not depend on user input or
+ * configuration.
+ *
+ * Returns 0 in case of success or an error code otherwise.
+ */
+int
+iptablesAddForwardDontMasquerade(iptablesContext *ctx,
+ virSocketAddr *netaddr,
+ unsigned int prefix,
+ const char *physdev,
+ const char *destaddr)
+{
+return iptablesForwardDontMasquerade(ctx, netaddr, prefix, physdev,
+ destaddr, ADD);

[libvirt] [PATCH 0/2] don't masquerade local broadcast/multicast packets

2013-05-27 Thread Laszlo Ersek
Masquerading local broadcast breaks DHCP replies for some clients.
There has been a report about broken local multicast too.
(See references in the patches.)

Testing: since I have no idea how to test upstream libvirt on a
RHEL-6.4.z virt host and guarantee nothing will be tangled up, I ported
the series to libvirt-0.10.2-18.el6_4.5 and tested that. (The upstream
series does build and passes the checks in HACKING, except I didn't try
valgrind.)

Laszlo Ersek (2):
  util/viriptables: add/remove rules that short-circuit masquerading
  bridge driver: don't masquerade local subnet broadcast/multicast
packets

 src/util/viriptables.h  |   10 +
 src/network/bridge_driver.c |   76 +--
 src/util/viriptables.c  |   93 +++
 src/libvirt_private.syms|2 +
 4 files changed, 177 insertions(+), 4 deletions(-)



[libvirt] [PATCH 2/2] bridge driver: don't masquerade local subnet broadcast/multicast packets

2013-05-27 Thread Laszlo Ersek
Packets sent by guests on virbrN, *or* by dnsmasq on the same, to
- 255.255.255.255/32 (netmask-independent local network broadcast
  address), or to
- 224.0.0.0/24 (local subnetwork multicast range)
are never forwarded, hence it is not necessary to masquerade them.

In fact we must not masquerade them: translating their source addresses or
source ports (where applicable) may confuse receivers on virbrN.

One example is the DHCP client in OVMF (= UEFI firmware for virtual
machines):

  http://thread.gmane.org/gmane.comp.bios.tianocore.devel/1506/focus=2640

It expects DHCP replies to arrive from remote source port 67. Even though
dnsmasq conforms to that, the destination address (255.255.255.255) and
the source address (eg. 192.168.122.1) in the reply allow the UDP
masquerading rule to match, which rewrites the source port to or above
1024. This prevents the DHCP client in OVMF from accepting the packet.

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=709418

(I read attachment 526549 in that BZ by Brian J. Murrell
br...@interlinx.bc.ca, and Laine Stump's review thereof, before starting
this patch.)

Signed-off-by: Laszlo Ersek ler...@redhat.com
---
 src/network/bridge_driver.c |   76 --
 1 files changed, 72 insertions(+), 4 deletions(-)

diff --git a/src/network/bridge_driver.c b/src/network/bridge_driver.c
index d5886fe..2b3b250 100644
--- a/src/network/bridge_driver.c
+++ b/src/network/bridge_driver.c
@@ -1542,6 +1542,9 @@ networkRefreshDaemons(struct network_driver *driver)
 }
 }
 
+static const char networkLocalMulticast[] = "224.0.0.0/24";
+static const char networkLocalBroadcast[] = "255.255.255.255/32";
+
 static int
 networkAddMasqueradingIptablesRules(struct network_driver *driver,
 virNetworkObjPtr network,
@@ -1586,11 +1589,20 @@ networkAddMasqueradingIptablesRules(struct network_driver *driver,
 /*
  * Enable masquerading.
  *
- * We need to end up with 3 rules in the table in this order
+ * We need to end up with 5 rules in the table in this order
+ *
+ *  1. do not masquerade packets targeting 224.0.0.0/24
+ *  2. do not masquerade packets targeting 255.255.255.255/32
+ *  3. masquerade protocol=tcp with sport mapping restriction
+ *  4. masquerade protocol=udp with sport mapping restriction
+ *  5. generic, masquerade any protocol
+ *
+ * 224.0.0.0/24 is the local network multicast range. Packets are not
+ * forwarded outside.
  *
- *  1. protocol=tcp with sport mapping restriction
- *  2. protocol=udp with sport mapping restriction
- *  3. generic any protocol
+ * 255.255.255.255/32 is the broadcast address of any local network. Again,
+ * such packets are never forwarded, but strict DHCP clients don't accept
+ * DHCP replies with changed source ports.
  *
  * The sport mappings are required, because default IPtables
  * MASQUERADE maintain port numbers unchanged where possible.
@@ -1660,8 +1672,54 @@ networkAddMasqueradingIptablesRules(struct network_driver *driver,
 goto masqerr5;
 }
 
+/* exempt local network broadcast address as destination */
+if (iptablesAddForwardDontMasquerade(driver->iptables,
+ ipdef->address,
+ prefix,
+ forwardIf,
+ networkLocalBroadcast) < 0) {
+if (forwardIf)
+virReportError(VIR_ERR_SYSTEM_ERROR,
+   _("failed to add iptables rule to prevent local broadcast masquerading on %s"),
+   forwardIf);
+else
+virReportError(VIR_ERR_SYSTEM_ERROR, "%s",
+   _("failed to add iptables rule to prevent local broadcast masquerading"));
+goto masqerr6;
+}
+
+/* exempt local multicast range as destination */
+if (iptablesAddForwardDontMasquerade(driver->iptables,
+ ipdef->address,
+ prefix,
+ forwardIf,
+ networkLocalMulticast) < 0) {
+if (forwardIf)
+virReportError(VIR_ERR_SYSTEM_ERROR,
+   _("failed to add iptables rule to prevent local multicast masquerading on %s"),
+   forwardIf);
+else
+virReportError(VIR_ERR_SYSTEM_ERROR, "%s",
+   _("failed to add iptables rule to prevent local multicast masquerading"));
+goto masqerr7;
+}
+
 return 0;
 
+ masqerr7:
+iptablesRemoveForwardDontMasquerade(driver->iptables,
+ipdef->address,
+prefix,
+forwardIf,
+

Re: [libvirt] [PATCH] Expose all CPU features in host definition

2013-05-27 Thread Don Dugger
On Sat, May 25, 2013 at 11:45:13PM +0200, Martin Kletzander wrote:
 On 05/25/2013 12:44 AM, Don Dugger wrote:
  I've opened BZ 697141 on this as I would consider it more
  a bug than a feature request.  Anyway, to re-iterate my
  rationale from the BZ:
  
  
  The virConnectGetCapabilities API describes the host capabilities
  by returning an XML description that includes the CPU model name
  and a set of CPU features.  The problem is that any features that
  are part of the CPU model are not explicitly listed, they are
  assumed to be part of the definition of that CPU model.  This
  makes it extremely difficult for the caller of this API to check
  for the presence of a specific CPU feature, the caller would have
  to know what features are part of which CPU models, a very
  daunting task.
  
  This patch solves this problem by having the API return a model
  name, as it currently does, but it will also explicitly list all
  of the CPU features that are present.  This would make it much
  easier for a caller of this API to check for specific features.
  
  Signed-off-by: Don Dugger donald.d.dug...@intel.com
  
 
 I'm generally not against exposing CPU model features in capabilities,
 but if we do this, such features should be distinguishable from those
 not in the model.  Of course we don't want users to go to
 /usr/share/libvirt/cpu_map.xml all the time, but maybe there could be
 separate API for that.  If not, then it should be encapsulated somewhere
 else than side by side the other features.

I guess I don't understand why there's a need to distinguish between
features in a model and other features.  The important bits are the
actual features themselves.  A model is a convenient shorthand for
a set of features but that's all it is, a shorthand.  The real
information is the specific features that are present on that CPU.

Knowing that a CPU is a Westmere model is interesting but what I
really want to know is whether the CPU supports SSE3 instructions
so that I know this is an appropriate CPU to be running my
multimedia application on.  Listing all the features provides that
info in an easy to consume fashion.

 
  ---
   src/cpu/cpu_x86.c |   30 ++
   1 file changed, 30 insertions(+)
  
  diff --git a/src/cpu/cpu_x86.c b/src/cpu/cpu_x86.c
  index 5d479c2..b2e16df 100644
  --- a/src/cpu/cpu_x86.c
  +++ b/src/cpu/cpu_x86.c
  @@ -1296,6 +1296,35 @@ x86GuestData(virCPUDefPtr host,
   return x86Compute(host, guest, data, message);
   }
   
  +static void
  +x86AddFeatures(virCPUDefPtr cpu,
  +  struct x86_map *map)
  +{
  +const struct x86_model *candidate;
  +const struct x86_feature *feature = map->features;
  +
  +candidate = map->models;
  +while (candidate != NULL) {
  +   if (STREQ(cpu->model, candidate->name))
 
 Don't indent with TABs, there's even a 'make syntax-check' rule for that.

I was raised that tabs are the one true indentation style but,
if the libvirt code base prefers spaces, I'll bite my tongue
and do it :-)

 
  +   break;
  +   candidate = candidate->next;
  +}
  +if (!candidate) {
  +   VIR_WARN("Odd, %s not a known CPU model\n", cpu->model);
  +   return;
 
 Warning seems inappropriate here as this is actually an error.

Agreed.

 
  +}
  +while (feature != NULL) {
  +   if (x86DataIsSubset(candidate-data, feature-data)) {
  +   if (virCPUDefAddFeature(cpu, feature-name, 
  VIR_CPU_FEATURE_DISABLE)  0) {
  +   VIR_WARN(CPU model %s, no room for feature %s, cpu-model, 
  feature-name);
  +   return;
 
 This code path shadows an error and means the feature will not be
 mentioned in the capabilities, but the API will end successfully.

I was trying not to throw fatal errors but, on reflection, I think
I agree here also.  I'll spin a new patch incorporating these
suggestions.

 
 Martin
 

-- 
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
n0...@n0ano.com
Ph: 303/443-3786
