Re: [libvirt] [libvirt-users] [RFC] per-device metadata

2017-03-21 Thread Francesco Romani
(CC devel list - better late than never)


On 03/16/2017 04:23 PM, Daniel P. Berrange wrote:
> On Thu, Mar 16, 2017 at 04:17:51PM +0100, Peter Krempa wrote:
>> On Thu, Mar 16, 2017 at 14:52:47 +, Daniel Berrange wrote:
>>> On Thu, Mar 16, 2017 at 03:50:51PM +0100, Peter Krempa wrote:
>>>> On Thu, Mar 16, 2017 at 14:42:30 +, Daniel Berrange wrote:
>>>>> On Thu, Mar 16, 2017 at 01:46:38PM +0100, Peter Krempa wrote:
>>>>>> On Mon, Feb 27, 2017 at 16:41:28 +0100, Francesco Romani wrote:
>> [...]
>>
>>> The scenario where device attach fails is not the problem - you can
>>> get the same level of reliability to that by simply updating the
>>> global metadata before & after hotplug in the same way. What is
>>> difficult is when libvirt fails to persist the XML config on disk
>>> or when libvirt crashes part way through the operation, and other
>>> awkward failure scenarios unrelated to QEMU itself.
>> In that case you lose the device definition too, since saving the XML is
>> the integral part of the hotplug operation.
> Agreed, but that just reinforces my view that we don't need to provide
> extra metadata against the device for sake of atomicity. Even the existing
> hotplug doesn't guarantee any kind of atomicity, so you're not making life
> worse by performing a separate API call to update the global metadata.
>
>
> Regards,
> Daniel

Thanks everyone,  I will file a bug so we can move forward on this.
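For the record, a minimal sketch in C of the flow Daniel describes above
(update the application's own global <metadata> element around the hotplug
call). The namespace URI, key and XML payloads are purely illustrative, not
an agreed format:

#include <libvirt/libvirt.h>

static int
attach_with_metadata(virDomainPtr dom, const char *devxml)
{
    const char *ns = "http://example.org/app/1";   /* hypothetical namespace */

    /* 1. record the intent in the application's metadata element */
    if (virDomainSetMetadata(dom, VIR_DOMAIN_METADATA_ELEMENT,
                             "<devices state='attaching'/>",
                             "app", ns, VIR_DOMAIN_AFFECT_LIVE) < 0)
        return -1;

    /* 2. perform the hotplug itself */
    if (virDomainAttachDeviceFlags(dom, devxml, VIR_DOMAIN_AFFECT_LIVE) < 0)
        return -1;

    /* 3. mark the operation as completed */
    return virDomainSetMetadata(dom, VIR_DOMAIN_METADATA_ELEMENT,
                                "<devices state='attached'/>",
                                "app", ns, VIR_DOMAIN_AFFECT_LIVE);
}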

Bests,

-- 
Francesco Romani
Red Hat Engineering Virtualization R & D
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [RFC PATCHv2 1/8] threshold: new API virDomainBlockSetWriteThreshold

2015-06-16 Thread Francesco Romani
- Original Message -
 From: Eric Blake ebl...@redhat.com
 To: Peter Krempa pkre...@redhat.com
 Cc: libvir-list@redhat.com, from...@redhat.com
 Sent: Monday, June 15, 2015 6:21:13 PM
 Subject: Re: [libvirt] [RFC PATCHv2 1/8] threshold: new API   
 virDomainBlockSetWriteThreshold
 
 On 06/15/2015 07:19 AM, Peter Krempa wrote:
  On Fri, Jun 12, 2015 at 13:29:25 -0600, Eric Blake wrote:
  qemu 2.3 added a new QMP command block-set-write-threshold,
  which allows callers to get an interrupt when a file hits a
  write threshold, rather than the current approach of repeatedly
  polling for file allocation.  This patch prepares the API for
  callers to register to receive the event, as well as a way
  to query the threshold via virDomainListGetStats().
 
 
  +
  +typedef enum {
  +/* threshold is thousandth of a percentage (0 to 100 000) relative to
  
  You managed to choose an unusual unit. Commonly used ones are 1/1000 and
  1/1 000 000. Financial world also uses 1/10 000. Your unit of 1/100 000
  is not among:
  
  https://en.wikipedia.org/wiki/Parts-per_notation#Parts-per_expressions
  
  I'd again suggest using 1/1 000 000. Or if you want to be uber precise
  you might choose 1/(2^64 - 1).
 
 Francesco, what precision would you like?  Parts per million seems okay
 to me, if we want an order of magnitude closer; and I don't think we
 need anything beyond that.  Or if parts per thousand is sufficient, that
 leads to smaller numbers on input.  But it's pretty trivial for me to
 adjust the code to a different base, for whatever people would like.

We (in oVirt) use very coarse thresholds.
For our current needs, I believe even parts per thousand is sufficient.
Trying to be a bit forward-thinking, I believe parts per million is perfectly 
fine.
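Purely for illustration, a sketch of the arithmetic a management application
would do with a parts-per-million unit to turn the relative threshold into an
absolute byte count; the helper name and the divide-first trick to avoid
64-bit overflow are mine, not part of any proposed API:

static unsigned long long
threshold_bytes(unsigned long long capacity, unsigned int ppm)
{
    /* divide first so capacity * ppm cannot overflow; rounds down by
     * at most one millionth of the capacity */
    return capacity / 1000000ULL * ppm;
}

e.g. a 20 GiB image with ppm = 950000 gives a threshold at roughly 19 GiB.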

Bests,

-- 
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani



Re: [libvirt] RFC: exposing qemu's block-set-write-threshold

2015-05-22 Thread Francesco Romani
- Original Message -
 From: Eric Blake ebl...@redhat.com
 To: Francesco Romani from...@redhat.com
 Cc: libvir-list@redhat.com, Nir Soffer nsof...@redhat.com, Peter Krempa 
 pkre...@redhat.com,
 qemu-de...@nongnu.org
 Sent: Friday, May 22, 2015 6:33:01 AM
 Subject: Re: [libvirt] RFC: exposing qemu's block-set-write-threshold
 
 [adding qemu]
 

  I read the thread and I'm pretty sure this will be a silly question, but I
  want
  to make sure I am on the same page and I'm not somehow confused by the
  terminology.
  
  Let's consider the simplest of the situation we face in oVirt:
  
  (thin provisioned qcow2 disk on LV)
  
  vda=[format=qcow2] -> lv=[path=/dev/mapper/$UUID]
  
  Isn't the LV here the 'backing file' (actually, backing block device) of
  the disk?
 
 Restating what you wrote into libvirt terminology, I think this means

 that you have a disk where:
 driver is qcow2
 source is a local file name
 device names vda
 backingStore index='1' describes the backing LV:
   driver is also qcow2 (as polling allocation growth in order to
 resize on demand only makes sense for qcow2 format)
   source is /dev/mapper/$UUID


Yes, exactly my point. I just want to be 100% sure that the three (slightly)
different parlances of the three groups (oVirt/libvirt/QEMU) are aligned on
the same meaning, and that we're not getting anything lost in translation.

For the final confirmation, here's the actual XML we produce:

<disk device="disk" snapshot="no" type="block">
  <address bus="0x00" domain="0x" function="0x0" slot="0x05" type="pci"/>
  <source dev="/rhev/data-center/0002-0002-0002-0002-014b/12f68692-2a5a-4e48-af5e-4679bca7fd44/images/ee1295ee-7ddc-4030-be5e-4557538bc4d2/05a88a94-5bd6-4698-be69-39e78c84e1a5"/>
  <target bus="virtio" dev="vda"/>
  <serial>ee1295ee-7ddc-4030-be5e-4557538bc4d2</serial>
  <boot order="1"/>
  <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2"/>
</disk>

For the sake of completeness:

$ ls -lh /rhev/data-center/0002-0002-0002-0002-014b/12f68692-2a5a-4e48-af5e-4679bca7fd44/images/ee1295ee-7ddc-4030-be5e-4557538bc4d2/05a88a94-5bd6-4698-be69-39e78c84e1a5
lrwxrwxrwx. 1 vdsm kvm 78 May 22 08:49 /rhev/data-center/0002-0002-0002-0002-014b/12f68692-2a5a-4e48-af5e-4679bca7fd44/images/ee1295ee-7ddc-4030-be5e-4557538bc4d2/05a88a94-5bd6-4698-be69-39e78c84e1a5 -> /dev/12f68692-2a5a-4e48-af5e-4679bca7fd44/05a88a94-5bd6-4698-be69-39e78c84e1a5

$ ls -lh /dev/12f68692-2a5a-4e48-af5e-4679bca7fd44/
total 0
lrwxrwxrwx. 1 root root 8 May 22 08:49 05a88a94-5bd6-4698-be69-39e78c84e1a5 -> ../dm-11
lrwxrwxrwx. 1 root root 8 May 22 08:49 54673e6d-207d-4a66-8f0d-3f5b3cda78e5 -> ../dm-12
lrwxrwxrwx. 1 root root 9 May 22 08:49 ids -> ../dm-606
lrwxrwxrwx. 1 root root 9 May 22 08:49 inbox -> ../dm-607
lrwxrwxrwx. 1 root root 9 May 22 08:49 leases -> ../dm-605
lrwxrwxrwx. 1 root root 9 May 22 08:49 master -> ../dm-608
lrwxrwxrwx. 1 root root 9 May 22 08:49 metadata -> ../dm-603
lrwxrwxrwx. 1 root root 9 May 22 08:49 outbox -> ../dm-604

lvs | grep 05a88a94
  05a88a94-5bd6-4698-be69-39e78c84e1a5 12f68692-2a5a-4e48-af5e-4679bca7fd44 
-wi-ao  14.12g

 
 then indeed, vda is the local qcow2 file, and vda[1] is the backing
 file on the LV storage.
 
 Normally, you only care about the write threshold at the active layer
 (the local file, with name vda), because that is the only image that
 will normally be allocating sectors.  But in the case of active commit,
 where you are taking the thin-provisioned local file and writing its
 clusters back into the backing LV, the action of commit can allocate
 sectors in the backing file. 

Right

 Thus, libvirt wants to let you set a
 write-threshold on both parts of the backing chain (the active wrapper,
 and the LV backing file), where the event could fire on either node
 first.  The existing libvirt virConnectDomainGetAllStats() can already
 be used to poll allocation growth (the block.N.allocation statistic in
 libvirt, or 'virtual-size' in QMP's 'ImageInfo'), but the event would
 let you drop polling.

Yes, exactly the intent
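For context, this is roughly the polling the event would let us drop: a
minimal C sketch (error handling omitted) that reads the block.<num>.allocation
values through the existing bulk-stats API.

#include <stdio.h>
#include <string.h>
#include <libvirt/libvirt.h>

static void
print_allocations(virConnectPtr conn)
{
    virDomainStatsRecordPtr *records = NULL;
    int n = virConnectGetAllDomainStats(conn, VIR_DOMAIN_STATS_BLOCK,
                                        &records, 0);

    for (int i = 0; i < n; i++) {
        for (int j = 0; j < records[i]->nparams; j++) {
            virTypedParameterPtr p = &records[i]->params[j];
            if (strstr(p->field, ".allocation"))   /* e.g. block.0.allocation */
                printf("%s %s = %llu\n",
                       virDomainGetName(records[i]->dom),
                       p->field, p->value.ul);
        }
    }
    if (records)
        virDomainStatsRecordListFree(records);
}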

 However, while starting to code the libvirt side of things, I've hit a
 couple of snags with interacting with the qemu design.  First, the
 'block-set-write-threshold' command is allowed to set a threshold by
 'node-name' (any BDS, whether active or backing),

Yes, this emerged during the review of my patch.
I first took the simplest approach (probably simplistic, in retrospect),
but - IIRC - it was pointed out that setting by node-name grants the most
flexibility, hence it was required.
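For reference, on the wire the QMP command being discussed takes the node
name plus an absolute byte value; the values below are made up for
illustration:

{ "execute": "block-set-write-threshold",
  "arguments": { "node-name": "node-a", "write-threshold": 17179869184 } }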

See:
http://lists.nongnu.org/archive/html/qemu-devel/2014-11/msg02503.html
http://lists.nongnu.org/archive/html/qemu-devel/2014-11/msg02580.html
http

Re: [libvirt] RFC: exposing qemu's block-set-write-threshold

2015-05-21 Thread Francesco Romani
 allow automatic rearming of the event, which
is even cooler ;)

  Of course, I'd want virConnectGetAllDomainStats() to list the current
  threshold setting (0 if no threshold or if the event has already fired,
  non-zero if the threshold is still set waiting to fire), so that clients
  can query thresholds for multiple domains and multiple disks per domain
  in one API call.  But I don't know if we have any good way to set

Looks nice

  multiple thresholds in one call (at least virDomainSetBlockIoTune must
  be called once per disk; it might be possible for my proposed
  virDomainBlockStatsFlags() to set a threshold for multiple disks if the
  disk name is passed as NULL - but then we're back to the question of
  what happens if the guest has multiple disks of different sizes; it's
  better to set per-disk thresholds than to assume all disks must be at
  the same byte or percentage threshold).
  
  That is just usage-sugar for the users. I'd rather avoid doing this on
  multiple disks simultaneously.
 
 Good - then I won't worry about it; the new API will make disk name
 mandatory.  (Setting to a percentage or to a relative-to-tail might make
 more sense across multiple disks, but on the other hand, setting a
 threshold will be a rare thing; and while first starting the domain has
 to set a threshold on all disks, later re-arming of the trigger will be
 on one disk at a time as events happen; making the startup case more
 efficient is not going to be the bottleneck in management).

I agree

  I'm also worried about what happens across libvirtd restarts - if the
  qemu event fires while libvirtd is unconnected, should libvirt be
  tracking that a threshold was registered in the XML, and upon
  reconnection check if qemu still has the threshold?  If qemu no longer
  has a threshold, then libvirt can assume it missed the event, and
  generate one as part of reconnecting to the domain.
  
  Libvirt should have enough information to actually check if the event
  happened and should be able to decide that it in fact missed the event
  and it should be emitted by libvirt.

That would be awesome.
There are flows (live storage migration?) on which we'll probably still
need to poll disks, but definitely the more we (libvirt API consumer) can
depend on reliable delivery of the event, the better.

The point here is to avoid racy checks in the management application as much
as possible.

Thanks and bests,

-- 
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani



Re: [libvirt] [PATCH v2] qemu: bulk stats: implement (cpu) tune group.

2015-03-09 Thread Francesco Romani
- Original Message -
 From: Peter Krempa pkre...@redhat.com
 To: Francesco Romani from...@redhat.com
 Cc: libvir-list@redhat.com
 Sent: Monday, March 9, 2015 2:16:34 PM
 Subject: Re: [libvirt] [PATCH v2] qemu: bulk stats: implement (cpu) tune 
 group.

[...]
  + * VIR_DOMAIN_STATS_TUNE_CPU: Return CPU tuning statistics
  + * and usage information.
  + * The typed parameter keys are in this format:
  + * tune.vcpu.quota - max allowed bandwidth, in microseconds, as
  + * long long integer. -1 means 'infinite'.
  + * tune.vcpu.period - timeframe on which the virtual cpu quota is
  + *  enforced, in microseconds, as unsigned long long.
  + * tune.emu.quota - max allowed bandwidth for emulator threads,
  + *in microseconds, as long long integer.
  + *-1 means 'infinite'.
  + * tune.emu.period - timeframe on which the emulator quota is
  + * enforced, in microseconds, as unsigned long long.
  + * tune.cpu.shares - weight of this domain. This value is meaningful
  + * only if compared with the other values of
  + * the running domains. Expressed as unsigned long
  long.
  + *
 
 These options above represent configuration and not any statistic value,
 so they won't change unless libvirt is instructed to change them. I
 don't think they belong to the stats API.
 
 Additionally libvirt recently added an event to track change of the
 tunables. See virConnectDomainEventTunableCallback.
 
 http://libvirt.org/html/libvirt-libvirt-domain.html#virConnectDomainEventTunableCallback
 http://libvirt.org/html/libvirt-libvirt-domain.html#VIR_DOMAIN_TUNABLE_CPU_CPU_SHARES
 http://libvirt.org/html/libvirt-libvirt-domain.html#VIR_DOMAIN_TUNABLE_CPU_EMULATORPIN
 and so on ...
 
 I think you want to use that event and leave this api for statistics
 only.

Yep, it seems that should serve us better. Thanks!
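For reference, a minimal sketch of consuming that event from C; event-loop
setup (virEventRegisterDefaultImpl plus a running loop) is assumed, and the
callback body is only illustrative:

#include <stdio.h>
#include <libvirt/libvirt.h>

static void
tunable_cb(virConnectPtr conn, virDomainPtr dom,
           virTypedParameterPtr params, int nparams, void *opaque)
{
    /* each changed tunable arrives as a typed parameter (cputune.* keys) */
    for (int i = 0; i < nparams; i++)
        printf("%s: %s changed\n", virDomainGetName(dom), params[i].field);
}

static int
watch_tunables(virConnectPtr conn)
{
    /* NULL domain: receive the event for every domain on this connection */
    return virConnectDomainEventRegisterAny(conn, NULL,
                                            VIR_DOMAIN_EVENT_ID_TUNABLE,
                                            VIR_DOMAIN_EVENT_CALLBACK(tunable_cb),
                                            NULL, NULL);
}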


-- 
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani



[libvirt] [PATCH v2] qemu: bulk stats: implement (cpu) tune group.

2015-03-06 Thread Francesco Romani
Management applications, like oVirt, may need to setup cpu quota
limits to enforce QoS for domains.

For this purpose, management applications also need to check how
domains are behaving with respect to CPU quota. This data is available
using the virDomainGetSchedulerParameters API.

This patch adds a new group to bulk stats API to obtain the same
information.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1191428
---
 include/libvirt/libvirt-domain.h |  1 +
 src/libvirt-domain.c | 16 
 src/qemu/qemu_driver.c   | 84 
 tools/virsh-domain-monitor.c |  7 
 tools/virsh.pod  | 10 -
 5 files changed, 117 insertions(+), 1 deletion(-)

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index a9d3efd..a283f93 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -1723,6 +1723,7 @@ typedef enum {
     VIR_DOMAIN_STATS_VCPU = (1 << 3), /* return domain virtual CPU info */
     VIR_DOMAIN_STATS_INTERFACE = (1 << 4), /* return domain interfaces info */
     VIR_DOMAIN_STATS_BLOCK = (1 << 5), /* return domain block info */
+    VIR_DOMAIN_STATS_TUNE_CPU = (1 << 6), /* return domain CPU tuning info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index 89d1eab..b451299 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -11004,6 +11004,22 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * "block.<num>.physical" - physical size in bytes of the container of the
  *                          backing image as unsigned long long.
  *
+ * VIR_DOMAIN_STATS_TUNE_CPU: Return CPU tuning statistics
+ * and usage information.
+ * The typed parameter keys are in this format:
+ * "tune.vcpu.quota" - max allowed bandwidth, in microseconds, as
+ *                     long long integer. -1 means 'infinite'.
+ * "tune.vcpu.period" - timeframe on which the virtual cpu quota is
+ *                      enforced, in microseconds, as unsigned long long.
+ * "tune.emu.quota" - max allowed bandwidth for emulator threads,
+ *                    in microseconds, as long long integer.
+ *                    -1 means 'infinite'.
+ * "tune.emu.period" - timeframe on which the emulator quota is
+ *                     enforced, in microseconds, as unsigned long long.
+ * "tune.cpu.shares" - weight of this domain. This value is meaningful
+ *                     only if compared with the other values of
+ *                     the running domains. Expressed as unsigned long long.
+ *
  * Note that entire stats groups or individual stat fields may be missing from
  * the output in case they are not supported by the given hypervisor, are not
  * applicable for the current state of the guest domain, or their retrieval
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ffa4e19..a810fa5 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -18768,6 +18768,89 @@ qemuDomainGetStatsBlock(virQEMUDriverPtr driver,
 
 #undef QEMU_ADD_COUNT_PARAM
 
+
+#define QEMU_ADD_PARAM_LL(record, maxparams, name, value) \
+do { \
+    if (virTypedParamsAddLLong(&(record)->params, \
+                               &(record)->nparams, \
+                               maxparams, \
+                               name, \
+                               value) < 0) \
+        goto cleanup; \
+} while (0)
+
+#define QEMU_ADD_PARAM_ULL(record, maxparams, name, value) \
+do { \
+    if (virTypedParamsAddULLong(&(record)->params, \
+                                &(record)->nparams, \
+                                maxparams, \
+                                name, \
+                                value) < 0) \
+        goto cleanup; \
+} while (0)
+
+static int
+qemuDomainGetStatsCpuTune(virQEMUDriverPtr driver,
+                          virDomainObjPtr dom,
+                          virDomainStatsRecordPtr record,
+                          int *maxparams,
+                          unsigned int privflags ATTRIBUTE_UNUSED)
+{
+    int ret = -1;
+    unsigned long long shares = 0;
+    qemuDomainObjPrivatePtr priv = dom->privateData;
+    virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
+
+    if (!cfg->privileged ||
+        !virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPU)) {
+        ret = 0;
+        goto cleanup;
+    }
+
+    if (virCgroupGetCpuShares(priv->cgroup, &shares) < 0) {
+        ret = 0;
+        goto cleanup;
+    }
+
+    QEMU_ADD_PARAM_ULL(record, maxparams, "tune.cpu.shares", shares);
+
+    if (virCgroupSupportsCpuBW(priv->cgroup)) {
+        unsigned long long period = 0;
+        long long quota = 0;
+        unsigned long long emulator_period = 0;
+        long long emulator_quota = 0;
+        int err;
+
+        err = qemuGetVcpusBWLive(dom, &period, &quota);
+        if (!err) {
+            QEMU_ADD_PARAM_ULL(record, maxparams,
+                               "tune.vcpu.period", period);
+ 

Re: [libvirt] [PATCH v2] qemu: bulk stats: implement (cpu) tune group.

2015-03-06 Thread Francesco Romani
Version 2 addresses the comments received in v1.

Please note that the typed parameter keys are not yet changed.


Re: [libvirt] [PATCH] Ignore listen attribute of graphics for type network listens

2015-02-26 Thread Francesco Romani
- Original Message -
 From: Laine Stump la...@laine.org
 To: libvir-list@redhat.com
 Cc: Francesco Romani from...@redhat.com, Ján Tomko jto...@redhat.com
 Sent: Thursday, February 26, 2015 4:28:13 PM
 Subject: Re: [libvirt] [PATCH] Ignore listen attribute of graphics for type 
 network listens
 
 On 02/26/2015 10:11 AM, Francesco Romani wrote:
  Hi,

[...]
  When we connect to the display through oVirt, before firing it up we set the
  password through virDomainUpdateDeviceFlags. This fails for me on libvirt
  1.2.13 from master with commit 6992994.
 
  Let me summarize the flow:
 
  we create and feed createXML with:
 
  <graphics autoport="yes" keymap="en-us" passwd="*"
            passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1"
            type="spice">
  <channel mode="secure" name="main"/>
  <channel mode="secure" name="inputs"/>
  <channel mode="secure" name="cursor"/>
  <channel mode="secure" name="playback"/>
  <channel mode="secure" name="record"/>
  <channel mode="secure" name="display"/>
  <channel mode="secure" name="usbredir"/>
  <channel mode="secure" name="smartcard"/>
  <listen network="vdsm-ovirtmgmt" type="network"/>
  </graphics>
 
  we get back using domainGetXmlDesc:
 
 This is where the error is caused - you need to add the inactive flag to
 this call

Sorry, I forgot to point out that oVirt uses transient domains, so using the
inactive flag is surprising to me. I'm not sure it will work for us.
I need to check the docs and play with it for a bit.

Thanks,

-- 
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani


Re: [libvirt] [PATCH] Ignore listen attribute of graphics for type network listens

2015-02-26 Thread Francesco Romani
Hi,

- Original Message -
 From: Laine Stump la...@laine.org
 To: libvir-list@redhat.com
 Cc: Ján Tomko jto...@redhat.com, from...@redhat.com
 Sent: Thursday, February 26, 2015 3:57:22 PM
 Subject: Re: [PATCH] Ignore listen attribute of graphics for type network 
 listens
 
 On 02/26/2015 08:53 AM, Ján Tomko wrote:
  Commit 6992994 started filling the listen attribute
  of the parent graphics elements from type='network' listens.
 
  When this XML is passed to UpdateDevice, parsing fails:
  XML error: graphics listen attribute 10.20.30.40 must match
  address attribute of first listen element (found none)
 
 
 Note that the listen attribute of graphics won't be filled in if you
 request the *inactive* xml, and so there will be no error when it is fed
 back to updatedevice. I can't think of any examples right now, but have
 a very definite memory that there are several items in the config like
 this - if you feed the status xml output back to update/define you're
 gonna have a bad time. That's why virsh edit asks for the INACTIVE xml.
 
 Did you see this when trying to do an update-device manually, or as the
 result of some management application that has forgotten to add the
 INACTIVE flag to its request for XML?

I don't know about Jan, but I experienced this failure myself with a quite
recent libvirt snapshot (around Feb 20, so with the aforementioned commit)
using oVirt/VDSM.

When we connect to the display through oVirt, before firing it up we set the
password through virDomainUpdateDeviceFlags. This fails for me on libvirt 1.2.13
from master with commit 6992994.

Let me summarize the flow:

we create and feed createXML with:

<graphics autoport="yes" keymap="en-us" passwd="*"
          passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
  <channel mode="secure" name="main"/>
  <channel mode="secure" name="inputs"/>
  <channel mode="secure" name="cursor"/>
  <channel mode="secure" name="playback"/>
  <channel mode="secure" name="record"/>
  <channel mode="secure" name="display"/>
  <channel mode="secure" name="usbredir"/>
  <channel mode="secure" name="smartcard"/>
  <listen network="vdsm-ovirtmgmt" type="network"/>
</graphics>

we get back using domainGetXmlDesc:

<graphics type='spice' tlsPort='5900' autoport='yes' listen='192.168.1.51'
          keymap='en-us' passwd='*' passwdValidTo='1970-01-01T00:00:01'>
  <listen type='network' address='192.168.1.51' network='vdsm-ovirtmgmt'/>
  <channel name='main' mode='secure'/>
  <channel name='display' mode='secure'/>
  <channel name='inputs' mode='secure'/>
  <channel name='cursor' mode='secure'/>
  <channel name='playback' mode='secure'/>
  <channel name='record' mode='secure'/>
  <channel name='smartcard' mode='secure'/>
  <channel name='usbredir' mode='secure'/>
</graphics>

we try to feed using virDomainUpdateDeviceFlags:

<graphics autoport="yes" connected="disconnect" keymap="en-us"
          listen="192.168.1.51" passwd="SOMETHING" passwdValidTo="2015-02-26T09:56:40"
          tlsPort="5900" type="spice">
  <listen address="192.168.1.51" network="vdsm-ovirtmgmt" type="network"/>
  <channel mode="secure" name="main"/>
  <channel mode="secure" name="display"/>
  <channel mode="secure" name="inputs"/>
  <channel mode="secure" name="cursor"/>
  <channel mode="secure" name="playback"/>
  <channel mode="secure" name="record"/>
  <channel mode="secure" name="smartcard"/>
  <channel mode="secure" name="usbredir"/>
</graphics>

it chokes out with

  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2647, in updateDeviceFlags
    if ret == -1: raise libvirtError ('virDomainUpdateDeviceFlags() failed', dom=self)
libvirtError: XML error: graphics listen attribute 192.168.1.51 must match
address attribute of first listen element (found none)

We use flags=0 here:

self._dom.updateDeviceFlags(graphics.toxml(), 0)

(_dom is a virDomain, graphics.toxml() produces the above)

Now, I understand that here we should use a proper flag, but I'm
confused about which one we should use. The docs

http://libvirt.org/html/libvirt-libvirt-domain.html#virDomainUpdateDeviceFlags

don't seem to match what I just learned from your email:

http://libvirt.org/html/libvirt-libvirt-domain.html#virDomainDeviceModifyFlags

Which one do we need here? VIR_DOMAIN_DEVICE_MODIFY_CONFIG?

 If the latter, then I guess I can live with ignoring the error in order
 to preserve backward compatibility with the broken application.
 Reluctant ACK.

Well, I appreciate the backward compatibility effort, but I also
definitely want to fix the oVirt code! :)


Thanks,
 
-- 
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani


Re: [libvirt] [PATCH] qemu: bulk stats: implement (cpu) tune group.

2015-02-24 Thread Francesco Romani
Hi John, thanks for the review!

- Original Message -
 From: John Ferlan jfer...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Monday, February 23, 2015 7:33:47 PM
 Subject: Re: [libvirt] [PATCH] qemu: bulk stats: implement (cpu) tune group.
 
 
 
 On 02/11/2015 09:22 AM, Francesco Romani wrote:
  Management applications, like oVirt, may need to setup cpu quota
  limits to enforce QoS for VMs.
  
  For this purpose, management applications also need to check how
  VMs are behaving with respect to CPU quota. This data is available
  using the virDomainGetSchedulerParameters API.
  
  This patch adds a new group to bulk stats API to obtain the same
  information.
  
  Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1191428
  ---
   include/libvirt/libvirt-domain.h |  1 +
   src/libvirt-domain.c | 16 
   src/qemu/qemu_driver.c   | 84
   
   tools/virsh-domain-monitor.c |  7 
   4 files changed, 108 insertions(+)
  
 
 In general looks good... There's a few spelling and spacing nits below
 which I could fix up before pushing for you...

Oops. Will fix, spell-check again and resubmit.

 You are missing 'virsh.pod' - something easily added as well.

Will add.

 The one question I have is around the switch name (looking for any other
 thoughts...)

I don't really have strong opinions here, so whatever fits best for you guys
should be fine for me.

 Should the option be cpu-tune instead of tune-cpu, especially since
 the name of the function has *CpuTune? Or even 'sched-info' to match
 the 'virsh schedinfo $dom' command?  I suppose some day there'd be
 'numa-tune' data desired as well, but that's a different issue...

I'm aware of NUMA and I/O tuning information which could be requested in the
future (because we, the oVirt team, plan to use them :)).

Bests,

-- 
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani



[libvirt] [python][RFC] more python bulk stats API

2015-02-24 Thread Francesco Romani
Hi,

I was wondering if there is room in libvirt-python for a couple of new APIs
which could make life easier for python developers.
It is not just a theoretical thing: while developing VDSM, part of oVirt, I
found myself repeating these patterns often enough.

1. return dict from getAllDomainStats and domainListGetStats
Both bulk stats APIs return on success a list of tuples, each tuple being
(DomainReference, DictOfStats), and I often find myself needing to translate
them into a dict like {VMUUID: DictOfStats}, using some code like

def _translate(bulk_stats):
return dict((dom.UUIDString(), stats)
for dom, stats in bulk_stats)

So I'd like to suggest new APIs:

  virConnection.getAllDomainStatsMap()
  virConnection.domainListGetStatsMap()

which directly return the dict()s above; arguments should be like 
getAllDomainStats and domainListGetStats,
respectively.

2. get all the bulk stats from a single VM
It is trivial and efficient to do this in C, but doing it in Python means
using throwaway temporary lists, one for the domain and one for the result.
No big deal, but again, why waste them? I'd like to have

  virDomain.getBulkStats(stats)  # stats is like in getAllDomainStats

that should return just the DictOfStats on success.

For performance reasons, all of the above should be done in C.

Just in case, I have proof-of-concept code for all of the above. I can post a
tentative patch if the maintainers like these ideas.

Thanks,

Thoughts welcome

-- 
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani



[libvirt] [PATCH] qemu: bulk stats: implement (cpu) tune group.

2015-02-11 Thread Francesco Romani
Management applications, like oVirt, may need to setup cpu quota
limits to enforce QoS for VMs.

For this purpose, management applications also need to check how
VMs are behaving with respect to CPU quota. This data is available
using the virDomainGetSchedulerParameters API.

This patch adds a new group to bulk stats API to obtain the same
information.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1191428
---
 include/libvirt/libvirt-domain.h |  1 +
 src/libvirt-domain.c | 16 
 src/qemu/qemu_driver.c   | 84 
 tools/virsh-domain-monitor.c |  7 
 4 files changed, 108 insertions(+)

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index 4dbd7f5..3d8c6af 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -1700,6 +1700,7 @@ typedef enum {
     VIR_DOMAIN_STATS_VCPU = (1 << 3), /* return domain virtual CPU info */
     VIR_DOMAIN_STATS_INTERFACE = (1 << 4), /* return domain interfaces info */
     VIR_DOMAIN_STATS_BLOCK = (1 << 5), /* return domain block info */
+    VIR_DOMAIN_STATS_TUNE_CPU = (1 << 6), /* return domain CPU tuning info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index 492e90a..a4effa3 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -10990,6 +10990,22 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * "block.<num>.physical" - physical size in bytes of the container of the
  *                          backing image as unsigned long long.
  *
+ * VIR_DOMAIN_STATS_TUNE_CPU: Return CPU tuning statistics
+ * and usage information.
+ * The typed parameter keys are in this format:
+ * "tune.vcpu.quota" - max allowed bandwidth, in microseconds, as
+ *                     long long integer. -1 means 'infinite'.
+ * "tune.vcpu.period" - timeframe on which the virtual cpu quota is
+ *                      enforced, in microseconds, as unsigned long long.
+ * "tune.emu.quota" - max allowed bandwidth for emulator threads,
+ *                    in microseconds, as long long integer.
+ *                    -1 means 'infinite'.
+ * "tune.emu.period" - timeframe on which the emulator quota is
+ *                     enforced, in microseconds, as unsigned long long.
+ * "tune.cpu.shares" - weight of this VM. This value is meaningful
+ *                     only if compared with the other values of
+ *                     the running VMs. Expressed as unsigned long long.
+ *
  * Note that entire stats groups or individual stat fields may be missing from
  * the output in case they are not supported by the given hypervisor, are not
  * applicable for the current state of the guest domain, or their retrieval
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 26fc6a2..5548626 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -18797,6 +18797,89 @@ qemuDomainGetStatsBlock(virQEMUDriverPtr driver,
 
 #undef QEMU_ADD_COUNT_PARAM
 
+
+#define QEMU_ADD_PARAM_LL(record, maxparams, name, value) \
+do { \
+    if (virTypedParamsAddLLong(&(record)->params, \
+                               &(record)->nparams, \
+                               maxparams, \
+                               name, \
+                               value) < 0) \
+        goto cleanup; \
+} while (0)
+
+#define QEMU_ADD_PARAM_ULL(record, maxparams, name, value) \
+do { \
+    if (virTypedParamsAddULLong(&(record)->params, \
+                                &(record)->nparams, \
+                                maxparams, \
+                                name, \
+                                value) < 0) \
+        goto cleanup; \
+} while (0)
+
+static int
+qemuDomainGetStatsCpuTune(virQEMUDriverPtr driver,
+                          virDomainObjPtr dom,
+                          virDomainStatsRecordPtr record,
+                          int *maxparams,
+                          unsigned int privflags ATTRIBUTE_UNUSED)
+{
+    int ret = -1;
+    unsigned long long shares = 0;
+    qemuDomainObjPrivatePtr priv = dom->privateData;
+    virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
+
+    if (!cfg->privileged ||
+        !virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPU)) {
+        ret = 0;
+        goto cleanup;
+    }
+
+    if (virCgroupGetCpuShares(priv->cgroup, &shares) < 0) {
+        ret = 0;
+        goto cleanup;
+    }
+
+    QEMU_ADD_PARAM_ULL(record, maxparams, "tune.cpu.shares", shares);
+
+    if (virCgroupSupportsCpuBW(priv->cgroup)) {
+        unsigned long long period = 0;
+        long long quota = 0;
+        unsigned long long emulator_period = 0;
+        long long emulator_quota = 0;
+        int err;
+
+        err = qemuGetVcpusBWLive(dom, &period, &quota);
+        if (!err) {
+            QEMU_ADD_PARAM_ULL(record, maxparams,
+                               "tune.vcpu.period", period);
+QEMU_ADD_PARAM_LL(record, maxparams,
+  

Re: [libvirt] [PATCH v2] qemu: bulk stats: add pcpu placement information

2014-12-17 Thread Francesco Romani
- Original Message -
 From: Peter Krempa pkre...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Thursday, December 11, 2014 10:04:24 AM
 Subject: Re: [libvirt] [PATCH v2] qemu: bulk stats: add pcpu placement
 information
 
 On 12/11/14 08:43, Francesco Romani wrote:
  This patch adds the information about the physical cpu
  placement of virtual cpus for bulk stats.
  
  This is the only difference in output with the
  virDomainGetVcpus() API.
  Management software, like oVirt, needs this information
  to properly manage NUMA configurations.
 
 Are you sure that you are getting what you expect? When this stats group
 was first implemented I asked not to include this stat as it only shows
 the actual host cpu id where the guest cpu is running at that precise
 moment. The problem with that is that usual configurations don't map the
 cpus in a 1:1 fashion, but rather allow a specific guest CPU to be run
  on a subset of host cpus according to its scheduling decisions.
 
 That means that the stat might oscillate in the given set where the
 guest vcpu is pinned at. Could you please share your use case for this
 one? I'm curious to see whether you have some real use of such data.

There is one use case in oVirt where this very data is used to build
what is claimed to be a vCPU runtime pinning map. It is used in the NUMA flow.

I'm not really familiar with that code, and by inspecting it after
your answer above, I'm not 100% convinced everything's right in oVirt.

I'll need to check more deeply, and I'll reply as soon as I have trustworthy 
information.

Thanks for the insight,

-- 
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani



[libvirt] [PATCH] qemu: bulk stats: add pcpu placement information

2014-12-10 Thread Francesco Romani
This patch adds the information about the physical cpu
placement of virtual cpus for bulk stats.

This is the only difference in output with the
virDomainGetVcpus() API.
Management software, like oVirt, needs this information
to properly manage NUMA configurations.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/libvirt-domain.c   | 2 ++
 src/qemu/qemu_driver.c | 9 +
 2 files changed, 11 insertions(+)

diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index cb76d8c..63fc967 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -10888,6 +10888,8 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  *  from virVcpuState enum.
  * "vcpu.<num>.time" - virtual cpu time spent by virtual CPU <num>
  *                     as unsigned long long.
+ * "vcpu.<num>.physical" - real CPU number on which virtual CPU <num> is
+ *                         running, or -1 if offline.
  *
  * VIR_DOMAIN_STATS_INTERFACE: Return network interface statistics.
  * The typed parameter keys are in this format:
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 830fca7..ab0652d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -18348,6 +18348,15 @@ qemuDomainGetStatsVcpu(virQEMUDriverPtr driver 
ATTRIBUTE_UNUSED,
                                     param_name,
                                     cpuinfo[i].cpuTime) < 0)
             goto cleanup;
+
+        snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH,
+                 "vcpu.%zu.physical", i);
+        if (virTypedParamsAddULLong(&record->params,
+                                    &record->nparams,
+                                    maxparams,
+                                    param_name,
+                                    cpuinfo[i].cpu) < 0)
+            goto cleanup;
 }
 
 ret = 0;
-- 
1.9.3



Re: [libvirt] [PATCH] qemu: bulk stats: add pcpu placement information

2014-12-10 Thread Francesco Romani
- Original Message -
 From: Eric Blake ebl...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Wednesday, December 10, 2014 6:16:07 PM
 Subject: Re: [libvirt] [PATCH] qemu: bulk stats: add pcpu placement   
 information

  +++ b/src/libvirt-domain.c
  @@ -10888,6 +10888,8 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
*  from virVcpuState enum.
    * "vcpu.<num>.time" - virtual cpu time spent by virtual CPU <num>
    *                     as unsigned long long.
  + * "vcpu.<num>.physical" - real CPU number on which virtual CPU <num> is
  + *                         running, or -1 if offline.
 
 As which type?
 
  +        if (virTypedParamsAddULLong(&record->params,
  +                                    &record->nparams,
  +                                    maxparams,
  +                                    param_name,
  +                                    cpuinfo[i].cpu) < 0)
 
 ULLong cannot hold -1.  Is 'int' sufficient, since physical CPU numbers
 will never exceed 32 signed bits? (A machine with 2 billion cores seems
 unlikely...)

Oops, hit 'send' too early, apologies for the noise.
Fixed patch coming in a snap.

Thanks and bests,

-- 
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani



[libvirt] [PATCH v2] qemu: bulk stats: add pcpu placement information

2014-12-10 Thread Francesco Romani
This patch adds the information about the physical cpu
placement of virtual cpus for bulk stats.

This is the only difference in output with the
virDomainGetVcpus() API.
Management software, like oVirt, needs this information
to properly manage NUMA configurations.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/libvirt-domain.c   | 2 ++
 src/qemu/qemu_driver.c | 9 +
 2 files changed, 11 insertions(+)

diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index cb76d8c..e84f6a8 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -10888,6 +10888,8 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  *  from virVcpuState enum.
  * "vcpu.<num>.time" - virtual cpu time spent by virtual CPU <num>
  *                     as unsigned long long.
+ * "vcpu.<num>.physical" - real CPU number on which virtual CPU <num> is
+ *                         running, as int. -1 if offline.
  *
  * VIR_DOMAIN_STATS_INTERFACE: Return network interface statistics.
  * The typed parameter keys are in this format:
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 830fca7..b62cabf 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -18348,6 +18348,15 @@ qemuDomainGetStatsVcpu(virQEMUDriverPtr driver 
ATTRIBUTE_UNUSED,
                                     param_name,
                                     cpuinfo[i].cpuTime) < 0)
             goto cleanup;
+
+        snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH,
+                 "vcpu.%zu.physical", i);
+        if (virTypedParamsAddInt(&record->params,
+                                 &record->nparams,
+                                 maxparams,
+                                 param_name,
+                                 cpuinfo[i].cpu) < 0)
+            goto cleanup;
 }
 
 ret = 0;
-- 
1.9.3



[libvirt] [PATCH] qemu: bulk stats: typo in monitor handling

2014-12-10 Thread Francesco Romani
A typo in qemuConnectGetAllDomainStats makes the code
mark the monitor as available when qemuDomainObjBeginJob
fails, instead of when it succeeds, as the correct flow
requires.

This patch fixes the check and updates the code documentation
accordingly.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/qemu/qemu_driver.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 830fca7..129e10c 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -18745,9 +18745,12 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
 }
 
     if (HAVE_JOB(privflags) &&
-        qemuDomainObjBeginJob(driver, dom, QEMU_JOB_QUERY) < 0)
-        /* As it was never requested. Gather as much as possible anyway. */
+        qemuDomainObjBeginJob(driver, dom, QEMU_JOB_QUERY) == 0)
         domflags |= QEMU_DOMAIN_STATS_HAVE_JOB;
+    /*
+     * else: as it was never requested.
+     * Gather as much as possible anyway.
+     */
 
     if (qemuDomainGetStats(conn, dom, stats, &tmp, domflags) < 0)
 goto endjob;
-- 
1.9.3



[libvirt] [PATCH 0/4] bulk stats: QEMU implementation polishing

2014-09-19 Thread Francesco Romani
This patchset does polishing on the QEMU bulk stats implementation.
The main issue this patchset addresses is that, unless a critical
error is found, bulk stats should be silent and ignore errors.

To do so, virResetLastError() is used in a few places, but this
is not enough since errors are logged anyway.
A better approach is to avoid reporting the error entirely.

The patchset is organized as follows:
- patches 1 to 3 enhance the functions used in the bulk stats
  path(s) adding a 'bool report' flag, to let the caller optionally
  suppress error reporting.
- patch 4 is a general polishing patch which reduces repetition
  of the code in the block stats collection.

Francesco Romani (4):
  qemu: make qemuDomainHelperGetVcpus silent
  make virNetInterfaceStats silent
  qemu: make qemuMonitorGetAllBlockStatsInfo silent
  qemu: json monitor: reduce duplicated code

 src/lxc/lxc_driver.c |   2 +-
 src/openvz/openvz_driver.c   |   2 +-
 src/qemu/qemu_driver.c   |  32 +-
 src/qemu/qemu_monitor.c  |  12 ++--
 src/qemu/qemu_monitor.h  |   3 +-
 src/qemu/qemu_monitor_json.c | 136 ---
 src/qemu/qemu_monitor_json.h |   3 +-
 src/util/virstats.c  |  21 ---
 src/util/virstats.h  |   3 +-
 src/xen/xen_hypervisor.c |   2 +-
 10 files changed, 111 insertions(+), 105 deletions(-)

-- 
1.9.3



[libvirt] [PATCH 1/4] qemu: make qemuDomainHelperGetVcpus silent

2014-09-19 Thread Francesco Romani
The commit 74c066df4d8 introduced a helper to factor out a code path
which is shared between the existing API and the new bulk stats API.
In the bulk stats path errors must be silenced unless critical
(e.g. memory allocation failure).

To address this need, this patch adds an argument to disable error reporting.
---
 src/qemu/qemu_driver.c | 22 +-
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index f28082f..dc8d6c3 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1381,8 +1381,10 @@ qemuGetProcessInfo(unsigned long long *cpuTime, int 
*lastCpu, long *vm_rss,
 
 
 static int
-qemuDomainHelperGetVcpus(virDomainObjPtr vm, virVcpuInfoPtr info, int maxinfo,
- unsigned char *cpumaps, int maplen)
+qemuDomainHelperGetVcpus(virDomainObjPtr vm,
+ virVcpuInfoPtr info, int maxinfo,
+ unsigned char *cpumaps, int maplen,
+ bool report)
 {
 int maxcpu, hostcpus;
 size_t i, v;
@@ -1412,8 +1414,10 @@ qemuDomainHelperGetVcpus(virDomainObjPtr vm, 
virVcpuInfoPtr info, int maxinfo,
                                NULL,
                                vm->pid,
                                priv->vcpupids[i]) < 0) {
-            virReportSystemError(errno, "%s",
-                                 _("cannot get vCPU placement & pCPU "
-                                   "time"));
+            if (report)
+                virReportSystemError(errno, "%s",
+                                     _("cannot get vCPU placement "
+                                       "& pCPU time"));
             return -1;
         }
     }
@@ -1440,8 +1444,9 @@ qemuDomainHelperGetVcpus(virDomainObjPtr vm, 
virVcpuInfoPtr info, int maxinfo,
 virBitmapFree(map);
 }
         } else {
-            virReportError(VIR_ERR_OPERATION_INVALID,
-                           "%s", _("cpu affinity is not available"));
+            if (report)
+                virReportError(VIR_ERR_OPERATION_INVALID,
+                               "%s", _("cpu affinity is not available"));
 return -1;
 }
 }
@@ -5044,7 +5049,7 @@ qemuDomainGetVcpus(virDomainPtr dom,
 goto cleanup;
 }
 
-ret = qemuDomainHelperGetVcpus(vm, info, maxinfo, cpumaps, maplen);
+ret = qemuDomainHelperGetVcpus(vm, info, maxinfo, cpumaps, maplen, true);
 
  cleanup:
 if (vm)
@@ -17530,8 +17535,7 @@ qemuDomainGetStatsVcpu(virQEMUDriverPtr driver 
ATTRIBUTE_UNUSED,
 return -1;
 
     if (qemuDomainHelperGetVcpus(dom, cpuinfo, dom->def->vcpus,
-                                 NULL, 0) < 0) {
-        virResetLastError();
+                                 NULL, 0, false) < 0) {
 ret = 0; /* it's ok to be silent and go ahead */
 goto cleanup;
 }
-- 
1.9.3



[libvirt] [PATCH 3/4] qemu: make qemuMonitorGetAllBlockStatsInfo silent

2014-09-19 Thread Francesco Romani
The commit 290e3c6b07a introduced a helper to factor out a code path
which is shared between the existing API and the new bulk stats API.
In the bulk stats path errors must be silenced unless critical
(e.g. memory allocation failure).

To address this need, this patch adds an argument to disable error
reporting.
---
 src/qemu/qemu_driver.c   |  4 +-
 src/qemu/qemu_monitor.c  | 12 --
 src/qemu/qemu_monitor.h  |  3 +-
 src/qemu/qemu_monitor_json.c | 87 ++--
 src/qemu/qemu_monitor_json.h |  3 +-
 5 files changed, 65 insertions(+), 44 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 3ff226f..36b96e5 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17696,12 +17696,12 @@ qemuDomainGetStatsBlock(virQEMUDriverPtr driver,
 qemuDomainObjEnterMonitor(driver, dom);
 
     nstats = qemuMonitorGetAllBlockStatsInfo(priv->mon, NULL,
-                                             stats, nstats);
+                                             stats, nstats,
+                                             false);
 
     qemuDomainObjExitMonitor(driver, dom);
 
     if (nstats < 0) {
-        virResetLastError();
         ret = 0; /* still ok, again go ahead silently */
 goto cleanup;
 }
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 10f51c5..cac48d7 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -1765,17 +1765,21 @@ int
 qemuMonitorGetAllBlockStatsInfo(qemuMonitorPtr mon,
 const char *dev_name,
 qemuBlockStatsPtr stats,
-int nstats)
+int nstats,
+bool report)
 {
 int ret;
     VIR_DEBUG("mon=%p dev=%s", mon, dev_name);
 
     if (mon->json) {
         ret = qemuMonitorJSONGetAllBlockStatsInfo(mon, dev_name,
-                                                  stats, nstats);
+                                                  stats, nstats,
+                                                  report);
     } else {
-        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
-                       _("unable to query all block stats with this QEMU"));
+        if (report)
+            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                           _("unable to query all block stats "
+                             "with this QEMU"));
         return -1;
 }
 
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 2a43a3c..1eeb7b9 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -363,7 +363,8 @@ struct _qemuBlockStats {
 int qemuMonitorGetAllBlockStatsInfo(qemuMonitorPtr mon,
 const char *dev_name,
 qemuBlockStatsPtr stats,
-int nstats)
+int nstats,
+bool report)
 ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(3);
 
 int qemuMonitorGetBlockStatsParamsNumber(qemuMonitorPtr mon,
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 857edf1..4810bd3 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -1749,7 +1749,8 @@ int qemuMonitorJSONGetBlockStatsInfo(qemuMonitorPtr mon,
     if (flush_total_times)
         *flush_total_times = -1;
 
-    if (qemuMonitorJSONGetAllBlockStatsInfo(mon, dev_name, &stats, 1) != 1)
+    if (qemuMonitorJSONGetAllBlockStatsInfo(mon, dev_name,
+                                            &stats, 1, true) != 1)
         goto cleanup;
 
     *rd_req = stats.rd_req;
@@ -1777,7 +1778,8 @@ int qemuMonitorJSONGetBlockStatsInfo(qemuMonitorPtr mon,
 int qemuMonitorJSONGetAllBlockStatsInfo(qemuMonitorPtr mon,
 const char *dev_name,
 qemuBlockStatsPtr bstats,
-int nstats)
+int nstats,
+bool report)
 {
 int ret, count;
 size_t i;
@@ -1802,8 +1804,9 @@ int qemuMonitorJSONGetAllBlockStatsInfo(qemuMonitorPtr 
mon,
 
     devices = virJSONValueObjectGet(reply, "return");
     if (!devices || devices->type != VIR_JSON_TYPE_ARRAY) {
-        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
-                       _("blockstats reply was missing device list"));
+        if (report)
+            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                           _("blockstats reply was missing device list"));
         goto cleanup;
     }
 
@@ -1812,9 +1815,10 @@ int qemuMonitorJSONGetAllBlockStatsInfo(qemuMonitorPtr mon,
         virJSONValuePtr dev = virJSONValueArrayGet(devices, i);
         virJSONValuePtr stats;
         if (!dev || dev->type != VIR_JSON_TYPE_OBJECT) {
-            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
-   

[libvirt] [PATCH 2/4] make virNetInterfaceStats silent

2014-09-19 Thread Francesco Romani
virNetInterfaceStats is now used in the bulk stats path, and
in this path errors must be silenced unless critical
(e.g. memory allocation failure).

To address this need, this patch adds an argument to disable error reporting.
---
 src/lxc/lxc_driver.c   |  2 +-
 src/openvz/openvz_driver.c |  2 +-
 src/qemu/qemu_driver.c |  6 ++
 src/util/virstats.c| 21 +
 src/util/virstats.h|  3 ++-
 src/xen/xen_hypervisor.c   |  2 +-
 6 files changed, 20 insertions(+), 16 deletions(-)

diff --git a/src/lxc/lxc_driver.c b/src/lxc/lxc_driver.c
index c3cd62c..2eaeb22 100644
--- a/src/lxc/lxc_driver.c
+++ b/src/lxc/lxc_driver.c
@@ -3112,7 +3112,7 @@ lxcDomainInterfaceStats(virDomainPtr dom,
 }
 
 if (ret == 0)
-        ret = virNetInterfaceStats(path, stats);
+        ret = virNetInterfaceStats(path, stats, true);
     else
         virReportError(VIR_ERR_INVALID_ARG,
                        _("Invalid path, '%s' is not a known interface"), path);
diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c
index b62273a..2b39f36 100644
--- a/src/openvz/openvz_driver.c
+++ b/src/openvz/openvz_driver.c
@@ -2010,7 +2010,7 @@ openvzDomainInterfaceStats(virDomainPtr dom,
 }
 
 if (ret == 0)
-        ret = virNetInterfaceStats(path, stats);
+        ret = virNetInterfaceStats(path, stats, true);
     else
         virReportError(VIR_ERR_INVALID_ARG,
                        _("invalid path, '%s' is not a known interface"), path);
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index dc8d6c3..3ff226f 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -9884,7 +9884,7 @@ qemuDomainInterfaceStats(virDomainPtr dom,
 }
 
 if (ret == 0)
-        ret = virNetInterfaceStats(path, stats);
+        ret = virNetInterfaceStats(path, stats, true);
     else
         virReportError(VIR_ERR_INVALID_ARG,
                        _("invalid path, '%s' is not a known interface"), path);
@@ -17634,10 +17634,8 @@ qemuDomainGetStatsInterface(virQEMUDriverPtr driver 
ATTRIBUTE_UNUSED,
         QEMU_ADD_NAME_PARAM(record, maxparams,
                             "net", i, dom->def->nets[i]->ifname);
 
-        if (virNetInterfaceStats(dom->def->nets[i]->ifname, &tmp) < 0) {
-            virResetLastError();
+        if (virNetInterfaceStats(dom->def->nets[i]->ifname, &tmp, false) < 0)
             continue;
-        }
 
         QEMU_ADD_NET_PARAM(record, maxparams, i,
                            "rx.bytes", tmp.rx_bytes);
diff --git a/src/util/virstats.c b/src/util/virstats.c
index c4725ed..496393b 100644
--- a/src/util/virstats.c
+++ b/src/util/virstats.c
@@ -51,7 +51,8 @@
 #ifdef __linux__
 int
 virNetInterfaceStats(const char *path,
- virDomainInterfaceStatsPtr stats)
+ virDomainInterfaceStatsPtr stats,
+ bool report)
 {
 int path_len;
 FILE *fp;
@@ -115,14 +116,16 @@ virNetInterfaceStats(const char *path,
 }
 VIR_FORCE_FCLOSE(fp);
 
-    virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
-                   _("/proc/net/dev: Interface not found"));
+    if (report)
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                       _("/proc/net/dev: Interface not found"));
 return -1;
 }
 #elif defined(HAVE_GETIFADDRS)  defined(AF_LINK)
 int
 virNetInterfaceStats(const char *path,
- virDomainInterfaceStatsPtr stats)
+ virDomainInterfaceStatsPtr stats,
+ bool report)
 {
 struct ifaddrs *ifap, *ifa;
 struct if_data *ifd;
@@ -158,7 +161,7 @@ virNetInterfaceStats(const char *path,
 }
 }
 
-    if (ret < 0)
+    if (ret < 0 && report)
         virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                        _("Interface not found"));
 
@@ -168,10 +171,12 @@ virNetInterfaceStats(const char *path,
 #else
 int
 virNetInterfaceStats(const char *path ATTRIBUTE_UNUSED,
- virDomainInterfaceStatsPtr stats ATTRIBUTE_UNUSED)
+ virDomainInterfaceStatsPtr stats ATTRIBUTE_UNUSED,
+ bool report)
 {
-    virReportError(VIR_ERR_OPERATION_INVALID, "%s",
-                   _("interface stats not implemented on this platform"));
+    if (report)
+        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                       _("interface stats not implemented on this platform"));
 return -1;
 }
 
diff --git a/src/util/virstats.h b/src/util/virstats.h
index d2c6b64..df993cd 100644
--- a/src/util/virstats.h
+++ b/src/util/virstats.h
@@ -26,6 +26,7 @@
 # include internal.h
 
 extern int virNetInterfaceStats(const char *path,
-virDomainInterfaceStatsPtr stats);
+virDomainInterfaceStatsPtr stats,
+bool report);
 
 #endif /* __STATS_LINUX_H__ */
diff --git a/src/xen/xen_hypervisor.c b/src/xen/xen_hypervisor.c
index d3d4aea..8b257f7 100644
--- a/src/xen/xen_hypervisor.c
+++ 

[libvirt] [PATCH 4/4] qemu: json monitor: reduce duplicated code

2014-09-19 Thread Francesco Romani
This patch replaces repetitive blocks of code with a couple
of macros for the sake of clarity.
There are no changes in behaviour.
---
 src/qemu/qemu_monitor_json.c | 129 ++-
 1 file changed, 53 insertions(+), 76 deletions(-)

diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 4810bd3..4fa72c9 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -1775,6 +1775,25 @@ int qemuMonitorJSONGetBlockStatsInfo(qemuMonitorPtr mon,
 }
 
 
+#define QEMU_MONITOR_JSON_MALFORMED_ENTRY(kind) do { \
+if (report) \
+virReportError(VIR_ERR_INTERNAL_ERROR, \
+   _(blockstats %s entry was not  \
+ in expected format), \
+kind); \
+goto cleanup; \
+} while (0)
+
+
+#define QEMU_MONITOR_JSON_MISSING_STAT(statistic) do { \
+if (report) \
+virReportError(VIR_ERR_INTERNAL_ERROR, \
+_(cannot read %s statistic), \
+statistic); \
+goto cleanup; \
+} while (0)
+
+
 int qemuMonitorJSONGetAllBlockStatsInfo(qemuMonitorPtr mon,
 const char *dev_name,
 qemuBlockStatsPtr bstats,
@@ -1814,26 +1833,16 @@ int qemuMonitorJSONGetAllBlockStatsInfo(qemuMonitorPtr 
mon,
 for (i = 0; i  virJSONValueArraySize(devices)  count  nstats; i++) {
 virJSONValuePtr dev = virJSONValueArrayGet(devices, i);
 virJSONValuePtr stats;
-if (!dev || dev-type != VIR_JSON_TYPE_OBJECT) {
-if (report)
-virReportError(VIR_ERR_INTERNAL_ERROR, %s,
-   _(blockstats device entry was not 
- in expected format));
-goto cleanup;
-}
+if (!dev || dev-type != VIR_JSON_TYPE_OBJECT)
+QEMU_MONITOR_JSON_MALFORMED_ENTRY(device);
 
 /* If dev_name is specified, we are looking for a specific device,
  * so we must be stricter.
  */
 if (dev_name) {
 const char *thisdev = virJSONValueObjectGetString(dev, device);
-if (!thisdev) {
-if (report)
-virReportError(VIR_ERR_INTERNAL_ERROR, %s,
-   _(blockstats device entry was not 
- in expected format));
-goto cleanup;
-}
+if (!thisdev)
+QEMU_MONITOR_JSON_MALFORMED_ENTRY(device);
 
 /* New QEMU has separate names for host  guest side of the disk
  * and libvirt gives the host side a 'drive-' prefix. The passed
@@ -1847,81 +1856,44 @@ int qemuMonitorJSONGetAllBlockStatsInfo(qemuMonitorPtr 
mon,
 }
 
 if ((stats = virJSONValueObjectGet(dev, stats)) == NULL ||
-stats-type != VIR_JSON_TYPE_OBJECT) {
-if (report)
-virReportError(VIR_ERR_INTERNAL_ERROR, %s,
-   _(blockstats stats entry was not 
- in expected format));
-goto cleanup;
-}
+stats-type != VIR_JSON_TYPE_OBJECT)
+QEMU_MONITOR_JSON_MALFORMED_ENTRY(stats);
 
 if (virJSONValueObjectGetNumberLong(stats, rd_bytes,
-bstats-rd_bytes)  0) {
-if (report)
-virReportError(VIR_ERR_INTERNAL_ERROR,
-   _(cannot read %s statistic),
-   rd_bytes);
-goto cleanup;
-}
+bstats-rd_bytes)  0)
+QEMU_MONITOR_JSON_MISSING_STAT(rd_bytes);
+
 if (virJSONValueObjectGetNumberLong(stats, rd_operations,
-bstats-rd_req)  0) {
-if (report)
-virReportError(VIR_ERR_INTERNAL_ERROR,
-   _(cannot read %s statistic),
-rd_operations);
-goto cleanup;
-}
+bstats-rd_req)  0)
+QEMU_MONITOR_JSON_MISSING_STAT(rd_operations);
+
 if (virJSONValueObjectHasKey(stats, rd_total_time_ns) 
 (virJSONValueObjectGetNumberLong(stats, rd_total_time_ns,
- bstats-rd_total_times)  0)) {
-if (report)
-virReportError(VIR_ERR_INTERNAL_ERROR,
-   _(cannot read %s statistic),
-   rd_total_time_ns);
-goto cleanup;
-}
+ bstats-rd_total_times)  0))
+QEMU_MONITOR_JSON_MISSING_STAT(rd_total_time_ns);
+
 if (virJSONValueObjectGetNumberLong(stats, wr_bytes,
-bstats-wr_bytes)  0) {
-if (report)
-

[libvirt] [PATCHv5 3/8] qemu: bulk stats: implement balloon group

2014-09-15 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_BALLOON
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/libvirt.c|  6 ++
 src/qemu/qemu_driver.c   | 38 ++
 3 files changed, 45 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index eb62f96..a5033ed 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2514,6 +2514,7 @@ struct _virDomainStatsRecord {
 typedef enum {
 VIR_DOMAIN_STATS_STATE = (1  0), /* return domain state */
 VIR_DOMAIN_STATS_CPU_TOTAL = (1  1), /* return domain CPU info */
+VIR_DOMAIN_STATS_BALLOON = (1  2), /* return domain balloon info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index dfbd5c7..b3b71a0 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21602,6 +21602,12 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * cpu.user - user cpu time spent as unsigned long long.
  * cpu.system - system cpu time spent as unsigned long long.
  *
+ * VIR_DOMAIN_STATS_BALLOON: Return memory balloon device information.
+ * The typed parameter keys are in this format:
+ * balloon.current - the memory in kiB currently used
+ * as unsigned long long.
+ * balloon.maximum - the maximum memory in kiB allowed
+ * as unsigned long long.
  *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index d0fad61..745b4f1 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -2525,6 +2525,7 @@ static int qemuDomainSendKey(virDomainPtr domain,
 return ret;
 }
 
+
 static int qemuDomainGetInfo(virDomainPtr dom,
  virDomainInfoPtr info)
 {
@@ -17427,6 +17428,42 @@ qemuDomainGetStatsCpu(virQEMUDriverPtr driver 
ATTRIBUTE_UNUSED,
 return 0;
 }
 
+static int
+qemuDomainGetStatsBalloon(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
+  virDomainObjPtr dom,
+  virDomainStatsRecordPtr record,
+  int *maxparams,
+  unsigned int privflags ATTRIBUTE_UNUSED)
+{
+qemuDomainObjPrivatePtr priv = dom-privateData;
+unsigned long long cur_balloon = 0;
+int err = 0;
+
+if (dom-def-memballoon 
+dom-def-memballoon-model == VIR_DOMAIN_MEMBALLOON_MODEL_NONE) {
+cur_balloon = dom-def-mem.max_balloon;
+} else if (virQEMUCapsGet(priv-qemuCaps, QEMU_CAPS_BALLOON_EVENT)) {
+cur_balloon = dom-def-mem.cur_balloon;
+} else {
+err = -1;
+}
+
+if (!err  virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+balloon.current,
+cur_balloon)  0)
+return -1;
+
+if (virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+balloon.maximum,
+dom-def-mem.max_balloon)  0)
+return -1;
+
+return 0;
+}
 
 typedef int
 (*qemuDomainGetStatsFunc)(virQEMUDriverPtr driver,
@@ -17444,6 +17481,7 @@ struct qemuDomainGetStatsWorker {
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE, false },
 { qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL, false },
+{ qemuDomainGetStatsBalloon, VIR_DOMAIN_STATS_BALLOON, true },
 { NULL, 0, false }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
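
A client-side sketch of how the two balloon keys documented above might be
consumed (illustrative only, not part of this patch; it assumes libvirt >=
1.2.8 headers providing the typed-parameter types and a record already
filled by virConnectGetAllDomainStats()):

    #include <stdio.h>
    #include <string.h>
    #include <libvirt/libvirt.h>

    static void
    print_balloon_stats(virDomainStatsRecordPtr rec)
    {
        int i;

        for (i = 0; i < rec->nparams; i++) {
            virTypedParameterPtr p = &rec->params[i];

            if (p->type != VIR_TYPED_PARAM_ULLONG)
                continue;
            if (strcmp(p->field, "balloon.current") == 0)
                printf("balloon current: %llu kiB\n", p->value.ul);
            else if (strcmp(p->field, "balloon.maximum") == 0)
                printf("balloon maximum: %llu kiB\n", p->value.ul);
        }
    }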


[libvirt] [PATCHv5 1/8] qemu: bulk stats: extend internal collection API

2014-09-15 Thread Francesco Romani
Future patches which will implement more
bulk stats groups for QEMU will need to access
the connection object.

To accommodate that, a few changes are needed:

* enrich internal prototype to pass qemu driver object.
* add per-group flag to mark if one collector needs
  monitor access or not.
* if at least one collector of the requested stats
  needs monitor access, we must start a query job
  for each domain. The specific collectors will
  run nested monitor jobs inside that.
* even though it was requested, the monitor may not
  be available; pass a flag to the workers signalling
  whether it is, so that as much data as possible is
  gathered anyway.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/qemu/qemu_driver.c | 62 +++---
 1 file changed, 54 insertions(+), 8 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 73edda3..39e9d27 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17356,7 +17356,8 @@ qemuConnectGetDomainCapabilities(virConnectPtr conn,
 
 
 static int
-qemuDomainGetStatsState(virDomainObjPtr dom,
+qemuDomainGetStatsState(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
+virDomainObjPtr dom,
 virDomainStatsRecordPtr record,
 int *maxparams,
 unsigned int privflags ATTRIBUTE_UNUSED)
@@ -17379,8 +17380,17 @@ qemuDomainGetStatsState(virDomainObjPtr dom,
 }
 
 
+typedef enum {
+QEMU_DOMAIN_STATS_HAVE_MONITOR = (1  0), /* QEMU monitor available */
+} qemuDomainStatsFlags;
+
+
+#define HAVE_MONITOR(flags) ((flags)  QEMU_DOMAIN_STATS_HAVE_MONITOR)
+
+
 typedef int
-(*qemuDomainGetStatsFunc)(virDomainObjPtr dom,
+(*qemuDomainGetStatsFunc)(virQEMUDriverPtr driver,
+  virDomainObjPtr dom,
   virDomainStatsRecordPtr record,
   int *maxparams,
   unsigned int flags);
@@ -17388,11 +17398,12 @@ typedef int
 struct qemuDomainGetStatsWorker {
 qemuDomainGetStatsFunc func;
 unsigned int stats;
+bool monitor;
 };
 
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
-{ qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE},
-{ NULL, 0 }
+{ qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE, false },
+{ NULL, 0, false }
 };
 
 
@@ -17424,6 +17435,20 @@ qemuDomainGetStatsCheckSupport(unsigned int *stats,
 }
 
 
+static bool
+qemuDomainGetStatsNeedMonitor(unsigned int stats)
+{
+size_t i;
+
+for (i = 0; qemuDomainGetStatsWorkers[i].func; i++)
+if (stats  qemuDomainGetStatsWorkers[i].stats)
+if (qemuDomainGetStatsWorkers[i].monitor)
+return true;
+
+return false;
+}
+
+
 static int
 qemuDomainGetStats(virConnectPtr conn,
virDomainObjPtr dom,
@@ -17441,8 +17466,8 @@ qemuDomainGetStats(virConnectPtr conn,
 
 for (i = 0; qemuDomainGetStatsWorkers[i].func; i++) {
 if (stats  qemuDomainGetStatsWorkers[i].stats) {
-if (qemuDomainGetStatsWorkers[i].func(dom, tmp, maxparams,
-  flags)  0)
+if (qemuDomainGetStatsWorkers[i].func(conn-privateData, dom, tmp,
+  maxparams, flags)  0)
 goto cleanup;
 }
 }
@@ -17481,6 +17506,8 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
 int nstats = 0;
 size_t i;
 int ret = -1;
+unsigned int privflags = 0;
+unsigned int domflags = 0;
 
 if (ndoms)
 virCheckFlags(VIR_CONNECT_GET_ALL_DOMAINS_STATS_ENFORCE_STATS, -1);
@@ -17515,7 +17542,11 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
 if (VIR_ALLOC_N(tmpstats, ndoms + 1)  0)
 goto cleanup;
 
+if (qemuDomainGetStatsNeedMonitor(stats))
+privflags |= QEMU_DOMAIN_STATS_HAVE_MONITOR;
+
 for (i = 0; i  ndoms; i++) {
+domflags = privflags;
 virDomainStatsRecordPtr tmp = NULL;
 
 if (!(dom = qemuDomObjFromDomain(doms[i])))
@@ -17525,12 +17556,22 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
 !virConnectGetAllDomainStatsCheckACL(conn, dom-def))
 continue;
 
-if (qemuDomainGetStats(conn, dom, stats, tmp, flags)  0)
-goto cleanup;
+if (HAVE_MONITOR(domflags) 
+ qemuDomainObjBeginJob(driver, dom, QEMU_JOB_QUERY)  0)
+/* As it was never requested. Gather as much as possible anyway. */
+domflags = ~QEMU_DOMAIN_STATS_HAVE_MONITOR;
+
+if (qemuDomainGetStats(conn, dom, stats, tmp, domflags)  0)
+goto endjob;
 
 if (tmp)
 tmpstats[nstats++] = tmp;
 
+if (HAVE_MONITOR(domflags)  !qemuDomainObjEndJob(driver, dom)) {
+dom = NULL;
+goto cleanup;
+}
+
 virObjectUnlock(dom);
 dom = NULL;
 }
@@ -17540,6 +17581,11
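
The table-driven dispatch described in the commit message can be sketched in
isolation (names invented for the example, not libvirt code): each collector
is registered once with the stats bit it serves and a flag saying whether it
needs the monitor, a pre-scan decides whether a query job must be opened at
all, and every collector receives the have_monitor outcome so it can degrade
gracefully.

    #include <stdbool.h>
    #include <stddef.h>

    typedef int (*collector_fn)(void *dom, bool have_monitor);

    struct collector {
        collector_fn func;
        unsigned int group;     /* one VIR_DOMAIN_STATS_* style bit */
        bool needs_monitor;
    };

    /* scan the table once: does any requested group need the monitor? */
    static bool
    need_monitor(const struct collector *table, unsigned int requested)
    {
        size_t i;

        for (i = 0; table[i].func; i++)
            if ((requested & table[i].group) && table[i].needs_monitor)
                return true;
        return false;
    }

    /* run every requested collector; have_monitor tells each one whether
     * it may issue monitor commands or must rely on cached data only */
    static int
    run_collectors(const struct collector *table, unsigned int requested,
                   void *dom, bool have_monitor)
    {
        size_t i;

        for (i = 0; table[i].func; i++)
            if ((requested & table[i].group) &&
                table[i].func(dom, have_monitor) < 0)
                return -1;
        return 0;
    }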

[libvirt] [PATCHv5 2/8] qemu: bulk stats: implement CPU stats group

2014-09-15 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_CPU_TOTAL
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/libvirt.c|  7 +++
 src/qemu/qemu_driver.c   | 41 +
 3 files changed, 49 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index c2f9d26..eb62f96 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2513,6 +2513,7 @@ struct _virDomainStatsRecord {
 
 typedef enum {
 VIR_DOMAIN_STATS_STATE = (1  0), /* return domain state */
+VIR_DOMAIN_STATS_CPU_TOTAL = (1  1), /* return domain CPU info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index f7e5a37..dfbd5c7 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21596,6 +21596,13 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * state.reason - reason for entering given state, returned as int from
  *  virDomain*Reason enum corresponding to given state.
  *
+ * VIR_DOMAIN_STATS_CPU_TOTAL: Return CPU statistics and usage information.
+ * The typed parameter keys are in this format:
+ * cpu.time - total cpu time spent for this domain as unsigned long long.
+ * cpu.user - user cpu time spent as unsigned long long.
+ * cpu.system - system cpu time spent as unsigned long long.
+ *
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 39e9d27..d0fad61 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -96,6 +96,7 @@
 #include storage/storage_driver.h
 #include virhostdev.h
 #include domain_capabilities.h
+#include vircgroup.h
 
 #define VIR_FROM_THIS VIR_FROM_QEMU
 
@@ -17388,6 +17389,45 @@ typedef enum {
 #define HAVE_MONITOR(flags) ((flags)  QEMU_DOMAIN_STATS_HAVE_MONITOR)
 
 
+static int
+qemuDomainGetStatsCpu(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
+  virDomainObjPtr dom,
+  virDomainStatsRecordPtr record,
+  int *maxparams,
+  unsigned int privflags ATTRIBUTE_UNUSED)
+{
+qemuDomainObjPrivatePtr priv = dom-privateData;
+unsigned long long cpu_time = 0;
+unsigned long long user_time = 0;
+unsigned long long sys_time = 0;
+int err = 0;
+
+err = virCgroupGetCpuacctUsage(priv-cgroup, cpu_time);
+if (!err  virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+cpu.time,
+cpu_time)  0)
+return -1;
+
+err = virCgroupGetCpuacctStat(priv-cgroup, user_time, sys_time);
+if (!err  virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+cpu.user,
+user_time)  0)
+return -1;
+if (!err  virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+cpu.system,
+sys_time)  0)
+return -1;
+
+return 0;
+}
+
+
 typedef int
 (*qemuDomainGetStatsFunc)(virQEMUDriverPtr driver,
   virDomainObjPtr dom,
@@ -17403,6 +17443,7 @@ struct qemuDomainGetStatsWorker {
 
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE, false },
+{ qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL, false },
 { NULL, 0, false }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
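
A worked example of how the cpu.time counter is typically consumed
(illustrative only; the poll interval and host CPU count are assumptions of
the example, not libvirt API): two successive samples of the nanosecond
counter, divided by the elapsed wall-clock time and the number of host CPUs,
give a utilisation figure.

    #include <stdio.h>

    int main(void)
    {
        unsigned long long t0 = 123400000000ULL;  /* cpu.time at first poll */
        unsigned long long t1 = 125800000000ULL;  /* cpu.time two seconds later */
        double interval_ns = 2.0e9;               /* poll interval in ns */
        int host_cpus = 4;

        double usage = 100.0 * (double)(t1 - t0) / (interval_ns * host_cpus);
        printf("guest CPU usage: %.1f%% of host capacity\n", usage);
        return 0;
    }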


[libvirt] [PATCHv5 8/8] qemu: bulk stats: add block allocation information

2014-09-15 Thread Francesco Romani
Management software wants to be able to allocate disk space on demand.
To support this it needs to keep track of the space occupancy
of the block device.
This information is reported by qemu as part of block stats.

This patch extends the block information in the bulk stats with
the allocation information.

To keep the same behaviour, a helper is extracted from
qemuMonitorJSONGetBlockExtent in order to get per-device
allocation information.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/libvirt.c|  2 +
 src/qemu/qemu_driver.c   | 18 +
 src/qemu/qemu_monitor.h  |  1 +
 src/qemu/qemu_monitor_json.c | 91 ++--
 4 files changed, 92 insertions(+), 20 deletions(-)

diff --git a/src/libvirt.c b/src/libvirt.c
index ab10a3a..a8892a2 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21654,6 +21654,8 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  *  unsigned long long.
  * block.num.errors - Xen only: the 'oo_req' value as
  *unsigned long long.
+ * block.<num>.allocation - offset of the highest written sector
+ *as unsigned long long.
  *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 446b04b..a34fffd 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17636,6 +17636,19 @@ do { \
 goto cleanup; \
 } while (0)
 
+#define QEMU_ADD_BLOCK_PARAM_ULL(record, maxparams, num, name, value) \
+do { \
+char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
+snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
+ block.%zu.%s, num, name); \
+if (virTypedParamsAddULLong((record)-params, \
+(record)-nparams, \
+maxparams, \
+param_name, \
+value)  0) \
+goto cleanup; \
+} while (0)
+
 static int
 qemuDomainGetStatsBlock(virQEMUDriverPtr driver,
 virDomainObjPtr dom,
@@ -17690,6 +17703,9 @@ qemuDomainGetStatsBlock(virQEMUDriverPtr driver,
 fl.reqs, stats[i].flush_req);
 QEMU_ADD_BLOCK_PARAM_LL(record, maxparams, i,
 fl.times, stats[i].flush_total_times);
+
+QEMU_ADD_BLOCK_PARAM_ULL(record, maxparams, i,
+ allocation, stats[i].wr_highest_offset);
 }
 
 ret = 0;
@@ -17701,6 +17717,8 @@ qemuDomainGetStatsBlock(virQEMUDriverPtr driver,
 
 #undef QEMU_ADD_BLOCK_PARAM_LL
 
+#undef QEMU_ADD_BLOCK_PARAM_ULL
+
 #undef QEMU_ADD_NAME_PARAM
 
 #undef QEMU_ADD_COUNT_PARAM
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 6eb0036..d0f0bf5 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -358,6 +358,7 @@ struct _qemuBlockStats {
 long long wr_total_times;
 long long flush_req;
 long long flush_total_times;
+unsigned long long wr_highest_offset;
 };
 
 int qemuMonitorGetAllBlockStatsInfo(qemuMonitorPtr mon,
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index d45c41f..1948b85 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -1774,6 +1774,40 @@ int qemuMonitorJSONGetBlockStatsInfo(qemuMonitorPtr mon,
 }
 
 
+typedef enum {
+QEMU_MONITOR_BLOCK_EXTENT_ERROR_OK,
+QEMU_MONITOR_BLOCK_EXTENT_ERROR_NOPARENT,
+QEMU_MONITOR_BLOCK_EXTENT_ERROR_NOSTATS,
+QEMU_MONITOR_BLOCK_EXTENT_ERROR_NOOFFSET
+} qemuMonitorBlockExtentError;
+
+
+static int
+qemuMonitorJSONDevGetBlockExtent(virJSONValuePtr dev,
+ unsigned long long *extent)
+{
+virJSONValuePtr stats;
+virJSONValuePtr parent;
+
+if ((parent = virJSONValueObjectGet(dev, parent)) == NULL ||
+parent-type != VIR_JSON_TYPE_OBJECT) {
+return QEMU_MONITOR_BLOCK_EXTENT_ERROR_NOPARENT;
+}
+
+if ((stats = virJSONValueObjectGet(parent, stats)) == NULL ||
+stats-type != VIR_JSON_TYPE_OBJECT) {
+return QEMU_MONITOR_BLOCK_EXTENT_ERROR_NOSTATS;
+}
+
+if (virJSONValueObjectGetNumberUlong(stats, wr_highest_offset,
+ extent)  0) {
+return QEMU_MONITOR_BLOCK_EXTENT_ERROR_NOOFFSET;
+}
+
+return QEMU_MONITOR_BLOCK_EXTENT_ERROR_OK;
+}
+
+
 int qemuMonitorJSONGetAllBlockStatsInfo(qemuMonitorPtr mon,
 const char *dev_name,
 qemuBlockStatsPtr bstats,
@@ -1910,6 +1944,9 @@ int qemuMonitorJSONGetAllBlockStatsInfo(qemuMonitorPtr 
mon,
 goto cleanup;
 }
 
+/* it's ok to not have this information here. Just skip silently. */
+qemuMonitorJSONDevGetBlockExtent(dev, bstats-wr_highest_offset);
+
 count++;
 bstats++;
 
@@ -2005,6 +2042,36 @@ int
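
The management-side use hinted at in the commit message can be sketched as
follows (hedged example; the watermark policy, the volume size and the
decision to extend are assumptions of the example, not anything libvirt
defines): compare the reported block.<num>.allocation value against the
current size of a thin-provisioned volume and grow it when the remaining
room drops below a threshold.

    #include <stdbool.h>

    /* allocation  = block.<num>.allocation (highest written offset, bytes)
     * volume_size = current size of the backing volume (bytes)
     * watermark   = minimum free room to keep, e.g. 512 MiB             */
    static bool
    needs_extension(unsigned long long allocation,
                    unsigned long long volume_size,
                    unsigned long long watermark)
    {
        return allocation + watermark > volume_size;
    }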

[libvirt] [PATCHv5 0/8] bulk stats: QEMU implementation

2014-09-15 Thread Francesco Romani
This patchset enhances the QEMU support
for the new bulk stats API to include
equivalents of these APIs:

virDomainBlockInfo
virDomainGetInfo - for balloon stats
virDomainGetCPUStats
virDomainBlockStatsFlags
virDomainInterfaceStats
virDomainGetVcpusFlags
virDomainGetVcpus

This subset of API is the one oVirt relies on.
Scale/stress test on an oVirt test environment is in progress.

The patchset is organized as follows:
- the first patch enhances the internal stats gathering API
  to accommodate the needs of the groups which extract information
  using QEMU monitor jobs.
- the next five patches implement the bulk stats groups, refactoring
  to extract internal helpers every time it is feasible and
  convenient.
- the seventh patch enhances the virsh domstats command with options
  to use the new bulk stats.
- the last patch enhances the block stats group adding the wr_highest_offset
  information, needed by oVirt for thin provisioned disks.

ChangeLog

v5: address reviewers' comments
- Eric pointed out a possible flaw in balloon stats if the QEMU monitor needs
  to be queried. A proper fix requires further discussion and API changes
  (possibly just a new flag); however, since the balloon event is available
  in QEMU >= 1.2, I just dropped the query and relied on the event instead.
  Support for older QEMUs will be reintroduced, if needed, with following
  patches.
- fix: per-domain monitor check and reporting. (pointed out by Peter)
- reset last error when fail silently. (pointed out by Peter)

v4: address reviewer's comment
- addressed reviewers comments (Peter, Wang Rui).
- pushed domain check into group stats functions. This follows
  the strategy to gather and report as much data as possible,
  silently skipping errors along the way.
- moved the block allocation patch to the end of the series.

v3: more polishing and fixes after first review
- addressed Eric's comments.
- squashed patches which extracts helpers with patches which
  use them.
- changed gathering strategy: the code now tries to reap as much
  information as possible instead of giving up and bailing out with
  an error. Only critical errors cause the bulk stats to fail.
- moved away from the transfer semantics. I find it error-prone
  and not flexible enough, so I'd like to avoid it as much as possible.
- rearranged helpers to have one single QEMU query job with
  many monitor jobs nested inside.
- fixed docs.
- implemented missing virsh domstats bits.

in v2: polishing and optimizations.
- incorporated feedback from Li Wei (thanks).
- added documentation.
- optimized block group to gather all the information with just
  one call to QEMU monitor.
- stripped to bare bones and merged the 'block info' group into the
  'block' group - oVirt actually needs just one stat from there.
- reorganized the keys to be more consistent and shorter.


Francesco Romani (8):
  qemu: bulk stats: extend internal collection API
  qemu: bulk stats: implement CPU stats group
  qemu: bulk stats: implement balloon group
  qemu: bulk stats: implement VCPU group
  qemu: bulk stats: implement interface group
  qemu: bulk stats: implement block group
  virsh: add options to query bulk stats group
  qemu: bulk stats: add block allocation information

 include/libvirt/libvirt.h.in |   5 +
 src/libvirt.c|  61 +
 src/qemu/qemu_driver.c   | 530 +--
 src/qemu/qemu_monitor.c  |  26 +++
 src/qemu/qemu_monitor.h  |  21 ++
 src/qemu/qemu_monitor_json.c | 227 +-
 src/qemu/qemu_monitor_json.h |   4 +
 tools/virsh-domain-monitor.c |  35 +++
 tools/virsh.pod  |   4 +-
 9 files changed, 777 insertions(+), 136 deletions(-)

-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
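
For context, a minimal consumer of the series might look roughly like the
sketch below (not part of the patches; it assumes client headers from
libvirt >= 1.2.8 that already carry virConnectGetAllDomainStats() and the
VIR_DOMAIN_STATS_* bits added here):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");
        virDomainStatsRecordPtr *records = NULL;
        unsigned int stats = VIR_DOMAIN_STATS_STATE |
                             VIR_DOMAIN_STATS_CPU_TOTAL |
                             VIR_DOMAIN_STATS_BALLOON |
                             VIR_DOMAIN_STATS_BLOCK;
        int nrecords, i, j;

        if (!conn)
            return 1;

        nrecords = virConnectGetAllDomainStats(conn, stats, &records, 0);
        for (i = 0; i < nrecords; i++) {
            printf("domain: %s\n", virDomainGetName(records[i]->dom));
            for (j = 0; j < records[i]->nparams; j++)
                printf("  %s\n", records[i]->params[j].field);
        }

        if (records)
            virDomainStatsRecordListFree(records);
        virConnectClose(conn);
        return 0;
    }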


[libvirt] [PATCHv5 6/8] qemu: bulk stats: implement block group

2014-09-15 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_BLOCK
group of statistics.

To do so, a helper function to get the block stats
of all the disks of a domain is added.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |   1 +
 src/libvirt.c|  20 +++
 src/qemu/qemu_driver.c   |  81 ++
 src/qemu/qemu_monitor.c  |  26 +
 src/qemu/qemu_monitor.h  |  20 +++
 src/qemu/qemu_monitor_json.c | 136 +--
 src/qemu/qemu_monitor_json.h |   4 ++
 7 files changed, 245 insertions(+), 43 deletions(-)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 17b1b43..724314e 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2517,6 +2517,7 @@ typedef enum {
 VIR_DOMAIN_STATS_BALLOON = (1  2), /* return domain balloon info */
 VIR_DOMAIN_STATS_VCPU = (1  3), /* return domain virtual CPU info */
 VIR_DOMAIN_STATS_INTERFACE = (1  4), /* return domain interfaces info */
+VIR_DOMAIN_STATS_BLOCK = (1  5), /* return domain block info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index 5534b2f..ab10a3a 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21635,6 +21635,26 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * net.num.tx.errs - transmission errors as unsigned long long.
  * net.num.tx.drop - transmit packets dropped as unsigned long long.
  *
+ * VIR_DOMAIN_STATS_BLOCK: Return block devices statistics.
+ * The typed parameter keys are in this format:
+ * block.count - number of block devices on this domain
+ * as unsigned int.
+ * block.num.name - name of the block device num as string.
+ *  matches the name of the block device.
+ * block.num.rd.reqs - number of read requests as unsigned long long.
+ * block.num.rd.bytes - number of read bytes as unsigned long long.
+ * block.num.rd.times - total time (ns) spent on reads as
+ *  unsigned long long.
+ * block.num.wr.reqs - number of write requests as unsigned long long.
+ * block.num.wr.bytes - number of written bytes as unsigned long long.
+ * block.num.wr.times - total time (ns) spent on writes as
+ *  unsigned long long.
+ * block.num.fl.reqs - total flush requests as unsigned long long.
+ * block.num.fl.times - total time (ns) spent on cache flushing as
+ *  unsigned long long.
+ * block.num.errors - Xen only: the 'oo_req' value as
+ *unsigned long long.
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 016499d..446b04b 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -9664,6 +9664,7 @@ qemuDomainBlockStats(virDomainPtr dom,
 return ret;
 }
 
+
 static int
 qemuDomainBlockStatsFlags(virDomainPtr dom,
   const char *path,
@@ -17621,6 +17622,85 @@ qemuDomainGetStatsInterface(virQEMUDriverPtr driver 
ATTRIBUTE_UNUSED,
 
 #undef QEMU_ADD_NET_PARAM
 
+/* expects a LL, but typed parameter must be ULL */
+#define QEMU_ADD_BLOCK_PARAM_LL(record, maxparams, num, name, value) \
+do { \
+char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
+snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
+ block.%zu.%s, num, name); \
+if (value = 0  virTypedParamsAddULLong((record)-params, \
+  (record)-nparams, \
+  maxparams, \
+  param_name, \
+  value)  0) \
+goto cleanup; \
+} while (0)
+
+static int
+qemuDomainGetStatsBlock(virQEMUDriverPtr driver,
+virDomainObjPtr dom,
+virDomainStatsRecordPtr record,
+int *maxparams,
+unsigned int privflags)
+{
+size_t i;
+int ret = -1;
+int nstats = 0;
+qemuBlockStatsPtr stats = NULL;
+qemuDomainObjPrivatePtr priv = dom-privateData;
+
+if (!HAVE_MONITOR(privflags) || !virDomainObjIsActive(dom))
+return 0; /* it's ok, just go ahead silently */
+
+if (VIR_ALLOC_N(stats, dom-def-ndisks)  0)
+return -1;
+
+qemuDomainObjEnterMonitor(driver, dom);
+
+nstats = qemuMonitorGetAllBlockStatsInfo(priv-mon, NULL,
+ stats, nstats);
+
+qemuDomainObjExitMonitor(driver, dom);
+
+if (nstats  0) {
+virResetLastError();
+ret = 0; /* still ok, again go ahead silently */
+goto cleanup;
+}
+
+QEMU_ADD_COUNT_PARAM(record, maxparams, block, dom-def-ndisks);
+
+for (i = 0; i  nstats; i++) {
+QEMU_ADD_NAME_PARAM(record, maxparams,
+block, i, dom-def-disks[i]-dst

[libvirt] [PATCHv5 4/8] qemu: bulk stats: implement VCPU group

2014-09-15 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_VCPU
group of statistics.
To do so, this patch also extracts a helper to gather the
vCPU information.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |   1 +
 src/libvirt.c|  12 +++
 src/qemu/qemu_driver.c   | 201 +--
 3 files changed, 150 insertions(+), 64 deletions(-)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index a5033ed..4b851a5 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2515,6 +2515,7 @@ typedef enum {
 VIR_DOMAIN_STATS_STATE = (1  0), /* return domain state */
 VIR_DOMAIN_STATS_CPU_TOTAL = (1  1), /* return domain CPU info */
 VIR_DOMAIN_STATS_BALLOON = (1  2), /* return domain balloon info */
+VIR_DOMAIN_STATS_VCPU = (1  3), /* return domain virtual CPU info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index b3b71a0..1d91c99 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21609,6 +21609,18 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * balloon.maximum - the maximum memory in kiB allowed
  * as unsigned long long.
  *
+ * VIR_DOMAIN_STATS_VCPU: Return virtual CPU statistics.
+ * Due to VCPU hotplug, the vcpu.num.* array could be sparse.
+ * The actual size of the array corresponds to vcpu.current.
+ * The array size will never exceed vcpu.maximum.
+ * The typed parameter keys are in this format:
+ * vcpu.current - current number of online virtual CPUs as unsigned int.
+ * vcpu.maximum - maximum number of online virtual CPUs as unsigned int.
+ * vcpu.<num>.state - state of the virtual CPU <num>, as int
+ *  from virVcpuState enum.
+ * vcpu.<num>.time - virtual cpu time spent by virtual CPU <num>
+ * as unsigned long long.
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 745b4f1..4a92f58 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1380,6 +1380,76 @@ qemuGetProcessInfo(unsigned long long *cpuTime, int 
*lastCpu, long *vm_rss,
 }
 
 
+static int
+qemuDomainHelperGetVcpus(virDomainObjPtr vm, virVcpuInfoPtr info, int maxinfo,
+ unsigned char *cpumaps, int maplen)
+{
+int maxcpu, hostcpus;
+size_t i, v;
+qemuDomainObjPrivatePtr priv = vm-privateData;
+
+if ((hostcpus = nodeGetCPUCount())  0)
+return -1;
+
+maxcpu = maplen * 8;
+if (maxcpu  hostcpus)
+maxcpu = hostcpus;
+
+/* Clamp to actual number of vcpus */
+if (maxinfo  priv-nvcpupids)
+maxinfo = priv-nvcpupids;
+
+if (maxinfo = 1) {
+if (info != NULL) {
+memset(info, 0, sizeof(*info) * maxinfo);
+for (i = 0; i  maxinfo; i++) {
+info[i].number = i;
+info[i].state = VIR_VCPU_RUNNING;
+
+if (priv-vcpupids != NULL 
+qemuGetProcessInfo((info[i].cpuTime),
+   (info[i].cpu),
+   NULL,
+   vm-pid,
+   priv-vcpupids[i])  0) {
+virReportSystemError(errno, %s,
+ _(cannot get vCPU placement  pCPU 
time));
+return -1;
+}
+}
+}
+
+if (cpumaps != NULL) {
+memset(cpumaps, 0, maplen * maxinfo);
+if (priv-vcpupids != NULL) {
+for (v = 0; v  maxinfo; v++) {
+unsigned char *cpumap = VIR_GET_CPUMAP(cpumaps, maplen, v);
+virBitmapPtr map = NULL;
+unsigned char *tmpmap = NULL;
+int tmpmapLen = 0;
+
+if (virProcessGetAffinity(priv-vcpupids[v],
+  map, maxcpu)  0)
+return -1;
+virBitmapToData(map, tmpmap, tmpmapLen);
+if (tmpmapLen  maplen)
+tmpmapLen = maplen;
+memcpy(cpumap, tmpmap, tmpmapLen);
+
+VIR_FREE(tmpmap);
+virBitmapFree(map);
+}
+} else {
+virReportError(VIR_ERR_OPERATION_INVALID,
+   %s, _(cpu affinity is not available));
+return -1;
+}
+}
+}
+return maxinfo;
+}
+
+
 static virDomainPtr qemuDomainLookupByID(virConnectPtr conn,
  int id)
 {
@@ -4960,10 +5030,7 @@ qemuDomainGetVcpus(virDomainPtr dom,
int maplen)
 {
 virDomainObjPtr vm;
-size_t i;
-int v, maxcpu, hostcpus;
 int ret = -1;
-qemuDomainObjPrivatePtr priv;
 
 if (!(vm

[libvirt] [PATCHv5 7/8] virsh: add options to query bulk stats group

2014-09-15 Thread Francesco Romani
Exports to the domstats commands the new bulk stats groups.

Signed-off-by: Francesco Romani from...@redhat.com
---
 tools/virsh-domain-monitor.c | 35 +++
 tools/virsh.pod  |  4 +++-
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/tools/virsh-domain-monitor.c b/tools/virsh-domain-monitor.c
index 055d8d2..d013ca8 100644
--- a/tools/virsh-domain-monitor.c
+++ b/tools/virsh-domain-monitor.c
@@ -1972,6 +1972,26 @@ static const vshCmdOptDef opts_domstats[] = {
  .type = VSH_OT_BOOL,
  .help = N_(report domain state),
 },
+{.name = cpu-total,
+ .type = VSH_OT_BOOL,
+ .help = N_(report domain physical cpu usage),
+},
+{.name = balloon,
+ .type = VSH_OT_BOOL,
+ .help = N_(report domain balloon statistics),
+},
+{.name = vcpu,
+ .type = VSH_OT_BOOL,
+ .help = N_(report domain virtual cpu information),
+},
+{.name = interface,
+ .type = VSH_OT_BOOL,
+ .help = N_(report domain network interface information),
+},
+{.name = block,
+ .type = VSH_OT_BOOL,
+ .help = N_(report domain block device statistics),
+},
 {.name = list-active,
  .type = VSH_OT_BOOL,
  .help = N_(list only active domains),
@@ -2063,6 +2083,21 @@ cmdDomstats(vshControl *ctl, const vshCmd *cmd)
 if (vshCommandOptBool(cmd, state))
 stats |= VIR_DOMAIN_STATS_STATE;
 
+if (vshCommandOptBool(cmd, cpu-total))
+stats |= VIR_DOMAIN_STATS_CPU_TOTAL;
+
+if (vshCommandOptBool(cmd, balloon))
+stats |= VIR_DOMAIN_STATS_BALLOON;
+
+if (vshCommandOptBool(cmd, vcpu))
+stats |= VIR_DOMAIN_STATS_VCPU;
+
+if (vshCommandOptBool(cmd, interface))
+stats |= VIR_DOMAIN_STATS_INTERFACE;
+
+if (vshCommandOptBool(cmd, block))
+stats |= VIR_DOMAIN_STATS_BLOCK;
+
 if (vshCommandOptBool(cmd, list-active))
 flags |= VIR_CONNECT_GET_ALL_DOMAINS_STATS_ACTIVE;
 
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 5d4b12b..b929480 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -814,6 +814,7 @@ Isnapshot-create for disk snapshots) will accept either 
target
 or unique source names printed by this command.
 
 =item Bdomstats [I--raw] [I--enforce] [I--state]
+[I--cpu-total][I--balloon][I--vcpu][I--interface][I--block]
 [[I--list-active] [I--list-inactive] [I--list-persistent]
 [I--list-transient] [I--list-running] [I--list-paused]
 [I--list-shutoff] [I--list-other]] | [Idomain ...]
@@ -831,7 +832,8 @@ behavior use the I--raw flag.
 
 The individual statistics groups are selectable via specific flags. By
 default all supported statistics groups are returned. Supported
-statistics groups flags are: I--state.
+statistics groups flags are: I--state, I--cpu-total, I--balloon,
+I--vcpu, I--interface, I--block.
 
 Selecting a specific statistics groups doesn't guarantee that the
 daemon supports the selected group of stats. Flag I--enforce
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCHv5 5/8] qemu: bulk stats: implement interface group

2014-09-15 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_INTERFACE
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/libvirt.c| 14 +++
 src/qemu/qemu_driver.c   | 89 
 3 files changed, 104 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 4b851a5..17b1b43 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2516,6 +2516,7 @@ typedef enum {
 VIR_DOMAIN_STATS_CPU_TOTAL = (1  1), /* return domain CPU info */
 VIR_DOMAIN_STATS_BALLOON = (1  2), /* return domain balloon info */
 VIR_DOMAIN_STATS_VCPU = (1  3), /* return domain virtual CPU info */
+VIR_DOMAIN_STATS_INTERFACE = (1  4), /* return domain interfaces info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index 1d91c99..5534b2f 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21621,6 +21621,20 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * vcpu.num.time - virtual cpu time spent by virtual CPU num
  * as unsigned long long.
  *
+ * VIR_DOMAIN_STATS_INTERFACE: Return network interface statistics.
+ * The typed parameter keys are in this format:
+ * net.count - number of network interfaces on this domain
+ *   as unsigned int.
+ * net.num.name - name of the interface num as string.
+ * net.num.rx.bytes - bytes received as unsigned long long.
+ * net.num.rx.pkts - packets received as unsigned long long.
+ * net.num.rx.errs - receive errors as unsigned long long.
+ * net.num.rx.drop - receive packets dropped as unsigned long long.
+ * net.num.tx.bytes - bytes transmitted as unsigned long long.
+ * net.num.tx.pkts - packets transmitted as unsigned long long.
+ * net.num.tx.errs - transmission errors as unsigned long long.
+ * net.num.tx.drop - transmit packets dropped as unsigned long long.
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 4a92f58..016499d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17536,6 +17536,94 @@ qemuDomainGetStatsVcpu(virQEMUDriverPtr driver 
ATTRIBUTE_UNUSED,
 return ret;
 }
 
+#define QEMU_ADD_COUNT_PARAM(record, maxparams, type, count) \
+do { \
+char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
+snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, %s.count, type); \
+if (virTypedParamsAddUInt((record)-params, \
+  (record)-nparams, \
+  maxparams, \
+  param_name, \
+  count)  0) \
+return -1; \
+} while (0)
+
+#define QEMU_ADD_NAME_PARAM(record, maxparams, type, num, name) \
+do { \
+char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
+snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
+ %s.%zu.name, type, num); \
+if (virTypedParamsAddString((record)-params, \
+(record)-nparams, \
+maxparams, \
+param_name, \
+name)  0) \
+return -1; \
+} while (0)
+
+#define QEMU_ADD_NET_PARAM(record, maxparams, num, name, value) \
+do { \
+char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
+snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
+ net.%zu.%s, num, name); \
+if (value = 0  virTypedParamsAddULLong((record)-params, \
+  (record)-nparams, \
+  maxparams, \
+  param_name, \
+  value)  0) \
+return -1; \
+} while (0)
+
+static int
+qemuDomainGetStatsInterface(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
+virDomainObjPtr dom,
+virDomainStatsRecordPtr record,
+int *maxparams,
+unsigned int privflags ATTRIBUTE_UNUSED)
+{
+size_t i;
+struct _virDomainInterfaceStats tmp;
+
+QEMU_ADD_COUNT_PARAM(record, maxparams, net, dom-def-nnets);
+
+/* Check the path is one of the domain's network interfaces. */
+for (i = 0; i  dom-def-nnets; i++) {
+memset(tmp, 0, sizeof(tmp));
+
+if (virNetInterfaceStats(dom-def-nets[i]-ifname, tmp)  0) {
+virResetLastError();
+continue;
+}
+
+QEMU_ADD_NAME_PARAM(record, maxparams,
+net, i, dom-def-nets[i]-ifname);
+
+QEMU_ADD_NET_PARAM(record, maxparams, i,
+   rx.bytes, tmp.rx_bytes);
+QEMU_ADD_NET_PARAM(record, maxparams, i,
+   rx.pkts, tmp.rx_packets);
+QEMU_ADD_NET_PARAM(record, maxparams, i

Re: [libvirt] [PATCHv5 0/8] bulk stats: QEMU implementation

2014-09-15 Thread Francesco Romani
- Original Message -
 From: Peter Krempa pkre...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Monday, September 15, 2014 2:25:08 PM
 Subject: Re: [libvirt] [PATCHv5 0/8] bulk stats: QEMU implementation

[...]
  ChangeLog
  
  v5: address reviewer's comment
  - Eric pointed out a possible flaw in balloon stats if QEMU monitor needs
to be queried. A proper fix require further discussion and API changes
(possbily just a new flag); However, since the balloon event is available
in QEMU = 1.2, I just dropped the query and relied on the event instead.
Support for older QEMUs will be reintroduced, if needed, with following
patches.
  - fix: per-domain monitor check and reporting. (pointed out by Peter)
  - reset last error when fail silently. (pointed out by Peter)
 
 The changes look good. I've done a few finishing touches and I'm going
 to give the series some testing before pushing. The series should be
 pushed by today after I finish.
 
 Thanks for your cooperation in finishing this.

My pleasure. Thank you and the other reviewers for your work!

Bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCHv5 2/8] qemu: bulk stats: implement CPU stats group

2014-09-15 Thread Francesco Romani
- Original Message -
 From: Peter Krempa pkre...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Monday, September 15, 2014 2:49:50 PM
 Subject: Re: [libvirt] [PATCHv5 2/8] qemu: bulk stats: implement CPU stats 
 group

  +static int
  +qemuDomainGetStatsCpu(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
  +  virDomainObjPtr dom,
  +  virDomainStatsRecordPtr record,
  +  int *maxparams,
  +  unsigned int privflags ATTRIBUTE_UNUSED)
  +{
  +qemuDomainObjPrivatePtr priv = dom-privateData;
  +unsigned long long cpu_time = 0;
  +unsigned long long user_time = 0;
  +unsigned long long sys_time = 0;
  +int err = 0;
  +
  +err = virCgroupGetCpuacctUsage(priv-cgroup, cpu_time);
 
 This code doesn't check if priv->cgroup isn't NULL and dereferences it
 unconditionally. This would crash with shutoff machines.

Ouch. Right, of course. I tested on running VMs actually, and that explains
why I haven't caught it :(
 
 I'll add an
   if (priv->group)
   return 0;
 
 right at the beginning.

You mean

   if (!priv->cgroup)
   return 0;

I believe

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCHv3 7/8] qemu: bulk stats: add block allocation information

2014-09-12 Thread Francesco Romani


- Original Message -
 From: Peter Krempa pkre...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Tuesday, September 9, 2014 3:19:28 PM
 Subject: Re: [libvirt] [PATCHv3 7/8] qemu: bulk stats: add block allocation 
 information
 
 On 09/08/14 15:05, Francesco Romani wrote:
  Management software, want to be able to allocate disk space on demand.
 
 s/, want/ wants/
 
  To support this, they need keep track of the space occupation
 
 s/,//
 
  of the block device.
  This information is reported by qemu as part of block stats.
  
  This patch extend the block information in the bulk stats with
  the allocation information.
  
  To keep the same behaviour, an helper is extracted from
 
 s/,// s/an/a/

Thanks, will fix both.

[...]
   int qemuMonitorJSONGetAllBlockStatsInfo(qemuMonitorPtr mon,
   const char *dev_name,
   qemuBlockStatsPtr bstats,
  @@ -1919,6 +1968,10 @@ int
  qemuMonitorJSONGetAllBlockStatsInfo(qemuMonitorPtr mon,
   goto cleanup;
   }
   
  +/* it's ok to not have this information here. Just skip silently.
  */
  +qemuMonitorJSONDevGetBlockExtent(dev, false,
  + &bstats->wr_highest_offset);
 
 As you want to ignore errors, it would probably be better just to copy
 the extraction code here without error reporting rather than extracting
 it to a helper ... this isn't something that would be reused any more.

I definitely see your point.
But I'm not really OK with replicating code, even if it's just twice.
So, I'll work a bit more on that. I'll move this patch to the end
of the series to gain a bit more time.

If I can't come up with a better approach I'll go this way
and just copy the extraction code as suggested above.

Bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCHv4 5/8] qemu: bulk stats: implement interface group

2014-09-12 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_INTERFACE
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/libvirt.c| 14 +++
 src/qemu/qemu_driver.c   | 87 
 3 files changed, 102 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 00eafc6..b73d14b 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2514,6 +2514,7 @@ typedef enum {
 VIR_DOMAIN_STATS_CPU_TOTAL = (1  1), /* return domain CPU info */
 VIR_DOMAIN_STATS_BALLOON = (1  2), /* return domain balloon info */
 VIR_DOMAIN_STATS_VCPU = (1  3), /* return domain virtual CPU info */
+VIR_DOMAIN_STATS_INTERFACE = (1  4), /* return domain interfaces info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index ba7a780..d9ffb6f 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21588,6 +21588,20 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * vcpu.num.time - virtual cpu time spent by virtual CPU num
  * as unsigned long long.
  *
+ * VIR_DOMAIN_STATS_INTERFACE: Return network interface statistics.
+ * The typed parameter keys are in this format:
+ * net.count - number of network interfaces on this domain
+ *   as unsigned int.
+ * net.num.name - name of the interface num as string.
+ * net.num.rx.bytes - bytes received as unsigned long long.
+ * net.num.rx.pkts - packets received as unsigned long long.
+ * net.num.rx.errs - receive errors as unsigned long long.
+ * net.num.rx.drop - receive packets dropped as unsigned long long.
+ * net.num.tx.bytes - bytes transmitted as unsigned long long.
+ * net.num.tx.pkts - packets transmitted as unsigned long long.
+ * net.num.tx.errs - transmission errors as unsigned long long.
+ * net.num.tx.drop - transmit packets dropped as unsigned long long.
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 12bf5e4..7e5d707 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17458,6 +17458,92 @@ qemuDomainGetStatsVcpu(virQEMUDriverPtr driver 
ATTRIBUTE_UNUSED,
 return ret;
 }
 
+#define QEMU_ADD_COUNT_PARAM(record, maxparams, type, count) \
+do { \
+char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
+snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, %s.count, type); \
+if (virTypedParamsAddUInt((record)-params, \
+  (record)-nparams, \
+  maxparams, \
+  param_name, \
+  count)  0) \
+return -1; \
+} while (0)
+
+#define QEMU_ADD_NAME_PARAM(record, maxparams, type, num, name) \
+do { \
+char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
+snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
+ %s.%zu.name, type, num); \
+if (virTypedParamsAddString((record)-params, \
+(record)-nparams, \
+maxparams, \
+param_name, \
+name)  0) \
+return -1; \
+} while (0)
+
+#define QEMU_ADD_NET_PARAM(record, maxparams, num, name, value) \
+do { \
+char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
+snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
+ net.%zu.%s, num, name); \
+if (value = 0  virTypedParamsAddULLong((record)-params, \
+  (record)-nparams, \
+  maxparams, \
+  param_name, \
+  value)  0) \
+return -1; \
+} while (0)
+
+static int
+qemuDomainGetStatsInterface(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
+virDomainObjPtr dom,
+virDomainStatsRecordPtr record,
+int *maxparams,
+unsigned int privflags ATTRIBUTE_UNUSED)
+{
+size_t i;
+struct _virDomainInterfaceStats tmp;
+
+QEMU_ADD_COUNT_PARAM(record, maxparams, net, dom-def-nnets);
+
+/* Check the path is one of the domain's network interfaces. */
+for (i = 0; i  dom-def-nnets; i++) {
+memset(tmp, 0, sizeof(tmp));
+
+if (virNetInterfaceStats(dom-def-nets[i]-ifname, tmp)  0)
+continue;
+
+QEMU_ADD_NAME_PARAM(record, maxparams,
+net, i, dom-def-nets[i]-ifname);
+
+QEMU_ADD_NET_PARAM(record, maxparams, i,
+   rx.bytes, tmp.rx_bytes);
+QEMU_ADD_NET_PARAM(record, maxparams, i,
+   rx.pkts, tmp.rx_packets);
+QEMU_ADD_NET_PARAM(record, maxparams, i,
+   rx.errs, tmp.rx_errs

[libvirt] [PATCHv4 3/8] qemu: bulk stats: implement balloon group

2014-09-12 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_BALLOON
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/libvirt.c|  6 
 src/qemu/qemu_driver.c   | 73 
 3 files changed, 80 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 8665c6c..c005442 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2512,6 +2512,7 @@ struct _virDomainStatsRecord {
 typedef enum {
 VIR_DOMAIN_STATS_STATE = (1  0), /* return domain state */
 VIR_DOMAIN_STATS_CPU_TOTAL = (1  1), /* return domain CPU info */
+VIR_DOMAIN_STATS_BALLOON = (1  2), /* return domain balloon info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index b22f9aa..3fa86ab 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21569,6 +21569,12 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * cpu.user - user cpu time spent as unsigned long long.
  * cpu.system - system cpu time spent as unsigned long long.
  *
+ * VIR_DOMAIN_STATS_BALLOON: Return memory balloon device information.
+ * The typed parameter keys are in this format:
+ * balloon.current - the memory in kiB currently used
+ * as unsigned long long.
+ * balloon.maximum - the maximum memory in kiB allowed
+ * as unsigned long long.
  *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 9014976..f677884 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -2525,6 +2525,48 @@ static int qemuDomainSendKey(virDomainPtr domain,
 return ret;
 }
 
+
+/*
+ * FIXME: this code is a stripped down version of what is done in
+ * qemuDomainGetInfo. Due to the different handling of jobs, it is not
+ * trivial to extract a common helper function.
+ */
+static int
+qemuDomainGetBalloonMemory(virQEMUDriverPtr driver, virDomainObjPtr vm,
+   unsigned long long *memory)
+{
+qemuDomainObjPrivatePtr priv = vm-privateData;
+
+if (vm-def-memballoon 
+vm-def-memballoon-model == VIR_DOMAIN_MEMBALLOON_MODEL_NONE) {
+*memory = vm-def-mem.max_balloon;
+} else if (virQEMUCapsGet(priv-qemuCaps, QEMU_CAPS_BALLOON_EVENT)) {
+*memory = vm-def-mem.cur_balloon;
+} else {
+int rv;
+unsigned long long balloon;
+
+qemuDomainObjEnterMonitor(driver, vm);
+rv = qemuMonitorGetBalloonInfo(priv-mon, balloon);
+qemuDomainObjExitMonitor(driver, vm);
+
+if (rv  0) {
+/* We couldn't get current memory allocation but that's not
+ * a show stopper; we wouldn't get it if there was a job
+ * active either
+ */
+*memory = vm-def-mem.cur_balloon;
+} else if (rv  0) {
+*memory = balloon;
+} else {
+/* Balloon not supported, so maxmem is always the allocation */
+return -1;
+}
+}
+return 0;
+}
+
+
 static int qemuDomainGetInfo(virDomainPtr dom,
  virDomainInfoPtr info)
 {
@@ -17315,6 +17357,36 @@ qemuDomainGetStatsCpu(virQEMUDriverPtr driver 
ATTRIBUTE_UNUSED,
 return 0;
 }
 
+static int
+qemuDomainGetStatsBalloon(virQEMUDriverPtr driver,
+  virDomainObjPtr dom,
+  virDomainStatsRecordPtr record,
+  int *maxparams,
+  unsigned int privflags)
+{
+if (HAVE_MONITOR(privflags)  virDomainObjIsActive(dom)) {
+unsigned long long cur_balloon = 0;
+int err = 0;
+
+err = qemuDomainGetBalloonMemory(driver, dom, cur_balloon);
+
+if (!err  virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+balloon.current,
+cur_balloon)  0)
+return -1;
+}
+
+if (virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+balloon.maximum,
+dom-def-mem.max_balloon)  0)
+return -1;
+
+return 0;
+}
 
 typedef int
 (*qemuDomainGetStatsFunc)(virQEMUDriverPtr driver,
@@ -17332,6 +17404,7 @@ struct qemuDomainGetStatsWorker {
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE, false },
 { qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL, false },
+{ qemuDomainGetStatsBalloon, VIR_DOMAIN_STATS_BALLOON, true },
 { NULL, 0, false }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir

[libvirt] [PATCHv4 6/8] qemu: bulk stats: implement block group

2014-09-12 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_BLOCK
group of statistics.

To do so, a helper function to get the block stats
of all the disks of a domain is added.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |   1 +
 src/libvirt.c|  20 +++
 src/qemu/qemu_driver.c   |  96 ++
 src/qemu/qemu_monitor.c  |  26 +
 src/qemu/qemu_monitor.h  |  20 +++
 src/qemu/qemu_monitor_json.c | 136 +--
 src/qemu/qemu_monitor_json.h |   4 ++
 7 files changed, 260 insertions(+), 43 deletions(-)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index b73d14b..da4b58e 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2515,6 +2515,7 @@ typedef enum {
 VIR_DOMAIN_STATS_BALLOON = (1  2), /* return domain balloon info */
 VIR_DOMAIN_STATS_VCPU = (1  3), /* return domain virtual CPU info */
 VIR_DOMAIN_STATS_INTERFACE = (1  4), /* return domain interfaces info */
+VIR_DOMAIN_STATS_BLOCK = (1  5), /* return domain block info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index d9ffb6f..8611361 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21602,6 +21602,26 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * net.num.tx.errs - transmission errors as unsigned long long.
  * net.num.tx.drop - transmit packets dropped as unsigned long long.
  *
+ * VIR_DOMAIN_STATS_BLOCK: Return block devices statistics.
+ * The typed parameter keys are in this format:
+ * block.count - number of block devices on this domain
+ * as unsigned int.
+ * block.num.name - name of the block device num as string.
+ *  matches the name of the block device.
+ * block.num.rd.reqs - number of read requests as unsigned long long.
+ * block.num.rd.bytes - number of read bytes as unsigned long long.
+ * block.num.rd.times - total time (ns) spent on reads as
+ *  unsigned long long.
+ * block.num.wr.reqs - number of write requests as unsigned long long.
+ * block.num.wr.bytes - number of written bytes as unsigned long long.
+ * block.num.wr.times - total time (ns) spent on writes as
+ *  unsigned long long.
+ * block.num.fl.reqs - total flush requests as unsigned long long.
+ * block.num.fl.times - total time (ns) spent on cache flushing as
+ *  unsigned long long.
+ * block.num.errors - Xen only: the 'oo_req' value as
+ *unsigned long long.
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 7e5d707..4644f4a 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -9687,6 +9687,31 @@ qemuDomainBlockStats(virDomainPtr dom,
 return ret;
 }
 
+
+/*
+ * Returns at most the first `nstats' stats, then stops.
+ * Returns the number of stats filled.
+ */
+static int
+qemuDomainHelperGetBlockStats(virQEMUDriverPtr driver,
+  virDomainObjPtr vm,
+  qemuBlockStatsPtr stats,
+  int nstats)
+{
+int ret;
+qemuDomainObjPrivatePtr priv = vm-privateData;
+
+qemuDomainObjEnterMonitor(driver, vm);
+
+ret = qemuMonitorGetAllBlockStatsInfo(priv-mon, NULL,
+  stats, nstats);
+
+qemuDomainObjExitMonitor(driver, vm);
+
+return ret;
+}
+
+
 static int
 qemuDomainBlockStatsFlags(virDomainPtr dom,
   const char *path,
@@ -17541,6 +17566,76 @@ qemuDomainGetStatsInterface(virQEMUDriverPtr driver 
ATTRIBUTE_UNUSED,
 
 #undef QEMU_ADD_NET_PARAM
 
+/* expects a LL, but typed parameter must be ULL */
+#define QEMU_ADD_BLOCK_PARAM_LL(record, maxparams, num, name, value) \
+do { \
+char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
+snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
+ block.%zu.%s, num, name); \
+if (value = 0  virTypedParamsAddULLong((record)-params, \
+  (record)-nparams, \
+  maxparams, \
+  param_name, \
+  value)  0) \
+goto cleanup; \
+} while (0)
+
+static int
+qemuDomainGetStatsBlock(virQEMUDriverPtr driver,
+virDomainObjPtr dom,
+virDomainStatsRecordPtr record,
+int *maxparams,
+unsigned int privflags)
+{
+size_t i;
+int ret = -1;
+int nstats = 0;
+qemuBlockStatsPtr stats = NULL;
+
+if (!HAVE_MONITOR(privflags) || !virDomainObjIsActive(dom))
+return 0; /* it's ok, just go ahead silently */
+
+if (VIR_ALLOC_N(stats, dom-def-ndisks)  0)
+return -1

[libvirt] [PATCHv4 1/8] qemu: bulk stats: extend internal collection API

2014-09-12 Thread Francesco Romani
Future patches which will implement more
bulk stats groups for QEMU will need to access
the connection object.

To accommodate that, a few changes are needed:

* enrich internal prototype to pass qemu driver object.
* add per-group flag to mark if one collector needs
  monitor access or not.
* if at least one collector of the requested stats
  needs monitor access, we must start a query job
  for each domain. The specific collectors will
  run nested monitor jobs inside that.
* even though it was requested, the monitor may not
  be available; pass a flag to the workers signalling
  whether it is, so that as much data as possible is
  gathered anyway.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/qemu/qemu_driver.c | 60 +++---
 1 file changed, 52 insertions(+), 8 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 917b286..279c8b3 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17244,7 +17244,8 @@ qemuConnectGetDomainCapabilities(virConnectPtr conn,
 
 
 static int
-qemuDomainGetStatsState(virDomainObjPtr dom,
+qemuDomainGetStatsState(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
+virDomainObjPtr dom,
 virDomainStatsRecordPtr record,
 int *maxparams,
 unsigned int privflags ATTRIBUTE_UNUSED)
@@ -17267,8 +17268,17 @@ qemuDomainGetStatsState(virDomainObjPtr dom,
 }
 
 
+typedef enum {
+QEMU_DOMAIN_STATS_HAVE_MONITOR = (1 << 0), /* QEMU monitor available */
+} qemuDomainStatsFlags;
+
+
+#define HAVE_MONITOR(flags) ((flags) & QEMU_DOMAIN_STATS_HAVE_MONITOR)
+
+
 typedef int
-(*qemuDomainGetStatsFunc)(virDomainObjPtr dom,
+(*qemuDomainGetStatsFunc)(virQEMUDriverPtr driver,
+  virDomainObjPtr dom,
   virDomainStatsRecordPtr record,
   int *maxparams,
   unsigned int flags);
@@ -17276,11 +17286,12 @@ typedef int
 struct qemuDomainGetStatsWorker {
 qemuDomainGetStatsFunc func;
 unsigned int stats;
+bool monitor;
 };
 
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
-{ qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE},
-{ NULL, 0 }
+{ qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE, false },
+{ NULL, 0, false }
 };
 
 
@@ -17312,6 +17323,20 @@ qemuDomainGetStatsCheckSupport(unsigned int *stats,
 }
 
 
+static bool
+qemuDomainGetStatsNeedMonitor(unsigned int stats)
+{
+size_t i;
+
+for (i = 0; qemuDomainGetStatsWorkers[i].func; i++)
+if (stats & qemuDomainGetStatsWorkers[i].stats)
+if (qemuDomainGetStatsWorkers[i].monitor)
+return true;
+
+return false;
+}
+
+
 static int
 qemuDomainGetStats(virConnectPtr conn,
virDomainObjPtr dom,
@@ -17329,8 +17354,8 @@ qemuDomainGetStats(virConnectPtr conn,
 
 for (i = 0; qemuDomainGetStatsWorkers[i].func; i++) {
 if (stats & qemuDomainGetStatsWorkers[i].stats) {
-if (qemuDomainGetStatsWorkers[i].func(dom, tmp, &maxparams,
-  flags) < 0)
+if (qemuDomainGetStatsWorkers[i].func(conn->privateData, dom, tmp,
+  &maxparams, flags) < 0)
 goto cleanup;
 }
 }
@@ -17369,6 +17394,7 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
 int nstats = 0;
 size_t i;
 int ret = -1;
+unsigned int privflags = 0;
 
 if (ndoms)
 virCheckFlags(VIR_CONNECT_GET_ALL_DOMAINS_STATS_ENFORCE_STATS, -1);
@@ -17403,6 +17429,9 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
 if (VIR_ALLOC_N(tmpstats, ndoms + 1) < 0)
 goto cleanup;
 
+if (qemuDomainGetStatsNeedMonitor(stats))
+privflags |= QEMU_DOMAIN_STATS_HAVE_MONITOR;
+
 for (i = 0; i  ndoms; i++) {
 virDomainStatsRecordPtr tmp = NULL;
 
@@ -17413,12 +17442,22 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
 !virConnectGetAllDomainStatsCheckACL(conn, dom->def))
 continue;
 
-if (qemuDomainGetStats(conn, dom, stats, &tmp, flags) < 0)
-goto cleanup;
+if (HAVE_MONITOR(privflags) &&
+ qemuDomainObjBeginJob(driver, dom, QEMU_JOB_QUERY) < 0)
+/* As it was never requested. Gather as much as possible anyway. */
+privflags &= ~QEMU_DOMAIN_STATS_HAVE_MONITOR;
+
+if (qemuDomainGetStats(conn, dom, stats, &tmp, privflags) < 0)
+goto endjob;
 
 if (tmp)
 tmpstats[nstats++] = tmp;
 
+if (HAVE_MONITOR(privflags) && !qemuDomainObjEndJob(driver, dom)) {
+dom = NULL;
+goto cleanup;
+}
+
 virObjectUnlock(dom);
 dom = NULL;
 }
@@ -17428,6 +17467,11 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
 
 ret = nstats;
 
+ endjob:
+if (HAVE_MONITOR(privflags)  dom

[libvirt] [PATCHv3 7/8] virsh: add options to query bulk stats group

2014-09-12 Thread Francesco Romani
Exports the new bulk stats groups to the virsh domstats command.
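
As an illustration (invocation only; 'vm01' is a placeholder domain name):

  $ virsh domstats --balloon --block vm01

limits the returned records for 'vm01' to the balloon and block groups,
while a plain 'virsh domstats' keeps returning all supported groups.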

Signed-off-by: Francesco Romani from...@redhat.com
---
 tools/virsh-domain-monitor.c | 35 +++
 tools/virsh.pod  |  4 +++-
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/tools/virsh-domain-monitor.c b/tools/virsh-domain-monitor.c
index 055d8d2..d013ca8 100644
--- a/tools/virsh-domain-monitor.c
+++ b/tools/virsh-domain-monitor.c
@@ -1972,6 +1972,26 @@ static const vshCmdOptDef opts_domstats[] = {
  .type = VSH_OT_BOOL,
  .help = N_("report domain state"),
 },
+{.name = "cpu-total",
+ .type = VSH_OT_BOOL,
+ .help = N_("report domain physical cpu usage"),
+},
+{.name = "balloon",
+ .type = VSH_OT_BOOL,
+ .help = N_("report domain balloon statistics"),
+},
+{.name = "vcpu",
+ .type = VSH_OT_BOOL,
+ .help = N_("report domain virtual cpu information"),
+},
+{.name = "interface",
+ .type = VSH_OT_BOOL,
+ .help = N_("report domain network interface information"),
+},
+{.name = "block",
+ .type = VSH_OT_BOOL,
+ .help = N_("report domain block device statistics"),
+},
 {.name = "list-active",
  .type = VSH_OT_BOOL,
  .help = N_("list only active domains"),
@@ -2063,6 +2083,21 @@ cmdDomstats(vshControl *ctl, const vshCmd *cmd)
 if (vshCommandOptBool(cmd, "state"))
 stats |= VIR_DOMAIN_STATS_STATE;
 
+if (vshCommandOptBool(cmd, "cpu-total"))
+stats |= VIR_DOMAIN_STATS_CPU_TOTAL;
+
+if (vshCommandOptBool(cmd, "balloon"))
+stats |= VIR_DOMAIN_STATS_BALLOON;
+
+if (vshCommandOptBool(cmd, "vcpu"))
+stats |= VIR_DOMAIN_STATS_VCPU;
+
+if (vshCommandOptBool(cmd, "interface"))
+stats |= VIR_DOMAIN_STATS_INTERFACE;
+
+if (vshCommandOptBool(cmd, "block"))
+stats |= VIR_DOMAIN_STATS_BLOCK;
+
 if (vshCommandOptBool(cmd, "list-active"))
 flags |= VIR_CONNECT_GET_ALL_DOMAINS_STATS_ACTIVE;
 
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 60ee515..0a316dd 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -814,6 +814,7 @@ I<snapshot-create> for disk snapshots) will accept either target
 or unique source names printed by this command.
 
 =item B<domstats> [I<--raw>] [I<--enforce>] [I<--state>]
+[I<--cpu-total>][I<--balloon>][I<--vcpu>][I<--interface>][I<--block>]
 [[I<--list-active>] [I<--list-inactive>] [I<--list-persistent>]
 [I<--list-transient>] [I<--list-running>] [I<--list-paused>]
 [I<--list-shutoff>] [I<--list-other>]] | [I<domain> ...]
@@ -831,7 +832,8 @@ behavior use the I--raw flag.
 
 The individual statistics groups are selectable via specific flags. By
 default all supported statistics groups are returned. Supported
-statistics groups flags are: I<--state>.
+statistics groups flags are: I<--state>, I<--cpu-total>, I<--balloon>,
+I<--vcpu>, I<--interface>, I<--block>.
 
 Selecting a specific statistics groups doesn't guarantee that the
 daemon supports the selected group of stats. Flag I--enforce
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCHv4 0/8] bulk stats: QEMU implementation

2014-09-12 Thread Francesco Romani
This patchset enhances the QEMU support
for the new bulk stats API to include
equivalents of these APIs:

virDomainBlockInfo
virDomainGetInfo - for balloon stats
virDomainGetCPUStats
virDomainBlockStatsFlags
virDomainInterfaceStats
virDomainGetVcpusFlags
virDomainGetVcpus

This subset of API is the one oVirt relies on.
Scale/stress test on an oVirt test environment is in progress.

The patchset is organized as follows:
- the first patch enhances the internal stats gathering API
  to accommodate the needs of the groups which extract information
  using QEMU monitor jobs.
- the next five patches implement the bulk stats groups, refactoring
  to extract internal helpers whenever it is feasible and convenient.
- the seventh patch enhances the virsh domstats command with options
  to use the new bulk stats.
- the last patch enhances the block stats group adding the wr_highest_offset
  information, needed by oVirt for thin provisioned disks.

ChangeLog

v4: fixes and cleanups
- addressed reviewers comments (Peter, Wang Rui).
- pushed domain check into group stats functions. This follows
  the strategy to gather and report as much data as possible,
  silently skipping errors along the way.
- moved the block allocation patch to the end of the series.

v3: more polishing and fixes after first review
- addressed Eric's comments.
- squashed patches which extracts helpers with patches which
  use them.
- changed gathering strategy: now code tries to reap as much
  information as possible instead to give up and bail out with
  error. Only critical errors cause the bulk stats to fail.
- moved away from the transfer semantics. I find it error-prone
  and not flexible enough, I'd like to avoid as much as possible.
- rearranged helpers to have one single QEMU query job with
  many monitor jobs nested inside.
- fixed docs.
- implemented missing virsh domstats bits.

in v2: polishing and optimizations.
- incorporated feedback from Li Wei (thanks).
- added documentation.
- optimized block group to gather all the information with just
  one call to QEMU monitor.
- stripped to bare bones merged the 'block info' group into the
  'block' group - oVirt actually needs just one stat from there.
- reorganized the keys to be more consistent and shorter.


Francesco Romani (8):
  qemu: bulk stats: extend internal collection API
  qemu: bulk stats: implement CPU stats group
  qemu: bulk stats: implement balloon group
  qemu: bulk stats: implement VCPU group
  qemu: bulk stats: implement interface group
  qemu: bulk stats: implement block group
  virsh: add options to query bulk stats group
  qemu: bulk stats: add block allocation information

 include/libvirt/libvirt.h.in |   5 +
 src/libvirt.c|  61 +
 src/qemu/qemu_driver.c   | 575 +--
 src/qemu/qemu_monitor.c  |  26 ++
 src/qemu/qemu_monitor.h  |  21 ++
 src/qemu/qemu_monitor_json.c | 227 -
 src/qemu/qemu_monitor_json.h |   4 +
 tools/virsh-domain-monitor.c |  35 +++
 tools/virsh.pod  |   4 +-
 9 files changed, 822 insertions(+), 136 deletions(-)

-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCHv4 8/8] qemu: bulk stats: add block allocation information

2014-09-12 Thread Francesco Romani
Management software wants to be able to allocate disk space on demand.
To support this, they need to keep track of the space occupied
by the block device.
This information is reported by qemu as part of the block stats.

This patch extends the block information in the bulk stats with
the allocation information.

To keep the same behaviour a helper is extracted from
qemuMonitorJSONGetBlockExtent in order to get per-device
allocation information.
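
For illustration, with this patch a guest with two disks could report keys
like the following (device names and values are made up):

  block.count=2
  block.0.name=vda
  block.0.allocation=8589934592
  block.1.name=vdb
  block.1.allocation=1073741824

Management software can then compare block.<num>.allocation with the
physical size of a thin-provisioned volume to decide when to extend it.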

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/libvirt.c|  2 +
 src/qemu/qemu_driver.c   | 18 +
 src/qemu/qemu_monitor.h  |  1 +
 src/qemu/qemu_monitor_json.c | 91 ++--
 4 files changed, 92 insertions(+), 20 deletions(-)

diff --git a/src/libvirt.c b/src/libvirt.c
index 8611361..294b948 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21621,6 +21621,8 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  *  unsigned long long.
  * block.<num>.errors - Xen only: the 'oo_req' value as
  *                      unsigned long long.
+ * block.<num>.allocation - offset of the highest written sector
+ *                          as unsigned long long.
  *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 4644f4a..4931e93 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17580,6 +17580,19 @@ do { \
 goto cleanup; \
 } while (0)
 
+#define QEMU_ADD_BLOCK_PARAM_ULL(record, maxparams, num, name, value) \
+do { \
+char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
+snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
+ "block.%zu.%s", num, name); \
+if (virTypedParamsAddULLong(&(record)->params, \
+&(record)->nparams, \
+maxparams, \
+param_name, \
+value) < 0) \
+goto cleanup; \
+} while (0)
+
 static int
 qemuDomainGetStatsBlock(virQEMUDriverPtr driver,
 virDomainObjPtr dom,
@@ -17625,6 +17638,9 @@ qemuDomainGetStatsBlock(virQEMUDriverPtr driver,
 "fl.reqs", stats[i].flush_req);
 QEMU_ADD_BLOCK_PARAM_LL(record, maxparams, i,
 "fl.times", stats[i].flush_total_times);
+
+QEMU_ADD_BLOCK_PARAM_ULL(record, maxparams, i,
+ "allocation", stats[i].wr_highest_offset);
 }
 
 ret = 0;
@@ -17636,6 +17652,8 @@ qemuDomainGetStatsBlock(virQEMUDriverPtr driver,
 
 #undef QEMU_ADD_BLOCK_PARAM_LL
 
+#undef QEMU_ADD_BLOCK_PARAM_ULL
+
 #undef QEMU_ADD_NAME_PARAM
 
 #undef QEMU_ADD_COUNT_PARAM
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 8e3fb44..97d7336 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -358,6 +358,7 @@ struct _qemuBlockStats {
 long long wr_total_times;
 long long flush_req;
 long long flush_total_times;
+unsigned long long wr_highest_offset;
 };
 
 int qemuMonitorGetAllBlockStatsInfo(qemuMonitorPtr mon,
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 847dcd4..31b1676 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -1774,6 +1774,40 @@ int qemuMonitorJSONGetBlockStatsInfo(qemuMonitorPtr mon,
 }
 
 
+typedef enum {
+QEMU_MONITOR_BLOCK_EXTENT_ERROR_OK,
+QEMU_MONITOR_BLOCK_EXTENT_ERROR_NOPARENT,
+QEMU_MONITOR_BLOCK_EXTENT_ERROR_NOSTATS,
+QEMU_MONITOR_BLOCK_EXTENT_ERROR_NOOFFSET
+} qemuMonitorBlockExtentError;
+
+
+static int
+qemuMonitorJSONDevGetBlockExtent(virJSONValuePtr dev,
+ unsigned long long *extent)
+{
+virJSONValuePtr stats;
+virJSONValuePtr parent;
+
+if ((parent = virJSONValueObjectGet(dev, "parent")) == NULL ||
+parent->type != VIR_JSON_TYPE_OBJECT) {
+return QEMU_MONITOR_BLOCK_EXTENT_ERROR_NOPARENT;
+}
+
+if ((stats = virJSONValueObjectGet(parent, "stats")) == NULL ||
+stats->type != VIR_JSON_TYPE_OBJECT) {
+return QEMU_MONITOR_BLOCK_EXTENT_ERROR_NOSTATS;
+}
+
+if (virJSONValueObjectGetNumberUlong(stats, "wr_highest_offset",
+ extent) < 0) {
+return QEMU_MONITOR_BLOCK_EXTENT_ERROR_NOOFFSET;
+}
+
+return QEMU_MONITOR_BLOCK_EXTENT_ERROR_OK;
+}
+
+
 int qemuMonitorJSONGetAllBlockStatsInfo(qemuMonitorPtr mon,
 const char *dev_name,
 qemuBlockStatsPtr bstats,
@@ -1910,6 +1944,9 @@ int qemuMonitorJSONGetAllBlockStatsInfo(qemuMonitorPtr 
mon,
 goto cleanup;
 }
 
+/* it's ok to not have this information here. Just skip silently. */
+qemuMonitorJSONDevGetBlockExtent(dev, &bstats->wr_highest_offset);
+
 count++;
 bstats++;
 
@@ -2005,6 +2042,36 @@ int

[libvirt] [PATCHv4 2/8] qemu: bulk stats: implement CPU stats group

2014-09-12 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_CPU_TOTAL
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/libvirt.c|  7 +++
 src/qemu/qemu_driver.c   | 41 +
 3 files changed, 49 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 94b942c..8665c6c 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2511,6 +2511,7 @@ struct _virDomainStatsRecord {
 
 typedef enum {
 VIR_DOMAIN_STATS_STATE = (1 << 0), /* return domain state */
+VIR_DOMAIN_STATS_CPU_TOTAL = (1 << 1), /* return domain CPU info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index 941c518..b22f9aa 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21563,6 +21563,13 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * state.reason - reason for entering given state, returned as int from
  *  virDomain*Reason enum corresponding to given state.
  *
+ * VIR_DOMAIN_STATS_CPU_TOTAL: Return CPU statistics and usage information.
+ * The typed parameter keys are in this format:
+ * cpu.time - total cpu time spent for this domain as unsigned long long.
+ * cpu.user - user cpu time spent as unsigned long long.
+ * cpu.system - system cpu time spent as unsigned long long.
+ *
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 279c8b3..9014976 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -96,6 +96,7 @@
 #include storage/storage_driver.h
 #include virhostdev.h
 #include domain_capabilities.h
+#include vircgroup.h
 
 #define VIR_FROM_THIS VIR_FROM_QEMU
 
@@ -17276,6 +17277,45 @@ typedef enum {
 #define HAVE_MONITOR(flags) ((flags) & QEMU_DOMAIN_STATS_HAVE_MONITOR)
 
 
+static int
+qemuDomainGetStatsCpu(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
+  virDomainObjPtr dom,
+  virDomainStatsRecordPtr record,
+  int *maxparams,
+  unsigned int privflags ATTRIBUTE_UNUSED)
+{
+qemuDomainObjPrivatePtr priv = dom->privateData;
+unsigned long long cpu_time = 0;
+unsigned long long user_time = 0;
+unsigned long long sys_time = 0;
+int err = 0;
+
+err = virCgroupGetCpuacctUsage(priv->cgroup, &cpu_time);
+if (!err && virTypedParamsAddULLong(&record->params,
+&record->nparams,
+maxparams,
+"cpu.time",
+cpu_time) < 0)
+return -1;
+
+err = virCgroupGetCpuacctStat(priv->cgroup, &user_time, &sys_time);
+if (!err && virTypedParamsAddULLong(&record->params,
+&record->nparams,
+maxparams,
+"cpu.user",
+user_time) < 0)
+return -1;
+if (!err && virTypedParamsAddULLong(&record->params,
+&record->nparams,
+maxparams,
+"cpu.system",
+sys_time) < 0)
+return -1;
+
+return 0;
+}
+
+
 typedef int
 (*qemuDomainGetStatsFunc)(virQEMUDriverPtr driver,
   virDomainObjPtr dom,
@@ -17291,6 +17331,7 @@ struct qemuDomainGetStatsWorker {
 
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE, false },
+{ qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL, false },
 { NULL, 0, false }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCHv4 4/8] qemu: bulk stats: implement VCPU group

2014-09-12 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_VCPU
group of statistics.
To do so, this patch also extracts a helper to gather the
VCPU information.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |   1 +
 src/libvirt.c|  12 +++
 src/qemu/qemu_driver.c   | 200 +--
 3 files changed, 149 insertions(+), 64 deletions(-)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index c005442..00eafc6 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2513,6 +2513,7 @@ typedef enum {
 VIR_DOMAIN_STATS_STATE = (1 << 0), /* return domain state */
 VIR_DOMAIN_STATS_CPU_TOTAL = (1 << 1), /* return domain CPU info */
 VIR_DOMAIN_STATS_BALLOON = (1 << 2), /* return domain balloon info */
+VIR_DOMAIN_STATS_VCPU = (1 << 3), /* return domain virtual CPU info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index 3fa86ab..ba7a780 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21576,6 +21576,18 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * balloon.maximum - the maximum memory in kiB allowed
  * as unsigned long long.
  *
+ * VIR_DOMAIN_STATS_VCPU: Return virtual CPU statistics.
+ * Due to VCPU hotplug, the vcpu.<num>.* array could be sparse.
+ * The actual size of the array corresponds to vcpu.current.
+ * The array size will never exceed vcpu.maximum.
+ * The typed parameter keys are in this format:
+ * vcpu.current - current number of online virtual CPUs as unsigned int.
+ * vcpu.maximum - maximum number of online virtual CPUs as unsigned int.
+ * vcpu.<num>.state - state of the virtual CPU <num>, as int
+ *  from virVcpuState enum.
+ * vcpu.<num>.time - virtual cpu time spent by virtual CPU <num>
+ * as unsigned long long.
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index f677884..12bf5e4 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1380,6 +1380,76 @@ qemuGetProcessInfo(unsigned long long *cpuTime, int 
*lastCpu, long *vm_rss,
 }
 
 
+static int
+qemuDomainHelperGetVcpus(virDomainObjPtr vm, virVcpuInfoPtr info, int maxinfo,
+ unsigned char *cpumaps, int maplen)
+{
+int maxcpu, hostcpus;
+size_t i, v;
+qemuDomainObjPrivatePtr priv = vm-privateData;
+
+if ((hostcpus = nodeGetCPUCount())  0)
+return -1;
+
+maxcpu = maplen * 8;
+if (maxcpu  hostcpus)
+maxcpu = hostcpus;
+
+/* Clamp to actual number of vcpus */
+if (maxinfo  priv-nvcpupids)
+maxinfo = priv-nvcpupids;
+
+if (maxinfo = 1) {
+if (info != NULL) {
+memset(info, 0, sizeof(*info) * maxinfo);
+for (i = 0; i  maxinfo; i++) {
+info[i].number = i;
+info[i].state = VIR_VCPU_RUNNING;
+
+if (priv-vcpupids != NULL 
+qemuGetProcessInfo((info[i].cpuTime),
+   (info[i].cpu),
+   NULL,
+   vm-pid,
+   priv-vcpupids[i])  0) {
+virReportSystemError(errno, %s,
+ _(cannot get vCPU placement  pCPU 
time));
+return -1;
+}
+}
+}
+
+if (cpumaps != NULL) {
+memset(cpumaps, 0, maplen * maxinfo);
+if (priv-vcpupids != NULL) {
+for (v = 0; v  maxinfo; v++) {
+unsigned char *cpumap = VIR_GET_CPUMAP(cpumaps, maplen, v);
+virBitmapPtr map = NULL;
+unsigned char *tmpmap = NULL;
+int tmpmapLen = 0;
+
+if (virProcessGetAffinity(priv-vcpupids[v],
+  map, maxcpu)  0)
+return -1;
+virBitmapToData(map, tmpmap, tmpmapLen);
+if (tmpmapLen  maplen)
+tmpmapLen = maplen;
+memcpy(cpumap, tmpmap, tmpmapLen);
+
+VIR_FREE(tmpmap);
+virBitmapFree(map);
+}
+} else {
+virReportError(VIR_ERR_OPERATION_INVALID,
+   %s, _(cpu affinity is not available));
+return -1;
+}
+}
+}
+return maxinfo;
+}
+
+
 static virDomainPtr qemuDomainLookupByID(virConnectPtr conn,
  int id)
 {
@@ -5001,10 +5071,7 @@ qemuDomainGetVcpus(virDomainPtr dom,
int maplen)
 {
 virDomainObjPtr vm;
-size_t i;
-int v, maxcpu, hostcpus;
 int ret = -1;
-qemuDomainObjPrivatePtr priv;
 
 if (!(vm

Re: [libvirt] [PATCHv4 6/8] qemu: bulk stats: implement block group

2014-09-12 Thread Francesco Romani
- Original Message -
 From: Peter Krempa pkre...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Friday, September 12, 2014 3:56:06 PM
 Subject: Re: [libvirt] [PATCHv4 6/8] qemu: bulk stats: implement block group
 
 On 09/12/14 13:48, Francesco Romani wrote:
  This patch implements the VIR_DOMAIN_STATS_BLOCK
  group of statistics.
  
  To do so, an helper function to get the block stats
  of all the disks of a domain is added.
  
  Signed-off-by: Francesco Romani from...@redhat.com
  ---
   include/libvirt/libvirt.h.in |   1 +
   src/libvirt.c|  20 +++
   src/qemu/qemu_driver.c   |  96 ++
   src/qemu/qemu_monitor.c  |  26 +
   src/qemu/qemu_monitor.h  |  20 +++
   src/qemu/qemu_monitor_json.c | 136
   +--
   src/qemu/qemu_monitor_json.h |   4 ++
   7 files changed, 260 insertions(+), 43 deletions(-)
  
 
 
  diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
  index 7e5d707..4644f4a 100644
  --- a/src/qemu/qemu_driver.c
  +++ b/src/qemu/qemu_driver.c
  @@ -9687,6 +9687,31 @@ qemuDomainBlockStats(virDomainPtr dom,
   return ret;
   }
   
  +
  +/*
  + * Returns at most the first `nstats' stats, then stops.
  + * Returns the number of stats filled.
  + */
  +static int
  +qemuDomainHelperGetBlockStats(virQEMUDriverPtr driver,
  +  virDomainObjPtr vm,
  +  qemuBlockStatsPtr stats,
  +  int nstats)
  +{
  +int ret;
  +qemuDomainObjPrivatePtr priv = vm-privateData;
  +
  +qemuDomainObjEnterMonitor(driver, vm);
  +
  +ret = qemuMonitorGetAllBlockStatsInfo(priv-mon, NULL,
  +  stats, nstats);
 
 
 Humm, is it worth doing this helper? This pretty much can be inlined as
 it has only one caller.

Right, qemuDomainHelperGetBlockStats adds little to no value, so I'll drop it.
I believe qemuMonitorGetAllBlockStatsInfo should stay, however:
I don't see the JSON monitor being called directly anywhere, so I'll keep it.

[...]
  +static int
  +qemuDomainGetStatsBlock(virQEMUDriverPtr driver,
  +virDomainObjPtr dom,
  +virDomainStatsRecordPtr record,
  +int *maxparams,
  +unsigned int privflags)
  +{
  +size_t i;
  +int ret = -1;
  +int nstats = 0;
  +qemuBlockStatsPtr stats = NULL;
  +
  +if (!HAVE_MONITOR(privflags) || !virDomainObjIsActive(dom))
  +return 0; /* it's ok, just go ahead silently */
  +
  +if (VIR_ALLOC_N(stats, dom-def-ndisks)  0)
  +return -1;
  +
  +nstats = qemuDomainHelperGetBlockStats(driver, dom, stats,
  +   dom-def-ndisks);
  +if (nstats  0)
 
 Are we erroring out on block stats failure? Other statistics gatherers
 just skip it if it's not available.

Right, I will fix it to silently skip, like the other groups do.

Bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R  D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCHv3 4/8] qemu: bulk stats: implement VCPU group

2014-09-11 Thread Francesco Romani
- Original Message -
 From: Peter Krempa pkre...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Tuesday, September 9, 2014 1:56:09 PM
 Subject: Re: [libvirt] [PATCHv3 4/8] qemu: bulk stats: implement VCPU group

  + * VIR_DOMAIN_STATS_VCPU: Return virtual CPU statistics.
  + * Due to VCPU hotplug, the vcpu.num.* array could be sparse.
  + * The actual size of the array correspond to vcpu.current.
  + * The array size will never exceed vcpu.maximum.
  + * The typed parameter keys are in this format:
  + * vcpu.current - current number of online virtual CPUs as unsigned int.
  + * vcpu.maximum - maximum number of online virtual CPUs as unsigned int.
  + * vcpu.num.state - state of the virtual CPU num, as int
  + *  from virVcpuState enum.
  + * vcpu.num.time - virtual cpu time spent by virtual CPU num
  + * as unsigned long long.
  + * vcpu.num.cpu - physical CPU pinned to virtual CPU num as int.
 
 This is not the CPU number the vCPU is pinned to but rather the current
 CPU number where the vCPU is actually running. If you pin it to multiple
 CPUs this may change in the range of the host CPUs the vCPU is pinned
 to. Said this I don't think this is an useful stat.

Right, my bad, I overlooked the docs (I started to suspect when I saw it
changing too often in my tests...).

I agree this is not very useful, I'll drop it.

 Rather than this I'd like to see the mask of the host CPUs where this
 vCPU is pinned to. (returned as a human readable bitmask string).
 
 Any thoughts?

Is that the data provided by
http://libvirt.org/html/libvirt-libvirt.html#virDomainGetVcpuPinInfo
isn't it? (I'm asking because the docs aren't crystal clear to me.)

I like this, but I'd also need to cross-check against our code in oVirt.

Would it be acceptable to drop the misleading vcpu.<num>.cpu info and to add
the pin info to this stats group in a new follow-up patch?

Bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R  D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCHv3 4/8] qemu: bulk stats: implement VCPU group

2014-09-11 Thread Francesco Romani
- Original Message -
 From: Peter Krempa pkre...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Thursday, September 11, 2014 6:07:48 PM
 Subject: Re: [libvirt] [PATCHv3 4/8] qemu: bulk stats: implement VCPU group
  I like this, but I'd also need to do a cross-check on my our code in oVirt.
[...]
  Will be acceptable to drop the misleading vcpu.num.cpu info and to add
  the pin info in a new followup patch, in this stats group?
 
 You definitely can add that later on. But you should drop .cpu from this
 patch (not revert it later).

Very good, the next submission (v4) will drop the misleading 'cpu' item.
Hopefully I'll also be able to add the pin information;
otherwise I'll save it for a follow-up patch.

Bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R  D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCHv3 5/8] qemu: bulk stats: implement interface group

2014-09-11 Thread Francesco Romani
- Original Message -
 From: Peter Krempa pkre...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Tuesday, September 9, 2014 1:42:15 PM
 Subject: Re: [libvirt] [PATCHv3 5/8] qemu: bulk stats: implement interface 
 group

  + * VIR_DOMAIN_STATS_INTERFACE: Return network interface statistics.
  + * The typed parameter keys are in this format:
  + * net.count - number of network interfaces on this domain
  + *   as unsigned int.
  + * net.num.name - name of the interface num as string.
  + * net.num.rx.bytes - bytes received as long long.
  + * net.num.rx.pkts - packets received as long long.
  + * net.num.rx.errs - receive errors as long long.
  + * net.num.rx.drop - receive packets dropped as long long.
  + * net.num.tx.bytes - bytes transmitted as long long.
  + * net.num.tx.pkts - packets transmitted as long long.
  + * net.num.tx.errs - transmission errors as long long.
  + * net.num.tx.drop - transmit packets dropped as long long.
 
 Why are all of those represented as long long instead of unsigned long
 long? I don't see how these could be negative. If we need to express
 that the value is unsupported we can just drop it from here and not
 waste half of the range here.
 
 Any other opinions on this?

I used long long because of this:

struct _virDomainInterfaceStats {
long long rx_bytes;
long long rx_packets;
long long rx_errs;
long long rx_drop;
long long tx_bytes;
long long tx_packets;
long long tx_errs;
long long tx_drop;
};

But I don't have any problem to cast them as unsigned, with something like:

#define QEMU_ADD_NET_PARAM(record, maxparams, num, name, value) \
do { \
char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
 "net.%u.%s", num, name); \
if (virTypedParamsAddULLong(&(record)->params, \
&(record)->nparams, \
maxparams, \
param_name, \
(unsigned long long)value) < 0) \
return -1; \
} while (0)
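
A hypothetical call site, just to show the intended usage (the loop index i
and the stats variable tmp are assumed, field names come from
virDomainInterfaceStats above):

QEMU_ADD_NET_PARAM(record, maxparams, i,
                   "rx.bytes", tmp.rx_bytes);
QEMU_ADD_NET_PARAM(record, maxparams, i,
                   "tx.bytes", tmp.tx_bytes);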


 
  + *
* Using 0 for @stats returns all stats groups supported by the given
* hypervisor.
*
  diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
  index 6bcbfb5..989eb3e 100644
  --- a/src/qemu/qemu_driver.c
  +++ b/src/qemu/qemu_driver.c
  @@ -17537,6 +17537,92 @@ qemuDomainGetStatsVcpu(virConnectPtr conn
  ATTRIBUTE_UNUSED,
   return ret;
   }
   
  +#define QEMU_ADD_COUNT_PARAM(record, maxparams, type, count) \
  +do { \
  +char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
  +snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, %s.count, type);
  \
  +if (virTypedParamsAddUInt((record)-params, \
  +  (record)-nparams, \
  +  maxparams, \
  +  param_name, \
  +  count)  0) \
  +return -1; \
  +} while (0)
  +
  +#define QEMU_ADD_NAME_PARAM(record, maxparams, type, num, name) \
  +do { \
  +char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
  +snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
  + %s.%lu.name, type, num); \
  +if (virTypedParamsAddString((record)-params, \
  +(record)-nparams, \
  +maxparams, \
  +param_name, \
  +name)  0) \
  +return -1; \
  +} while (0)
  +
  +#define QEMU_ADD_NET_PARAM(record, maxparams, num, name, value) \
  +do { \
  +char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
  +snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
  + net.%lu.%s, num, name); \
 
 %lu? the count is unsigned int so you should be fine with %d

Yep, but the loop counter is size_t, and then...

$ git diff
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 9d53883..e90a8c6 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17487,7 +17487,7 @@ do { \
 do { \
 char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
 snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
- "net.%lu.%s", num, name); \
+ "net.%u.%s", num, name); \
 if (virTypedParamsAddLLong(&(record)->params, \
&(record)->nparams, \
maxparams, \
$ make
[...]
make[1]: Entering directory `/home/fromani/Projects/libvirt/src'
  CC   qemu/libvirt_driver_qemu_impl_la-qemu_driver.lo
qemu/qemu_driver.c: In function 'qemuDomainGetStatsInterface':
qemu/qemu_driver.c:17521:9: error: format '%u' expects argument of type 
'unsigned int', but argument 4 has type 'size_t' [-Werror=format=]
 QEMU_ADD_NET_PARAM(record, maxparams, i,
 ^
qemu/qemu_driver.c:17521:9: error: format '%u' expects argument of type 
'unsigned int', but argument 4 has type 'size_t' [-Werror=format=]
$ gcc --version
gcc
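
(For reference, the usual portable fix is the %zu length modifier, which
matches size_t; a minimal sketch:)

    size_t i = 0;
    char param_name[VIR_TYPED_PARAM_FIELD_LENGTH];
    snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH,
             "net.%zu.%s", i, "rx.bytes");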

Re: [libvirt] [PATCHv3 4/8] qemu: bulk stats: implement VCPU group

2014-09-10 Thread Francesco Romani
  +static int
  +qemuDomainGetStatsVcpu(virConnectPtr conn ATTRIBUTE_UNUSED,
  +   virDomainObjPtr dom,
  +   virDomainStatsRecordPtr record,
  +   int *maxparams,
  +   unsigned int privflags ATTRIBUTE_UNUSED)
  +{
  +size_t i;
  +int ret = -1;
  +char param_name[VIR_TYPED_PARAM_FIELD_LENGTH];
  +virVcpuInfoPtr cpuinfo = NULL;
  +
  +if (virTypedParamsAddUInt(record-params,
  +  record-nparams,
  +  maxparams,
  +  vcpu.current,
  +  (unsigned) dom-def-vcpus)  0)
  +return -1;
  +
  +if (virTypedParamsAddUInt(record-params,
  +  record-nparams,
  +  maxparams,
  +  vcpu.maximum,
  +  (unsigned) dom-def-maxvcpus)  0)
  +return -1;
  +
  +if (VIR_ALLOC_N(cpuinfo, dom-def-vcpus)  0)
  +return -1;
  +
  +if ((ret = qemuDomainHelperGetVcpus(dom,
  +cpuinfo,
  +dom-def-vcpus,
  +NULL,
  +0))  0)
  +return 0;
 
 Memory of 'cpuinfo' will be leaked. Should we go to cleanup?

Ouch. Of course I do need to avoid this. Will fix.

Thanks, 

-- 
Francesco Romani
RedHat Engineering Virtualization R  D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 1/8] qemu: bulk stats: extend internal collection API

2014-09-10 Thread Francesco Romani
- Original Message -
 From: Peter Krempa pkre...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Wednesday, September 10, 2014 10:07:33 AM
 Subject: Re: [libvirt] [PATCH 1/8] qemu: bulk stats: extend internal 
 collection API

[...]
  Hmm this skips offline domains entirely if one of the stats groups needs
  the monitor.
 
  I think we should rather skip individual stats groups, or better stats
  fields that we can't provide.
 
  Any ideas?
  
  What about this (pseudo-C):
  
  unsigned int privflags = 0;
  
  if (needmon  qemuDomainObjBeginJob(driver, dom, QEMU_JOB_QUERY)
  needmon = false;
  /* auto disable monitoring, the remainder of the function should be
  unchanged */
  else
  privflags |= MONITOR_AVAILABLE;
   
  if ((needmon  virDomainObjIsActive(dom)) || !needmon) {
  if (qemuDomainGetStats(conn, dom, stats, tmp, privflags)  0)
  /* pass monitor availability down the chain. Individual workers
  will
 bail out immediately and silently if they need monitor but
 it is
 not available
   */
  goto endjob;
  
  if (tmp)
  tmpstats[nstats++] = tmp;
  }
  
  
  No other change should be needed to this patch, and with trivial changes
  all the others can be fixed.
 
 Also you can just grab the lock always and the workers will exit if the
 VM is not alive. Having a domain job should be fine.

As you prefer. For oVirt, we care about all the implemented stats groups, and
always all of them (actually I may have missed some bits, e.g. the pin info,
as you pointed out elsewhere - going to fix), so for our needs we'll always
need to enter the monitor.

But other users of this API may beg to differ, and I believe it is fair to
assume that other consumers of this API could use only the stats which don't
require entering the monitor: e.g. CPU/VCPU/interface.

Hence, I was trying to be a good citizen and not require monitor access unless
it is actually needed; but if it turns out it is OK to take a domain job
anyway, I'll happily simplify my code :)
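
If we go that way, the per-domain part would collapse to something like this
rough sketch (simplified, cleanup paths omitted):

    if (qemuDomainObjBeginJob(driver, dom, QEMU_JOB_QUERY) < 0)
        goto cleanup;

    /* workers bail out on their own if the domain is not running */
    if (qemuDomainGetStats(conn, dom, stats, &tmp, privflags) < 0)
        goto endjob;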

Thanks and bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R  D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCHv3 2/8] qemu: bulk stats: implement CPU stats group

2014-09-10 Thread Francesco Romani
- Original Message -
 From: Wang Rui moon.wang...@huawei.com
 To: Francesco Romani from...@redhat.com
 Cc: libvir-list@redhat.com, pkre...@redhat.com
 Sent: Wednesday, September 10, 2014 10:56:47 AM
 Subject: Re: [libvirt] [PATCHv3 2/8] qemu: bulk stats: implement CPU stats 
 group
 
 On 2014/9/8 21:05, Francesco Romani wrote:
  This patch implements the VIR_DOMAIN_STATS_CPU_TOTAL
  group of statistics.
  
  Signed-off-by: Francesco Romani from...@redhat.com
  ---
   include/libvirt/libvirt.h.in |  1 +
   src/libvirt.c|  9 
   src/qemu/qemu_driver.c   | 51
   
   3 files changed, 61 insertions(+)
 [...]
  diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
  index 2950a4b..cfc5941 100644
  --- a/src/qemu/qemu_driver.c
  +++ b/src/qemu/qemu_driver.c
  @@ -96,6 +96,7 @@
   #include storage/storage_driver.h
   #include virhostdev.h
   #include domain_capabilities.h
  +#include vircgroup.h
 
 Hi, Francesco.
 I see the file including relationship. 'qemu_driver.c' includes
 'qemu_cgroup.h' which
 includes 'vircgroup.h'. There are other virCgroupGet* functions called in
 qemu_driver.c
 now. So I think here include vircgroup.h is not necessary.

Thanks for the research, I'll remove it.

   #define VIR_FROM_THIS VIR_FROM_QEMU
   
  @@ -17338,6 +17339,55 @@ qemuDomainGetStatsState(virConnectPtr conn
  ATTRIBUTE_UNUSED,
   }
   
   
  +static int
  +qemuDomainGetStatsCpu(virConnectPtr conn ATTRIBUTE_UNUSED,
  +  virDomainObjPtr dom,
  +  virDomainStatsRecordPtr record,
  +  int *maxparams,
  +  unsigned int privflags ATTRIBUTE_UNUSED)
  +{
  +qemuDomainObjPrivatePtr priv = dom-privateData;
  +unsigned long long cpu_time = 0;
  +unsigned long long user_time = 0;
  +unsigned long long sys_time = 0;
  +int ncpus = 0;
  +int err;
  +
  +ncpus = nodeGetCPUCount();
  +if (ncpus  0 
  +virTypedParamsAddUInt(record-params,
  +  record-nparams,
  +  maxparams,
  +  cpu.count,
  +  (unsigned int)ncpus)  0)
  +return -1;
  +
  +err = virCgroupGetCpuacctUsage(priv-cgroup, cpu_time);
  +if (!err  virTypedParamsAddULLong(record-params,
  +record-nparams,
  +maxparams,
  +cpu.time,
  +cpu_time)  0)
  +return -1;
  +err = virCgroupGetCpuacctStat(priv-cgroup, user_time, sys_time);
  +if (!err  virTypedParamsAddULLong(record-params,
  +record-nparams,
  +maxparams,
  +cpu.user,
  +user_time)  0)
  +return -1;
  +if (!err  virTypedParamsAddULLong(record-params,
  +record-nparams,
  +maxparams,
  +cpu.system,
  +sys_time)  0)
  +return -1;
 
 1. If any of the 'err's is not zero, the function may returns 0 as success.
Is this the expected return? Or at least we can give a warning that we
miss some parameters.
 2. I think it's better to report an error or warning log before return -1.

The idea here (well, at least my idea :) ) is to gather as much data as
possible, and to silently skip failures. The lack of expected output is a
good enough indicator that a domain is unresponsive.
-1 is reported only for truly unrecoverable errors, like memory
(re)allocation failure.

Otherwise, how could the client code be meaningfully informed that some group
failed, maybe partially? It is possible that different groups fail for different
domains: how could we convey this information?

I have no problem going this route if there is consensus that this is the
preferred way, but then we'll need to agree on a meaningful error convention.
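
To make the convention concrete, a collector would look roughly like this
(simplified sketch, fetch_value() is a made-up placeholder):

    static int
    get_stats_example(virDomainObjPtr dom, virDomainStatsRecordPtr record,
                      int *maxparams)
    {
        unsigned long long value;

        if (fetch_value(dom, &value) < 0)
            return 0;       /* data unavailable: skip silently */

        if (virTypedParamsAddULLong(&record->params, &record->nparams,
                                    maxparams, "example.value", value) < 0)
            return -1;      /* allocation failure: truly unrecoverable */

        return 0;
    }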

Bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R  D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCHv3 2/8] qemu: bulk stats: implement CPU stats group

2014-09-09 Thread Francesco Romani
- Original Message -
 From: Peter Krempa pkre...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Tuesday, September 9, 2014 1:50:25 PM
 Subject: Re: [libvirt] [PATCHv3 2/8] qemu: bulk stats: implement CPU stats 
 group
 
 On 09/08/14 15:05, Francesco Romani wrote:
  This patch implements the VIR_DOMAIN_STATS_CPU_TOTAL
  group of statistics.
  
  Signed-off-by: Francesco Romani from...@redhat.com
  ---
   include/libvirt/libvirt.h.in |  1 +
   src/libvirt.c|  9 
   src/qemu/qemu_driver.c   | 51
   
   3 files changed, 61 insertions(+)
  
  diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
  index aced31c..e6ed803 100644
  --- a/include/libvirt/libvirt.h.in
  +++ b/include/libvirt/libvirt.h.in
  @@ -2511,6 +2511,7 @@ struct _virDomainStatsRecord {
   
   typedef enum {
   VIR_DOMAIN_STATS_STATE = (1  0), /* return domain state */
  +VIR_DOMAIN_STATS_CPU_TOTAL = (1  1), /* return domain CPU info */
   } virDomainStatsTypes;
   
   typedef enum {
  diff --git a/src/libvirt.c b/src/libvirt.c
  index 4806535..4d504ff 100644
  --- a/src/libvirt.c
  +++ b/src/libvirt.c
  @@ -21554,6 +21554,15 @@ virConnectGetDomainCapabilities(virConnectPtr
  conn,
* state.reason - reason for entering given state, returned as int from
*  virDomain*Reason enum corresponding to given state.
*
  + * VIR_DOMAIN_STATS_CPU_TOTAL: Return CPU statistics and usage
  information.
  + * The typed parameter keys are in this format:
  + * cpu.count - number as unsigned int of physical cpus available to
  + *   this domain.
 
 This is not really a VM property rather than a host property. I don't
 think we should report this as it will be the same for all VMs on the host.

I'm OK with this. I just tried to mimic the existing behaviour as closely as
possible, but you have a point here.

-- 
Francesco Romani
RedHat Engineering Virtualization R  D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCHv3 3/8] qemu: bulk stats: implement balloon group

2014-09-09 Thread Francesco Romani
- Original Message -
 From: Peter Krempa pkre...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Tuesday, September 9, 2014 1:45:57 PM
 Subject: Re: [libvirt] [PATCHv3 3/8] qemu: bulk stats: implement balloon  
 group
 
 On 09/08/14 15:05, Francesco Romani wrote:
  This patch implements the VIR_DOMAIN_STATS_BALLOON
  group of statistics.
  
  Signed-off-by: Francesco Romani from...@redhat.com
  ---
   include/libvirt/libvirt.h.in |  1 +
   src/libvirt.c|  6 
   src/qemu/qemu_driver.c   | 70
   
   3 files changed, 77 insertions(+)
  
  diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
  index e6ed803..1e4e428 100644
  --- a/include/libvirt/libvirt.h.in
  +++ b/include/libvirt/libvirt.h.in
  @@ -2512,6 +2512,7 @@ struct _virDomainStatsRecord {
   typedef enum {
   VIR_DOMAIN_STATS_STATE = (1  0), /* return domain state */
   VIR_DOMAIN_STATS_CPU_TOTAL = (1  1), /* return domain CPU info */
  +VIR_DOMAIN_STATS_BALLOON = (1  2), /* return domain balloon info */
   } virDomainStatsTypes;
   
   typedef enum {
  diff --git a/src/libvirt.c b/src/libvirt.c
  index 4d504ff..f21eb39 100644
  --- a/src/libvirt.c
  +++ b/src/libvirt.c
  @@ -21562,6 +21562,12 @@ virConnectGetDomainCapabilities(virConnectPtr
  conn,
* cpu.user - user cpu time spent as unsigned long long.
* cpu.system - system cpu time spent as unsigned long long.
*
  + * VIR_DOMAIN_STATS_BALLOON: Return memory balloon device information.
  + * The typed parameter keys are in this format:
  + * balloon.current - the memory in kiB currently used
  + * as unsigned long long.
  + * balloon.maximum - the maximum memory in kiB allowed
  + * as unsigned long long.
*
* Using 0 for @stats returns all stats groups supported by the given
* hypervisor.
  diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
  index cfc5941..4f8ccac 100644
  --- a/src/qemu/qemu_driver.c
  +++ b/src/qemu/qemu_driver.c
  @@ -2520,6 +2520,47 @@ static int qemuDomainSendKey(virDomainPtr domain,
   return ret;
   }
   
  +
  +/*
  + * FIXME: this code is a stripped down version of what is done into
  + * qemuDomainGetInfo. Due to the different handling of jobs, it is not
  + * trivial to extract a common helper function.
  + */
  +static void
  +qemuDomainGetBalloonMemory(virQEMUDriverPtr driver, virDomainObjPtr vm,
  +   unsigned long *memory)
 
 Use unsigned long long here. Unsigned long is 32 bit on 32bit systems.

Will fix
 
  +{
  +qemuDomainObjPrivatePtr priv = vm-privateData;
  +
  +if (vm-def-memballoon 
  +vm-def-memballoon-model == VIR_DOMAIN_MEMBALLOON_MODEL_NONE) {
  +*memory = vm-def-mem.max_balloon;
  +} else if (virQEMUCapsGet(priv-qemuCaps, QEMU_CAPS_BALLOON_EVENT)) {
  +*memory = vm-def-mem.cur_balloon;
  +} else {
  +int err;
  +unsigned long long balloon;
 
 Note this ...
 
  +
  +qemuDomainObjEnterMonitor(driver, vm);
  +err = qemuMonitorGetBalloonInfo(priv-mon, balloon);
  +qemuDomainObjExitMonitor(driver, vm);
  +
  +if (err  0) {
  +/* We couldn't get current memory allocation but that's not
  + * a show stopper; we wouldn't get it if there was a job
  + * active either
  + */
  +*memory = vm-def-mem.cur_balloon;
  +} else if (err == 0) {
  +/* Balloon not supported, so maxmem is always the allocation
  */
 
 Should we in such case drop the balloon stat from the output?

That would be ok for me. Just let me know :)

 
  +*memory = vm-def-mem.max_balloon;
  +} else {
  +*memory = balloon;
 
 Here it'd break.

Good catch, will fix.

Bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R  D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 1/8] qemu: bulk stats: extend internal collection API

2014-09-09 Thread Francesco Romani


- Original Message -
 From: Peter Krempa pkre...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Tuesday, September 9, 2014 2:14:23 PM
 Subject: Re: [libvirt] [PATCH 1/8] qemu: bulk stats: extend internal 
 collection API
 
 On 09/08/14 15:05, Francesco Romani wrote:
  Future patches which will implement more
  bulk stats groups for QEMU will need to access
  the connection object.
  
  To accomodate that, a few changes are needed:
  
  * enrich internal prototype to pass connection object.
  * add per-group flag to mark if one collector needs
monitor access or not.
  * if at least one collector of the requested stats
needs monitor access, thus we must start a query job
for each domain. The specific collectors will
run nested monitor jobs inside that.
  
  Signed-off-by: Francesco Romani from...@redhat.com
  ---
   src/qemu/qemu_driver.c | 51
   ++
   1 file changed, 43 insertions(+), 8 deletions(-)
  
  diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
  index d724eeb..2950a4b 100644
  --- a/src/qemu/qemu_driver.c
  +++ b/src/qemu/qemu_driver.c
 
  @@ -17338,7 +17339,8 @@ qemuDomainGetStatsState(virDomainObjPtr dom,
   
   
   typedef int
  -(*qemuDomainGetStatsFunc)(virDomainObjPtr dom,
  +(*qemuDomainGetStatsFunc)(virConnectPtr conn,
 
 Looking through the rest of the series. Rather than the complete
 connection object you need just the virQEMUDriverPtr for entering the
 monitor, but I can live with this.

Since I need to resubmit to address your comments, I'll fix this
to pass just virQEMUDriverPtr.
 
   
  -if (qemuDomainGetStats(conn, dom, stats, tmp, flags)  0)
  +if (needmon  qemuDomainObjBeginJob(driver, dom, QEMU_JOB_QUERY)
   0)
   goto cleanup;
   
  -if (tmp)
  -tmpstats[nstats++] = tmp;
  +if ((needmon  virDomainObjIsActive(dom)) || !needmon) {
 
 Hmm this skips offline domains entirely if one of the stats groups needs
 the monitor.
 
 I think we should rather skip individual stats groups, or better stats
 fields that we can't provide.
 
 Any ideas?

What about this (pseudo-C):

unsigned int privflags = 0;

if (needmon && qemuDomainObjBeginJob(driver, dom, QEMU_JOB_QUERY) < 0)
needmon = false;
/* auto disable monitoring, the remainder of the function should be 
unchanged */
else
privflags |= MONITOR_AVAILABLE;
 
if ((needmon && virDomainObjIsActive(dom)) || !needmon) {
if (qemuDomainGetStats(conn, dom, stats, &tmp, privflags) < 0)
/* pass monitor availability down the chain. Individual workers will
   bail out immediately and silently if they need monitor but it is
   not available
 */
goto endjob;

if (tmp)
tmpstats[nstats++] = tmp;
}


No other change should be needed to this patch, and with trivial changes
all the others can be fixed.


-- 
Francesco Romani
RedHat Engineering Virtualization R  D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCHv3 4/8] qemu: bulk stats: implement VCPU group

2014-09-08 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_VCPU
group of statistics.
To do so, this patch also extracts a helper to gather the
VCPU information.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |   1 +
 src/libvirt.c|  13 +++
 src/qemu/qemu_driver.c   | 210 ++-
 3 files changed, 160 insertions(+), 64 deletions(-)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 1e4e428..68573a0 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2513,6 +2513,7 @@ typedef enum {
 VIR_DOMAIN_STATS_STATE = (1 << 0), /* return domain state */
 VIR_DOMAIN_STATS_CPU_TOTAL = (1 << 1), /* return domain CPU info */
 VIR_DOMAIN_STATS_BALLOON = (1 << 2), /* return domain balloon info */
+VIR_DOMAIN_STATS_VCPU = (1 << 3), /* return domain virtual CPU info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index f21eb39..0326847 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21569,6 +21569,19 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * balloon.maximum - the maximum memory in kiB allowed
  * as unsigned long long.
  *
+ * VIR_DOMAIN_STATS_VCPU: Return virtual CPU statistics.
+ * Due to VCPU hotplug, the vcpu.num.* array could be sparse.
+ * The actual size of the array correspond to vcpu.current.
+ * The array size will never exceed vcpu.maximum.
+ * The typed parameter keys are in this format:
+ * vcpu.current - current number of online virtual CPUs as unsigned int.
+ * vcpu.maximum - maximum number of online virtual CPUs as unsigned int.
+ * vcpu.num.state - state of the virtual CPU num, as int
+ *  from virVcpuState enum.
+ * vcpu.num.time - virtual cpu time spent by virtual CPU num
+ * as unsigned long long.
+ * vcpu.num.cpu - physical CPU pinned to virtual CPU num as int.
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 4f8ccac..6bcbfb5 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1375,6 +1375,76 @@ qemuGetProcessInfo(unsigned long long *cpuTime, int 
*lastCpu, long *vm_rss,
 }
 
 
+static int
+qemuDomainHelperGetVcpus(virDomainObjPtr vm, virVcpuInfoPtr info, int maxinfo,
+ unsigned char *cpumaps, int maplen)
+{
+int v, maxcpu, hostcpus;
+size_t i;
+qemuDomainObjPrivatePtr priv = vm-privateData;
+
+if ((hostcpus = nodeGetCPUCount())  0)
+return -1;
+
+maxcpu = maplen * 8;
+if (maxcpu  hostcpus)
+maxcpu = hostcpus;
+
+/* Clamp to actual number of vcpus */
+if (maxinfo  priv-nvcpupids)
+maxinfo = priv-nvcpupids;
+
+if (maxinfo = 1) {
+if (info != NULL) {
+memset(info, 0, sizeof(*info) * maxinfo);
+for (i = 0; i  maxinfo; i++) {
+info[i].number = i;
+info[i].state = VIR_VCPU_RUNNING;
+
+if (priv-vcpupids != NULL 
+qemuGetProcessInfo((info[i].cpuTime),
+   (info[i].cpu),
+   NULL,
+   vm-pid,
+   priv-vcpupids[i])  0) {
+virReportSystemError(errno, %s,
+ _(cannot get vCPU placement  pCPU 
time));
+return -1;
+}
+}
+}
+
+if (cpumaps != NULL) {
+memset(cpumaps, 0, maplen * maxinfo);
+if (priv-vcpupids != NULL) {
+for (v = 0; v  maxinfo; v++) {
+unsigned char *cpumap = VIR_GET_CPUMAP(cpumaps, maplen, v);
+virBitmapPtr map = NULL;
+unsigned char *tmpmap = NULL;
+int tmpmapLen = 0;
+
+if (virProcessGetAffinity(priv-vcpupids[v],
+  map, maxcpu)  0)
+return -1;
+virBitmapToData(map, tmpmap, tmpmapLen);
+if (tmpmapLen  maplen)
+tmpmapLen = maplen;
+memcpy(cpumap, tmpmap, tmpmapLen);
+
+VIR_FREE(tmpmap);
+virBitmapFree(map);
+}
+} else {
+virReportError(VIR_ERR_OPERATION_INVALID,
+   %s, _(cpu affinity is not available));
+return -1;
+}
+}
+}
+return maxinfo;
+}
+
+
 static virDomainPtr qemuDomainLookupByID(virConnectPtr conn,
  int id)
 {
@@ -4994,10 +5064,7 @@ qemuDomainGetVcpus(virDomainPtr dom,
int maplen)
 {
 virDomainObjPtr vm;
-size_t i;
-int v, maxcpu, hostcpus;
 int

[libvirt] [PATCH 1/8] qemu: bulk stats: extend internal collection API

2014-09-08 Thread Francesco Romani
Future patches which will implement more
bulk stats groups for QEMU will need to access
the connection object.

To accommodate that, a few changes are needed:

* enrich internal prototype to pass connection object.
* add per-group flag to mark if one collector needs
  monitor access or not.
* if at least one collector of the requested stats
  needs monitor access, we must start a query job
  for each domain. The specific collectors will
  run nested monitor jobs inside that.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/qemu/qemu_driver.c | 51 ++
 1 file changed, 43 insertions(+), 8 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index d724eeb..2950a4b 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17314,7 +17314,8 @@ qemuConnectGetDomainCapabilities(virConnectPtr conn,
 
 
 static int
-qemuDomainGetStatsState(virDomainObjPtr dom,
+qemuDomainGetStatsState(virConnectPtr conn ATTRIBUTE_UNUSED,
+virDomainObjPtr dom,
 virDomainStatsRecordPtr record,
 int *maxparams,
 unsigned int privflags ATTRIBUTE_UNUSED)
@@ -17338,7 +17339,8 @@ qemuDomainGetStatsState(virDomainObjPtr dom,
 
 
 typedef int
-(*qemuDomainGetStatsFunc)(virDomainObjPtr dom,
+(*qemuDomainGetStatsFunc)(virConnectPtr conn,
+  virDomainObjPtr dom,
   virDomainStatsRecordPtr record,
   int *maxparams,
   unsigned int flags);
@@ -17346,11 +17348,12 @@ typedef int
 struct qemuDomainGetStatsWorker {
 qemuDomainGetStatsFunc func;
 unsigned int stats;
+bool monitor;
 };
 
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
-{ qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE},
-{ NULL, 0 }
+{ qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE, false },
+{ NULL, 0, false }
 };
 
 
@@ -17382,6 +17385,20 @@ qemuDomainGetStatsCheckSupport(unsigned int *stats,
 }
 
 
+static bool
+qemuDomainGetStatsNeedMonitor(unsigned int stats)
+{
+size_t i;
+
+for (i = 0; qemuDomainGetStatsWorkers[i].func; i++)
+if (stats & qemuDomainGetStatsWorkers[i].stats)
+if (qemuDomainGetStatsWorkers[i].monitor)
+return true;
+
+return false;
+}
+
+
 static int
 qemuDomainGetStats(virConnectPtr conn,
virDomainObjPtr dom,
@@ -17399,7 +17416,7 @@ qemuDomainGetStats(virConnectPtr conn,
 
 for (i = 0; qemuDomainGetStatsWorkers[i].func; i++) {
 if (stats & qemuDomainGetStatsWorkers[i].stats) {
-if (qemuDomainGetStatsWorkers[i].func(dom, tmp, &maxparams,
+if (qemuDomainGetStatsWorkers[i].func(conn, dom, tmp, &maxparams,
   flags) < 0)
 goto cleanup;
 }
@@ -17435,6 +17452,7 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
 virDomainObjPtr dom = NULL;
 virDomainStatsRecordPtr *tmpstats = NULL;
 bool enforce = !!(flags  VIR_CONNECT_GET_ALL_DOMAINS_STATS_ENFORCE_STATS);
+bool needmon = false;
 int ntempdoms;
 int nstats = 0;
 size_t i;
@@ -17473,6 +17491,8 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
 if (VIR_ALLOC_N(tmpstats, ndoms + 1)  0)
 goto cleanup;
 
+needmon = qemuDomainGetStatsNeedMonitor(stats);
+
 for (i = 0; i  ndoms; i++) {
 virDomainStatsRecordPtr tmp = NULL;
 
@@ -17483,11 +17503,21 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
 !virConnectGetAllDomainStatsCheckACL(conn, dom-def))
 continue;
 
-if (qemuDomainGetStats(conn, dom, stats, &tmp, flags) < 0)
+if (needmon && qemuDomainObjBeginJob(driver, dom, QEMU_JOB_QUERY) < 0)
 goto cleanup;
 
-if (tmp)
-tmpstats[nstats++] = tmp;
+if ((needmon && virDomainObjIsActive(dom)) || !needmon) {
+if (qemuDomainGetStats(conn, dom, stats, &tmp, flags) < 0)
+goto endjob;
+
+if (tmp)
+tmpstats[nstats++] = tmp;
+}
+
+if (needmon && !qemuDomainObjEndJob(driver, dom)) {
+dom = NULL;
+goto cleanup;
+}
 
 virObjectUnlock(dom);
 dom = NULL;
@@ -17498,6 +17528,11 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
 
 ret = nstats;
 
+ endjob:
+if (needmon && dom)
+if (!qemuDomainObjEndJob(driver, dom))
+dom = NULL;
+
  cleanup:
 if (dom)
 virObjectUnlock(dom);
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCHv3 0/8] bulk stats: QEMU implementation

2014-09-08 Thread Francesco Romani
This patchset enhances the QEMU support
for the new bulk stats API to include
equivalents of these APIs:

virDomainBlockInfo
virDomainGetInfo - for balloon stats
virDomainGetCPUStats
virDomainBlockStatsFlags
virDomainInterfaceStats
virDomainGetVcpusFlags
virDomainGetVcpus

This subset of API is the one oVirt relies on.
Scale/stress test on an oVirt test environment is in progress.

changes in v3: more polishing and fixes after first review
- addressed Eric's comments.
- squashed patches which extract helpers with the patches which
  use them.
- changed gathering strategy: now the code tries to reap as much
  information as possible instead of giving up and bailing out with
  an error. Only critical errors cause the bulk stats to fail.
- moved away from the transfer semantics. I find it error-prone
  and not flexible enough, and I'd like to avoid it as much as possible.
- rearranged helpers to have one single QEMU query job with
  many monitor jobs nested inside.
- fixed docs.
- implemented missing virsh domstats bits.

changes in v2: polishing and optimizations.
- incorporated feedback from Li Wei (thanks).
- added documentation.
- optimized the block group to gather all the information with just
  one call to the QEMU monitor.
- stripped to bare bones and merged the 'block info' group into the
  'block' group - oVirt actually needs just one stat from there.
- reorganized the keys to be more consistent and shorter.

The patchset is organized as follows:
- the first patch enhances the internal stats gathering API
  to accommodate the needs of the groups which extract information
  using QEMU monitor jobs.
- the next 6 patches implement the bulk stats groups, refactoring the
  existing code to extract internal helpers every time it is feasible
  and convenient.
- the last patch enhances the virsh domstats command with options to
  use the new bulk stats.
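
To make the job nesting described above concrete, here is a minimal
sketch (not code from the series - it only illustrates the intended
pattern using the existing libvirt job/monitor helpers, with error
handling and the worker dispatch trimmed):

    if (qemuDomainObjBeginJob(driver, dom, QEMU_JOB_QUERY) < 0)
        goto cleanup;

    if (virDomainObjIsActive(dom)) {
        /* each group needing QEMU data nests its own monitor job
         * inside the single query job owned by the caller */
        qemuDomainObjEnterMonitor(driver, dom);
        ignore_value(qemuMonitorGetBalloonInfo(priv->mon, &balloon));
        qemuDomainObjExitMonitor(driver, dom);

        qemuDomainObjEnterMonitor(driver, dom);
        ignore_value(qemuMonitorGetAllBlockStatsInfo(priv->mon, NULL,
                                                     blockstats, ndisks));
        qemuDomainObjExitMonitor(driver, dom);
    }

    if (!qemuDomainObjEndJob(driver, dom))
        dom = NULL;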

Francesco Romani (8):
  qemu: bulk stats: extend internal collection API
  qemu: bulk stats: implement CPU stats group
  qemu: bulk stats: implement balloon group
  qemu: bulk stats: implement VCPU group
  qemu: bulk stats: implement interface group
  qemu: bulk stats: implement block group
  qemu: bulk stats: add block allocation information
  virsh: add options to query bulk stats group

 include/libvirt/libvirt.h.in |   5 +
 src/libvirt.c|  61 +
 src/qemu/qemu_driver.c   | 577 +--
 src/qemu/qemu_monitor.c  |  22 ++
 src/qemu/qemu_monitor.h  |  21 ++
 src/qemu/qemu_monitor_json.c | 211 +++-
 src/qemu/qemu_monitor_json.h |   4 +
 src/qemu/qemu_monitor_text.c |  13 +
 src/qemu/qemu_monitor_text.h |   4 +
 tools/virsh-domain-monitor.c |  35 +++
 tools/virsh.pod  |   4 +-
 11 files changed, 823 insertions(+), 134 deletions(-)

-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCHv3 6/8] qemu: bulk stats: implement block group

2014-09-08 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_BLOCK
group of statistics.

To do so, a helper function to get the block stats
of all the disks of a domain is added.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |   1 +
 src/libvirt.c|  17 ++
 src/qemu/qemu_driver.c   |  93 +
 src/qemu/qemu_monitor.c  |  22 +++
 src/qemu/qemu_monitor.h  |  20 +++
 src/qemu/qemu_monitor_json.c | 135 ++-
 src/qemu/qemu_monitor_json.h |   4 ++
 src/qemu/qemu_monitor_text.c |  13 +
 src/qemu/qemu_monitor_text.h |   4 ++
 9 files changed, 269 insertions(+), 40 deletions(-)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 93aa1fb..c8e0089 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2515,6 +2515,7 @@ typedef enum {
 VIR_DOMAIN_STATS_BALLOON = (1  2), /* return domain balloon info */
 VIR_DOMAIN_STATS_VCPU = (1  3), /* return domain virtual CPU info */
 VIR_DOMAIN_STATS_INTERFACE = (1  4), /* return domain interfaces info */
+VIR_DOMAIN_STATS_BLOCK = (1  5), /* return domain block info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index 8aa6cb1..e041fa2 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21596,6 +21596,23 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * net.<num>.tx.errs - transmission errors as long long.
  * net.<num>.tx.drop - transmit packets dropped as long long.
  *
+ * VIR_DOMAIN_STATS_BLOCK: Return block devices statistics.
+ * The typed parameter keys are in this format:
+ * block.count - number of block devices on this domain
+ * as unsigned int.
+ * block.<num>.name - name of the block device <num> as string.
+ *  matches the name of the block device.
+ * block.<num>.rd.reqs - number of read requests as long long.
+ * block.<num>.rd.bytes - number of read bytes as long long.
+ * block.<num>.rd.times - total time (ns) spent on reads as long long.
+ * block.<num>.wr.reqs - number of write requests as long long.
+ * block.<num>.wr.bytes - number of written bytes as long long.
+ * block.<num>.wr.times - total time (ns) spent on writes as long long.
+ * block.<num>.fl.reqs - total flush requests as long long.
+ * block.<num>.fl.times - total time (ns) spent on cache flushing
+ *  as long long.
+ * block.<num>.errors - Xen only: the 'oo_req' value as long long.
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 989eb3e..93afb7e 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -9678,6 +9678,31 @@ qemuDomainBlockStats(virDomainPtr dom,
 return ret;
 }
 
+
+/*
+ * Returns at most the first `nstats' stats, then stops.
+ * Returns the number of stats filled.
+ */
+static int
+qemuDomainHelperGetBlockStats(virQEMUDriverPtr driver,
+  virDomainObjPtr vm,
+  qemuBlockStatsPtr stats,
+  int nstats)
+{
+int ret;
+qemuDomainObjPrivatePtr priv = vm->privateData;
+
+qemuDomainObjEnterMonitor(driver, vm);
+
+ret = qemuMonitorGetAllBlockStatsInfo(priv->mon, NULL,
+  stats, nstats);
+
+qemuDomainObjExitMonitor(driver, vm);
+
+return ret;
+}
+
+
 static int
 qemuDomainBlockStatsFlags(virDomainPtr dom,
   const char *path,
@@ -17620,6 +17645,73 @@ qemuDomainGetStatsInterface(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 
 #undef QEMU_ADD_NET_PARAM
 
+#define QEMU_ADD_BLOCK_PARAM_LL(record, maxparams, num, name, value) \
+do { \
+char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
+snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
+ "block.%lu.%s", num, name); \
+if (virTypedParamsAddLLong(&(record)->params, \
+   &(record)->nparams, \
+   maxparams, \
+   param_name, \
+   value) < 0) \
+goto cleanup; \
+} while (0)
+
+static int
+qemuDomainGetStatsBlock(virConnectPtr conn ATTRIBUTE_UNUSED,
+virDomainObjPtr dom,
+virDomainStatsRecordPtr record,
+int *maxparams,
+unsigned int privflags ATTRIBUTE_UNUSED)
+{
+size_t i;
+int ret = -1;
+int nstats = 0;
+qemuBlockStatsPtr stats = NULL;
+virQEMUDriverPtr driver = conn->privateData;
+
+if (VIR_ALLOC_N(stats, dom->def->ndisks) < 0)
+return -1;
+
+nstats = qemuDomainHelperGetBlockStats(driver, dom, stats,
+   dom->def->ndisks);
+if (nstats < 0)
+goto cleanup;
+
+QEMU_ADD_COUNT_PARAM(record, maxparams, "block", dom->def->ndisks);
+
+for (i = 0; i < nstats; i

[libvirt] [PATCHv3 7/8] qemu: bulk stats: add block allocation information

2014-09-08 Thread Francesco Romani
Management software wants to be able to allocate disk space on demand.
To support this, it needs to keep track of the space occupied
by the block device.
This information is reported by qemu as part of the block stats.

This patch extends the block information in the bulk stats with
the allocation information.

To keep the same behaviour, a helper is extracted from
qemuMonitorJSONGetBlockExtent in order to get per-device
allocation information.
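
For the record, a sketch of how a management application could consume
the new key through the public bulk stats API once this lands (the
watermark value and the extend_disk() callback are made up for the
example):

    virDomainStatsRecordPtr *records = NULL;
    int nrecords;
    size_t i;

    nrecords = virConnectGetAllDomainStats(conn, VIR_DOMAIN_STATS_BLOCK,
                                           &records, 0);
    if (nrecords < 0)
        return;

    for (i = 0; i < (size_t)nrecords; i++) {
        unsigned long long allocation = 0;

        /* block.0.allocation == highest written offset of the first disk */
        if (virTypedParamsGetULLong(records[i]->params, records[i]->nparams,
                                    "block.0.allocation", &allocation) == 1 &&
            allocation > watermark)
            extend_disk(records[i]->dom);  /* hypothetical callback */
    }

    virDomainStatsRecordListFree(records);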

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/libvirt.c|  2 ++
 src/qemu/qemu_driver.c   | 15 +
 src/qemu/qemu_monitor.h  |  1 +
 src/qemu/qemu_monitor_json.c | 76 
 4 files changed, 73 insertions(+), 21 deletions(-)

diff --git a/src/libvirt.c b/src/libvirt.c
index e041fa2..ff8f891 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21612,6 +21612,8 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * block.num.fl.times - total time (ns) spent on cache flushing
  *  as long long.
  * block.num.errors - Xen only: the 'oo_req' value as long long.
+ * block.<num>.allocation - offset of the highest written sector
+ *as unsigned long long.
  *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 93afb7e..86e7893 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17658,6 +17658,18 @@ do { \
 goto cleanup; \
 } while (0)
 
+#define QEMU_ADD_BLOCK_PARAM_ULL(RECORD, MAXPARAMS, NUM, NAME, VALUE) \
+do { \
+char param_name[NAME_MAX]; \
+snprintf(param_name, NAME_MAX, "block.%lu.%s", NUM, NAME); \
+if (virTypedParamsAddULLong(&RECORD->params, \
+&RECORD->nparams, \
+MAXPARAMS, \
+param_name, \
+VALUE) < 0) \
+goto cleanup; \
+} while (0)
+
 static int
 qemuDomainGetStatsBlock(virConnectPtr conn ATTRIBUTE_UNUSED,
 virDomainObjPtr dom,
@@ -17701,6 +17713,9 @@ qemuDomainGetStatsBlock(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 fl.reqs, stats[i].flush_req);
 QEMU_ADD_BLOCK_PARAM_LL(record, maxparams, i,
 fl.times, stats[i].flush_total_times);
+
+QEMU_ADD_BLOCK_PARAM_ULL(record, maxparams, i,
+ "allocation", stats[i].wr_highest_offset);
 }
 
 ret = 0;
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 8e3fb44..97d7336 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -358,6 +358,7 @@ struct _qemuBlockStats {
 long long wr_total_times;
 long long flush_req;
 long long flush_total_times;
+unsigned long long wr_highest_offset;
 };
 
 int qemuMonitorGetAllBlockStatsInfo(qemuMonitorPtr mon,
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index aa95e71..bc5616d 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -1776,6 +1776,55 @@ int qemuMonitorJSONGetBlockStatsInfo(qemuMonitorPtr mon,
 }
 
 
+/* This helper function could be called in the
+ * qemuMonitorJSONGetAllBlockStatsInfo
+ * path - which is used also by the
+ * qemuMonitorJSONGetBlockStatsInfo
+ * path. In this case, we don't know in advance if the wr_highest_offset
+ * field is there, so it is OK to fail silently.
+ * However, we can get here by the
+ * qemuMonitorJSONGetBlockExtent
+ * path, and in that case we _must_ fail loudly.
+ */
+static int
+qemuMonitorJSONDevGetBlockExtent(virJSONValuePtr dev,
+ bool report_error,
+ unsigned long long *extent)
+{
+virJSONValuePtr stats;
+virJSONValuePtr parent;
+
+if ((parent = virJSONValueObjectGet(dev, "parent")) == NULL ||
+parent->type != VIR_JSON_TYPE_OBJECT) {
+if (report_error)
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("blockstats parent entry was not in "
+ "expected format"));
+return -1;
+}
+
+if ((stats = virJSONValueObjectGet(parent, "stats")) == NULL ||
+stats->type != VIR_JSON_TYPE_OBJECT) {
+if (report_error)
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("blockstats stats entry was not in "
+ "expected format"));
+return -1;
+}
+
+if (virJSONValueObjectGetNumberUlong(stats, "wr_highest_offset",
+ extent) < 0) {
+if (report_error)
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   _("cannot read %s statistic"),
+   "wr_highest_offset");
+return -1;
+}
+
+return 0;
+}
+
+
 int qemuMonitorJSONGetAllBlockStatsInfo(qemuMonitorPtr mon

[libvirt] [PATCHv3 3/8] qemu: bulk stats: implement balloon group

2014-09-08 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_BALLOON
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/libvirt.c|  6 
 src/qemu/qemu_driver.c   | 70 
 3 files changed, 77 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index e6ed803..1e4e428 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2512,6 +2512,7 @@ struct _virDomainStatsRecord {
 typedef enum {
 VIR_DOMAIN_STATS_STATE = (1  0), /* return domain state */
 VIR_DOMAIN_STATS_CPU_TOTAL = (1  1), /* return domain CPU info */
+VIR_DOMAIN_STATS_BALLOON = (1  2), /* return domain balloon info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index 4d504ff..f21eb39 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21562,6 +21562,12 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * cpu.user - user cpu time spent as unsigned long long.
  * cpu.system - system cpu time spent as unsigned long long.
  *
+ * VIR_DOMAIN_STATS_BALLOON: Return memory balloon device information.
+ * The typed parameter keys are in this format:
+ * balloon.current - the memory in kiB currently used
+ * as unsigned long long.
+ * balloon.maximum - the maximum memory in kiB allowed
+ * as unsigned long long.
  *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index cfc5941..4f8ccac 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -2520,6 +2520,47 @@ static int qemuDomainSendKey(virDomainPtr domain,
 return ret;
 }
 
+
+/*
+ * FIXME: this code is a stripped down version of what is done in
+ * qemuDomainGetInfo. Due to the different handling of jobs, it is not
+ * trivial to extract a common helper function.
+ */
+static void
+qemuDomainGetBalloonMemory(virQEMUDriverPtr driver, virDomainObjPtr vm,
+   unsigned long *memory)
+{
+qemuDomainObjPrivatePtr priv = vm->privateData;
+
+if (vm->def->memballoon &&
+vm->def->memballoon->model == VIR_DOMAIN_MEMBALLOON_MODEL_NONE) {
+*memory = vm->def->mem.max_balloon;
+} else if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BALLOON_EVENT)) {
+*memory = vm->def->mem.cur_balloon;
+} else {
+int err;
+unsigned long long balloon;
+
+qemuDomainObjEnterMonitor(driver, vm);
+err = qemuMonitorGetBalloonInfo(priv->mon, &balloon);
+qemuDomainObjExitMonitor(driver, vm);
+
+if (err < 0) {
+/* We couldn't get current memory allocation but that's not
+ * a show stopper; we wouldn't get it if there was a job
+ * active either
+ */
+*memory = vm->def->mem.cur_balloon;
+} else if (err == 0) {
+/* Balloon not supported, so maxmem is always the allocation */
+*memory = vm->def->mem.max_balloon;
+} else {
+*memory = balloon;
+}
+}
+}
+
+
 static int qemuDomainGetInfo(virDomainPtr dom,
  virDomainInfoPtr info)
 {
@@ -17387,6 +17428,34 @@ qemuDomainGetStatsCpu(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 return 0;
 }
 
+static int
+qemuDomainGetStatsBalloon(virConnectPtr conn,
+  virDomainObjPtr dom,
+  virDomainStatsRecordPtr record,
+  int *maxparams,
+  unsigned int privflags ATTRIBUTE_UNUSED)
+{
+virQEMUDriverPtr driver = conn->privateData;
+unsigned long cur_balloon = 0;
+
+qemuDomainGetBalloonMemory(driver, dom, &cur_balloon);
+
+if (virTypedParamsAddULLong(&record->params,
+&record->nparams,
+maxparams,
+"balloon.current",
+cur_balloon) < 0)
+return -1;
+
+if (virTypedParamsAddULLong(&record->params,
+&record->nparams,
+maxparams,
+"balloon.maximum",
+dom->def->mem.max_balloon) < 0)
+return -1;
+
+return 0;
+}
 
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
@@ -17404,6 +17473,7 @@ struct qemuDomainGetStatsWorker {
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE, false },
 { qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL, false },
+{ qemuDomainGetStatsBalloon, VIR_DOMAIN_STATS_BALLOON, true },
 { NULL, 0, false }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 8/8] virsh: add options to query bulk stats group

2014-09-08 Thread Francesco Romani
Exports the new bulk stats groups to the domstats command.

Signed-off-by: Francesco Romani from...@redhat.com
---
 tools/virsh-domain-monitor.c | 35 +++
 tools/virsh.pod  |  4 +++-
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/tools/virsh-domain-monitor.c b/tools/virsh-domain-monitor.c
index 055d8d2..d013ca8 100644
--- a/tools/virsh-domain-monitor.c
+++ b/tools/virsh-domain-monitor.c
@@ -1972,6 +1972,26 @@ static const vshCmdOptDef opts_domstats[] = {
  .type = VSH_OT_BOOL,
  .help = N_("report domain state"),
 },
+{.name = "cpu-total",
+ .type = VSH_OT_BOOL,
+ .help = N_("report domain physical cpu usage"),
+},
+{.name = "balloon",
+ .type = VSH_OT_BOOL,
+ .help = N_("report domain balloon statistics"),
+},
+{.name = "vcpu",
+ .type = VSH_OT_BOOL,
+ .help = N_("report domain virtual cpu information"),
+},
+{.name = "interface",
+ .type = VSH_OT_BOOL,
+ .help = N_("report domain network interface information"),
+},
+{.name = "block",
+ .type = VSH_OT_BOOL,
+ .help = N_("report domain block device statistics"),
+},
 {.name = "list-active",
  .type = VSH_OT_BOOL,
  .help = N_("list only active domains"),
@@ -2063,6 +2083,21 @@ cmdDomstats(vshControl *ctl, const vshCmd *cmd)
 if (vshCommandOptBool(cmd, "state"))
 stats |= VIR_DOMAIN_STATS_STATE;
 
+if (vshCommandOptBool(cmd, "cpu-total"))
+stats |= VIR_DOMAIN_STATS_CPU_TOTAL;
+
+if (vshCommandOptBool(cmd, "balloon"))
+stats |= VIR_DOMAIN_STATS_BALLOON;
+
+if (vshCommandOptBool(cmd, "vcpu"))
+stats |= VIR_DOMAIN_STATS_VCPU;
+
+if (vshCommandOptBool(cmd, "interface"))
+stats |= VIR_DOMAIN_STATS_INTERFACE;
+
+if (vshCommandOptBool(cmd, "block"))
+stats |= VIR_DOMAIN_STATS_BLOCK;
+
 if (vshCommandOptBool(cmd, "list-active"))
 flags |= VIR_CONNECT_GET_ALL_DOMAINS_STATS_ACTIVE;
 
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 4401d55..ec07913 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -814,6 +814,7 @@ Isnapshot-create for disk snapshots) will accept either 
target
 or unique source names printed by this command.
 
 =item B<domstats> [I<--raw>] [I<--enforce>] [I<--state>]
+[I<--cpu-total>][I<--balloon>][I<--vcpu>][I<--interface>][I<--block>]
 [[I<--list-active>] [I<--list-inactive>] [I<--list-persistent>]
 [I<--list-transient>] [I<--list-running>] [I<--list-paused>]
 [I<--list-shutoff>] [I<--list-other>]] | [I<domain> ...]
@@ -831,7 +832,8 @@ behavior use the I<--raw> flag.
 
 The individual statistics groups are selectable via specific flags. By
 default all supported statistics groups are returned. Supported
-statistics groups flags are: I<--state>.
+statistics groups flags are: I<--state>, I<--cpu-total>, I<--balloon>,
+I<--vcpu>, I<--interface>, I<--block>.
 
 Selecting a specific statistics groups doesn't guarantee that the
 daemon supports the selected group of stats. Flag I--enforce
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCHv3 2/8] qemu: bulk stats: implement CPU stats group

2014-09-08 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_CPU_TOTAL
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/libvirt.c|  9 
 src/qemu/qemu_driver.c   | 51 
 3 files changed, 61 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index aced31c..e6ed803 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2511,6 +2511,7 @@ struct _virDomainStatsRecord {
 
 typedef enum {
 VIR_DOMAIN_STATS_STATE = (1  0), /* return domain state */
+VIR_DOMAIN_STATS_CPU_TOTAL = (1  1), /* return domain CPU info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index 4806535..4d504ff 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21554,6 +21554,15 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * state.reason - reason for entering given state, returned as int from
  *  virDomain*Reason enum corresponding to given state.
  *
+ * VIR_DOMAIN_STATS_CPU_TOTAL: Return CPU statistics and usage information.
+ * The typed parameter keys are in this format:
+ * cpu.count - number as unsigned int of physical cpus available to
+ *   this domain.
+ * cpu.time - total cpu time spent for this domain as unsigned long long.
+ * cpu.user - user cpu time spent as unsigned long long.
+ * cpu.system - system cpu time spent as unsigned long long.
+ *
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 2950a4b..cfc5941 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -96,6 +96,7 @@
 #include storage/storage_driver.h
 #include virhostdev.h
 #include domain_capabilities.h
+#include vircgroup.h
 
 #define VIR_FROM_THIS VIR_FROM_QEMU
 
@@ -17338,6 +17339,55 @@ qemuDomainGetStatsState(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 }
 
 
+static int
+qemuDomainGetStatsCpu(virConnectPtr conn ATTRIBUTE_UNUSED,
+  virDomainObjPtr dom,
+  virDomainStatsRecordPtr record,
+  int *maxparams,
+  unsigned int privflags ATTRIBUTE_UNUSED)
+{
+qemuDomainObjPrivatePtr priv = dom-privateData;
+unsigned long long cpu_time = 0;
+unsigned long long user_time = 0;
+unsigned long long sys_time = 0;
+int ncpus = 0;
+int err;
+
+ncpus = nodeGetCPUCount();
+if (ncpus  0 
+virTypedParamsAddUInt(record-params,
+  record-nparams,
+  maxparams,
+  cpu.count,
+  (unsigned int)ncpus)  0)
+return -1;
+
+err = virCgroupGetCpuacctUsage(priv-cgroup, cpu_time);
+if (!err  virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+cpu.time,
+cpu_time)  0)
+return -1;
+
+err = virCgroupGetCpuacctStat(priv-cgroup, user_time, sys_time);
+if (!err  virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+cpu.user,
+user_time)  0)
+return -1;
+if (!err  virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+cpu.system,
+sys_time)  0)
+return -1;
+
+return 0;
+}
+
+
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
   virDomainObjPtr dom,
@@ -17353,6 +17403,7 @@ struct qemuDomainGetStatsWorker {
 
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE, false },
+{ qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL, false },
 { NULL, 0, false }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCHv3 5/8] qemu: bulk stats: implement interface group

2014-09-08 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_INTERFACE
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/libvirt.c| 14 +++
 src/qemu/qemu_driver.c   | 87 
 3 files changed, 102 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 68573a0..93aa1fb 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2514,6 +2514,7 @@ typedef enum {
 VIR_DOMAIN_STATS_CPU_TOTAL = (1  1), /* return domain CPU info */
 VIR_DOMAIN_STATS_BALLOON = (1  2), /* return domain balloon info */
 VIR_DOMAIN_STATS_VCPU = (1  3), /* return domain virtual CPU info */
+VIR_DOMAIN_STATS_INTERFACE = (1  4), /* return domain interfaces info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index 0326847..8aa6cb1 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21582,6 +21582,20 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * as unsigned long long.
  * vcpu.<num>.cpu - physical CPU pinned to virtual CPU <num> as int.
  *
+ * VIR_DOMAIN_STATS_INTERFACE: Return network interface statistics.
+ * The typed parameter keys are in this format:
+ * net.count - number of network interfaces on this domain
+ *   as unsigned int.
+ * net.<num>.name - name of the interface <num> as string.
+ * net.<num>.rx.bytes - bytes received as long long.
+ * net.<num>.rx.pkts - packets received as long long.
+ * net.<num>.rx.errs - receive errors as long long.
+ * net.<num>.rx.drop - receive packets dropped as long long.
+ * net.<num>.tx.bytes - bytes transmitted as long long.
+ * net.<num>.tx.pkts - packets transmitted as long long.
+ * net.<num>.tx.errs - transmission errors as long long.
+ * net.<num>.tx.drop - transmit packets dropped as long long.
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 6bcbfb5..989eb3e 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17537,6 +17537,92 @@ qemuDomainGetStatsVcpu(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 return ret;
 }
 
+#define QEMU_ADD_COUNT_PARAM(record, maxparams, type, count) \
+do { \
+char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
+snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, %s.count, type); \
+if (virTypedParamsAddUInt((record)-params, \
+  (record)-nparams, \
+  maxparams, \
+  param_name, \
+  count)  0) \
+return -1; \
+} while (0)
+
+#define QEMU_ADD_NAME_PARAM(record, maxparams, type, num, name) \
+do { \
+char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
+snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
+ %s.%lu.name, type, num); \
+if (virTypedParamsAddString((record)-params, \
+(record)-nparams, \
+maxparams, \
+param_name, \
+name)  0) \
+return -1; \
+} while (0)
+
+#define QEMU_ADD_NET_PARAM(record, maxparams, num, name, value) \
+do { \
+char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; \
+snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, \
+ net.%lu.%s, num, name); \
+if (virTypedParamsAddLLong((record)-params, \
+   (record)-nparams, \
+   maxparams, \
+   param_name, \
+   value)  0) \
+return -1; \
+} while (0)
+
+static int
+qemuDomainGetStatsInterface(virConnectPtr conn ATTRIBUTE_UNUSED,
+virDomainObjPtr dom,
+virDomainStatsRecordPtr record,
+int *maxparams,
+unsigned int privflags ATTRIBUTE_UNUSED)
+{
+size_t i;
+struct _virDomainInterfaceStats tmp;
+
+QEMU_ADD_COUNT_PARAM(record, maxparams, net, dom-def-nnets);
+
+/* Check the path is one of the domain's network interfaces. */
+for (i = 0; i  dom-def-nnets; i++) {
+memset(tmp, 0, sizeof(tmp));
+
+if (virNetInterfaceStats(dom-def-nets[i]-ifname, tmp)  0)
+continue;
+
+QEMU_ADD_NAME_PARAM(record, maxparams,
+net, i, dom-def-nets[i]-ifname);
+
+QEMU_ADD_NET_PARAM(record, maxparams, i,
+   rx.bytes, tmp.rx_bytes);
+QEMU_ADD_NET_PARAM(record, maxparams, i,
+   rx.pkts, tmp.rx_packets);
+QEMU_ADD_NET_PARAM(record, maxparams, i,
+   rx.errs, tmp.rx_errs);
+QEMU_ADD_NET_PARAM(record, maxparams, i,
+   rx.drop, tmp.rx_drop);
+QEMU_ADD_NET_PARAM(record, maxparams, i

Re: [libvirt] [PATCH 01/11] qemu: extract helper to get the current balloon

2014-09-04 Thread Francesco Romani
- Original Message -
 From: Francesco Romani from...@redhat.com
 To: libvir-list@redhat.com
 Sent: Wednesday, September 3, 2014 8:41:13 AM
 Subject: Re: [libvirt] [PATCH 01/11] qemu: extract helper to get the current 
 balloon

[...]

   +
   + cleanup:
   +if (vm)
   +virObjectUnlock(vm);
   +return ret;
   +}
  
  [3] Ouch.  This function is unlocking vm, even though it did not obtain
  the lock.  Which it kind of has to do because of the way that
  qemuDomainObjEndJob may end up invalidating vm.  While transfer
  semantics are workable, they require good comments at the start of the
  function, and making sure that the caller doesn't duplicate the efforts,
  nor forget anything else.
 
 Will add comment documenting this. Is this sufficient or there is something
 better I could do?

After some thought and an enlightening chat, turns out I can do better,
by changing a bit more qemuDomainGetStats:

* let qemuDomainGetStats handle the job
* let the helper run the monitor jobs inside the job provided by the caller
  (this requirement will be of course documented)

This way, we will
* have one job with multiple monitor jobs inside and
* avoid altogether the awkward transfer semantics

And this should ultimately lead to both clearer and faster code.
So, I'll explore this direction first.
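
For concreteness, this is roughly the shape I have in mind for the
helpers (just a sketch - names follow the existing code, the exact
signatures may still change in v3):

    /* the caller (qemuDomainGetStats) already holds QEMU_JOB_QUERY;
     * the helper only nests a monitor job and never unlocks @dom */
    static int
    qemuDomainGetStatsBalloon(virQEMUDriverPtr driver,
                              virDomainObjPtr dom,
                              virDomainStatsRecordPtr record,
                              int *maxparams)
    {
        unsigned long long balloon = 0;
        qemuDomainObjPrivatePtr priv = dom->privateData;

        qemuDomainObjEnterMonitor(driver, dom);
        ignore_value(qemuMonitorGetBalloonInfo(priv->mon, &balloon));
        qemuDomainObjExitMonitor(driver, dom);

        return virTypedParamsAddULLong(&record->params, &record->nparams,
                                       maxparams, "balloon.current", balloon);
    }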

Moreover, after of course addressing all other comments, I'm going to squash
the patch which extracts the helper with the one which adds the bulk stats group
which will make use of it, in order to make obvious how the helpers are going
to be used.

V3 with all the above will be posted ASAP.

Cheers,

-- 
Francesco Romani
RedHat Engineering Virtualization R  D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 01/11] qemu: extract helper to get the current balloon

2014-09-03 Thread Francesco Romani
- Original Message -
 From: Eric Blake ebl...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Tuesday, September 2, 2014 11:01:25 PM
 Subject: Re: [libvirt] [PATCH 01/11] qemu: extract helper to get the current 
 balloon

Hi Eric, thanks for the review(s).

 On 09/02/2014 06:31 AM, Francesco Romani wrote:
  Refactor the code to extract an helper method
  to get the current balloon settings.
  
  Signed-off-by: Francesco Romani from...@redhat.com
  ---
   src/qemu/qemu_driver.c | 98
   ++
   1 file changed, 60 insertions(+), 38 deletions(-)
  
  diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
  index 239a300..bbd16ed 100644
  --- a/src/qemu/qemu_driver.c
  +++ b/src/qemu/qemu_driver.c
  @@ -168,6 +168,9 @@ static int qemuOpenFileAs(uid_t fallback_uid, gid_t
  fallback_gid,
 const char *path, int oflags,
 bool *needUnlink, bool *bypassSecurityDriver);
   
  +static int qemuDomainGetBalloonMemory(virQEMUDriverPtr driver,
  +  virDomainObjPtr vm,
  +  unsigned long *memory);
 
 Forward declarations of non-recursive static functions is usually a sign
 that you didn't topologically sort your code correctly.  Just  implement
 the function here, instead of later on.

Will do.

 
   
   virQEMUDriverPtr qemu_driver = NULL;
   
  @@ -2519,6 +2522,60 @@ static int qemuDomainSendKey(virDomainPtr domain,
   return ret;
   }
   
  +static int qemuDomainGetBalloonMemory(virQEMUDriverPtr driver,
  +  virDomainObjPtr vm,
  +  unsigned long *memory)
 
 Libvirt style is tending towards two blank lines between functions,

My mistake. I did run 'make syntax-check', I wonder if that was supposed to 
catch
this.

 and return type on separate line (although we don't enforce either of these
 yet, due to the large existing code base that used other styles), as in:
 static int
 qemuDomainGetBalloonMemory(virQEMUDriverPtr driver, ...

I was a bit confused by the mixed styles found in the code.
Will stick with the one you pointed out in this and in future patches.

 
  +{
  +int ret = -1;
  +int err = 0;
  +qemuDomainObjPrivatePtr priv = vm-privateData;
  +
  +if ((vm-def-memballoon != NULL) 
  +(vm-def-memballoon-model == VIR_DOMAIN_MEMBALLOON_MODEL_NONE))
  {
 
 [1] Over-parenthesized.  Sufficient to write:
 
 if (vm-def-memballoon 
 vm-def-memballoon-model == VIR_DOMAIN_MEMBALLOON_MODEL_NONE) {

Will change.

 
  +*memory = vm-def-mem.max_balloon;
  +} else if (virQEMUCapsGet(priv-qemuCaps, QEMU_CAPS_BALLOON_EVENT)) {
  +*memory = vm-def-mem.cur_balloon;
  +} else if (qemuDomainJobAllowed(priv, QEMU_JOB_QUERY)) {
  +unsigned long long balloon;
  +
  +if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY)  0)
  +goto cleanup;
  +if (!virDomainObjIsActive(vm))
  +err = 0;
  +else {
 
 [2] If one leg of if-else has {}, both legs must have it.  This is
 documented in HACKING (and I really ought to add a syntax check that
 forbids obvious cases of unbalanced braces).

I must have missed. Will fix.

  
  +
  + cleanup:
  +if (vm)
  +virObjectUnlock(vm);
  +return ret;
  +}
 
 [3] Ouch.  This function is unlocking vm, even though it did not obtain
 the lock.  Which it kind of has to do because of the way that
 qemuDomainObjEndJob may end up invalidating vm.  While transfer
 semantics are workable, they require good comments at the start of the
 function, and making sure that the caller doesn't duplicate the efforts,
 nor forget anything else.

Will add comment documenting this. Is this sufficient or there is something
better I could do?

 
  @@ -2526,7 +2583,6 @@ static int qemuDomainGetInfo(virDomainPtr dom,
   virDomainObjPtr vm;
   int ret = -1;
   int err;
  -unsigned long long balloon;
   
   if (!(vm = qemuDomObjFromDomain(dom)))
   goto cleanup;
  @@ -2549,43 +2605,9 @@ static int qemuDomainGetInfo(virDomainPtr dom,
   info-maxMem = vm-def-mem.max_balloon;
   
   if (virDomainObjIsActive(vm)) {
  -qemuDomainObjPrivatePtr priv = vm-privateData;
  -
  -if ((vm-def-memballoon != NULL) 
  -(vm-def-memballoon-model ==
  VIR_DOMAIN_MEMBALLOON_MODEL_NONE)) {
 
 [1] Then again, your parenthesis...
 
  -info-memory = vm-def-mem.max_balloon;
  -} else if (virQEMUCapsGet(priv-qemuCaps,
  QEMU_CAPS_BALLOON_EVENT)) {
  -info-memory = vm-def-mem.cur_balloon;
  -} else if (qemuDomainJobAllowed(priv, QEMU_JOB_QUERY)) {
  -if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY)  0)
  -goto cleanup;
  -if (!virDomainObjIsActive(vm))
  -err = 0;
  -else

Re: [libvirt] [PATCH 02/11] qemu: extract helper to gather vcpu data

2014-09-03 Thread Francesco Romani
- Original Message -
 From: Eric Blake ebl...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Tuesday, September 2, 2014 11:42:09 PM
 Subject: Re: [libvirt] [PATCH 02/11] qemu: extract helper to gather vcpu data
 
 On 09/02/2014 06:31 AM, Francesco Romani wrote:
  Extracts an helper to gether the VCpu
 
 s/an/a/
 s/gether/gather/

Will fix,

   virQEMUDriverPtr qemu_driver = NULL;
   
   
  @@ -4974,10 +4980,7 @@ qemuDomainGetVcpus(virDomainPtr dom,
  int maplen)
   {
   virDomainObjPtr vm;
  -size_t i;
  -int v, maxcpu, hostcpus;
   int ret = -1;
  -qemuDomainObjPrivatePtr priv;
   
   if (!(vm = qemuDomObjFromDomain(dom)))
   goto cleanup;
  @@ -4992,7 +4995,25 @@ qemuDomainGetVcpus(virDomainPtr dom,
   goto cleanup;
   }
   
  -priv = vm-privateData;
  +ret = qemuDomainHelperGetVcpus(vm, info, maxinfo, cpumaps, maplen);
  +
  + cleanup:
  +if (vm)
  +virObjectUnlock(vm);
 
 Ouch.  You have a double free.  This frees vm, even though it was calling...
 
  +return ret;
  +}
  +
  +static int
  +qemuDomainHelperGetVcpus(virDomainObjPtr vm,
  + virVcpuInfoPtr info,
  + int maxinfo,
  + unsigned char *cpumaps,
  + int maplen)
  +{
  +int ret = -1;
  +int v, maxcpu, hostcpus;
  +size_t i;
  +qemuDomainObjPrivatePtr priv = vm-privateData;
 
 ...a function that now has transfer semantics.  But unlike patch 1,
 where transfer semantics were necessary because of the way you drop lock
 in order to do a monitor call, this patch appears to not need them; and
 the solution is to just sanitize the cleanup label (at which point it
 becomes a mere 'return ret', so you could then replace all 'goto
 cleanup' with a direct return).

Thanks, will fix.
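
Roughly what I plan for v3 (sketch only): drop the transfer semantics
from the helper, so it never touches the lock and the caller keeps the
single cleanup path:

    static int
    qemuDomainHelperGetVcpus(virDomainObjPtr vm,
                             virVcpuInfoPtr info,
                             int maxinfo,
                             unsigned char *cpumaps,
                             int maplen)
    {
        int hostcpus;

        if ((hostcpus = nodeGetCPUCount()) < 0)
            return -1;

        /* ... fill @info and @cpumaps exactly as before ... */

        return maxinfo;
    }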

-- 
Francesco Romani
RedHat Engineering Virtualization R  D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCHv2 03/11] qemu: add helper to get the block stats

2014-09-03 Thread Francesco Romani
- Original Message -
 From: Eric Blake ebl...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Wednesday, September 3, 2014 12:14:58 AM
 Subject: Re: [libvirt] [PATCHv2 03/11] qemu: add helper to get the block  
 stats
 
 On 09/02/2014 06:31 AM, Francesco Romani wrote:
  Add an helper function to get the block stats
  of a disk.
  This helper is meant to be used by the bulk stats API.
  
  Signed-off-by: Francesco Romani from...@redhat.com
  ---
   src/qemu/qemu_driver.c   |  41 +++
   src/qemu/qemu_monitor.c  |  23 +
   src/qemu/qemu_monitor.h  |  18 +++
   src/qemu/qemu_monitor_json.c | 118
   +--
   src/qemu/qemu_monitor_json.h |   4 ++
   5 files changed, 165 insertions(+), 39 deletions(-)
  
  diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
  index 1842e60..39e2c1b 100644
  --- a/src/qemu/qemu_driver.c
  +++ b/src/qemu/qemu_driver.c
  @@ -178,6 +178,12 @@ static int qemuDomainHelperGetVcpus(virDomainObjPtr
  vm,
   unsigned char *cpumaps,
   int maplen);
   
  +static int qemuDomainGetBlockStats(virQEMUDriverPtr driver,
  +   virDomainObjPtr vm,
  +   struct qemuBlockStats *stats,
  +   int nstats);
  +
  +
 
 Another forward declaration to be avoided.
 
 Why do you need 'struct qemuBlockStats' in the declaration? Did you
 forget a typedef somewhere?

I saw an internal struct typedef-less (qemuAutostartData maybe?)
and this somehow got stuck in my mind.
But I believe modern libvirt style mandates typedef, is that correct?
I'll amend my code accordingly.
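
Concretely, something along these lines (sketch, following the usual
libvirt typedef convention):

    typedef struct _qemuBlockStats qemuBlockStats;
    typedef qemuBlockStats *qemuBlockStatsPtr;

    struct _qemuBlockStats {
        long long rd_req;
        long long rd_bytes;
        /* ... */
    };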

  +static int
  +qemuDomainGetBlockStats(virQEMUDriverPtr driver,
  +virDomainObjPtr vm,
  +struct qemuBlockStats *stats,
  +int nstats)
  +{
  +int ret = -1;
  +qemuDomainObjPrivatePtr priv;
  +
  +if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY)  0)
  +goto cleanup;
  +
  +priv = vm-privateData;
  +
  +qemuDomainObjEnterMonitor(driver, vm);
 
 Missing a call to check if the domain is still running.  The mere act of
 calling qemuDomainObjBeginJob causes us to temporarily drop locks, and
 while the locks are down, the VM can shutdown independently and that
 makes the monitor go away; but qemuDomainObjEnterMonitor is only safe to
 call if we know the monitor still exists.  There should be plenty of
 examples to copy from.

Thanks for the explanation, will add the check.

  +
  +ret = qemuMonitorGetAllBlockStatsInfo(priv-mon, NULL,
  +  stats, nstats);
  +
  +qemuDomainObjExitMonitor(driver, vm);
  +
  +if (!qemuDomainObjEndJob(driver, vm))
  +vm = NULL;
  +
  + cleanup:
  +if (vm)
  +virObjectUnlock(vm);
 
 Another case of required transfer semantics.
 
 Is this patch complete? I don't see any caller of the new
 qemuDomainGetBlockStats, and gcc generally gives a compiler warning
 (which then turns into a failed build due to -Werror) if you have an
 unused static function.

It does. I (wrongly) thought it was easier/better to have little patches,
so one which adds code and one which makes use of it (10/11 in this series).
Will squash with the dependent patch on the next submission.

  +int qemuMonitorGetAllBlockStatsInfo(qemuMonitorPtr mon,
 
 Style: two blank lines between functions, return type on its own line.

Oops. Will fix.
 
  +const char *dev_name,
  +struct qemuBlockStats *stats,
  +int nstats)
  +{
  +int ret;
  +VIR_DEBUG(mon=%p dev=%s, mon, dev_name);
  +
  +if (!mon) {
  +virReportError(VIR_ERR_INVALID_ARG, %s,
  +   _(monitor must not be NULL));
  +return -1;
  +}
 
 This if can be nuked if you'd just mark the mon parameter as
 ATTRIBUTE_NONNULL (all our callers use it correctly, after all).

Will do.
 
  +
  +if (mon-json)
  +ret = qemuMonitorJSONGetAllBlockStatsInfo(mon, dev_name,
  +  stats, nstats);
  +else
  +ret = -1; /* not supported */
 
 Returning -1 without printing an error message is bad.

Will add.
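Something like this (sketch, exact wording of the message to be decided):

    if (mon->json) {
        ret = qemuMonitorJSONGetAllBlockStatsInfo(mon, dev_name,
                                                  stats, nstats);
    } else {
        virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                       _("all block stats query requires the JSON monitor"));
        ret = -1;
    }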

  +
  +struct qemuBlockStats {
  +long long rd_req;
  +long long rd_bytes;
  +long long wr_req;
  +long long wr_bytes;
  +long long rd_total_times;
  +long long wr_total_times;
  +long long flush_req;
  +long long flush_total_times;
  +long long errs; /* meaningless for QEMU */
 
 Umm, why do we need an 'errs' parameter, if it is meaningless?  I can
 see that this is sort of a superset of the public virDomainBlockStats
 struct, but that struct is generic to multiple hypervisors; and it also
 looks like

Re: [libvirt] [PATCHv2 00/11] bulk stats: QEMU implementation

2014-09-03 Thread Francesco Romani
- Original Message -
 From: Eric Blake ebl...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Wednesday, September 3, 2014 1:15:38 AM
 Subject: Re: [libvirt] [PATCHv2 00/11] bulk stats: QEMU implementation
 
 On 09/02/2014 06:31 AM, Francesco Romani wrote:
  This patchset enhances the QEMU support
  for the new bulk stats API to include
  equivalents of these APIs:
  
  virDomainBlockInfo
  virDomainGetInfo - for balloon stats
  virDomainGetCPUStats
  virDomainBlockStatsFlags
  virDomainInterfaceStats
  virDomainGetVcpusFlags
  virDomainGetVcpus
  
  This subset of API is the one oVirt relies on.
  Scale/stress test on an oVirt test environment is in progress.
  
  changes in v2: polishing and optimizations.
  - incorporated feedback from Li Wei (thanks)
  - added documentation
  - optimized block group to gather all the information with just
one call to QEMU monitor
  - stripped to bare bones merged the 'block info' group into the
'block' group - oVirt actually needs just one stat from there
  - reorganized the keys to be more consistent and shorter.
 
 Missing is virsh exposure of the new stat groups (Li's series gave an
 example for adding --block).

Sure, will add with a followup patch.
One question, though.
Do we want the default to expose nothing, with options to add groups - is this
correct?

So usage would be something like

virsh domstats  # equivalent to --all , I suppose
virsh domstats --block  # only block group
virsh domstats --interface --vcpu  # only vcpu and interface

Thanks and bests,


-- 
Francesco Romani
RedHat Engineering Virtualization R  D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCHv2 08/11] qemu: bulk stats: implement VCPU group

2014-09-03 Thread Francesco Romani
- Original Message -
 From: Eric Blake ebl...@redhat.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Wednesday, September 3, 2014 1:00:53 AM
 Subject: Re: [libvirt] [PATCHv2 08/11] qemu: bulk stats: implement VCPU group
 
 On 09/02/2014 06:31 AM, Francesco Romani wrote:
  This patch implements the VIR_DOMAIN_STATS_VCPU
  group of statistics.
  
  Signed-off-by: Francesco Romani from...@redhat.com
  ---
   include/libvirt/libvirt.h.in |  1 +
   src/libvirt.c|  8 +
   src/qemu/qemu_driver.c   | 72
   
   3 files changed, 81 insertions(+)
  
 
*
  + * VIR_DOMAIN_STATS_VCPU: Return virtual CPU statistics.
  + * The typed parameter keys are in this format:
  + * vcpu.current - current number of online virtual CPUs
  + * vcpu.maximum - maximum number of online virtual CPUs
  + * vcpu.num.state - state of the virtual CPU num
 
 Is this an int mapping to some particular enum?

Yep: virVcpuState. Will document this.

  + * vcpu.num.time - virtual cpu time spent by virtual CPU num
  + * vcpu.num.cpu - physical CPU pinned to virtual CPU num
 
 Missing types.
 
 Should there be a parameter vcpu.count that says how many vcpu.num
 entries to expect?  Or is that vcpu.current?  Or do we have a situation
 where if cpus 0 and 2 are online but 1 and 3 are offline, then we have
 vcpu.0.x and vcpu2.x but not vcpu1.x?

Yes, the latter, due to VCPU hot(un)plugging

 A bit more documentation will help the user deciding which array
 entries to expect, and whether the array will be sparse if cpus are
 offline.

Will document that the array will contain up to vcpu.maximum items, may
be sparse, and that the number of populated entries will be vcpu.current.
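
For a consumer that means something like (sketch, not part of the series):

    /* iterate the possibly sparse vcpu.<num>.* entries: offline vcpus
     * simply have no entry, so look each index up and skip misses */
    unsigned int maximum = 0;
    size_t n;

    if (virTypedParamsGetUInt(record->params, record->nparams,
                              "vcpu.maximum", &maximum) != 1)
        maximum = 0;

    for (n = 0; n < maximum; n++) {
        char name[VIR_TYPED_PARAM_FIELD_LENGTH];
        int state;

        snprintf(name, sizeof(name), "vcpu.%zu.state", n);
        if (virTypedParamsGetInt(record->params, record->nparams,
                                 name, &state) != 1)
            continue;  /* vcpu offline / unplugged */
        /* use @state ... */
    }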

  +{
  +size_t i;
  +int ret = -1;
  +char param_name[NAME_MAX];
 
 NAME_MAX (typically 256) is huge, compared to
 VIR_TYPED_PARAM_FIELD_LENGTH (80).

Will switch to VIR_TYPED_PARAM_FIELD_LENGTH here and everywhere I used
NAME_MAX.

Thanks,

-- 
Francesco Romani
RedHat Engineering Virtualization R  D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 04/11] qemu: report highest offset into block stats

2014-09-02 Thread Francesco Romani
This patch adds the reporting of the highest
written offset to the block stats.

QEMU has always reported this information as part of
the block stats, and by exposing it here we gain the
information provided by GetBlockInfo without entering
the monitor again.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/qemu/qemu_monitor.h  | 1 +
 src/qemu/qemu_monitor_json.c | 7 +++
 2 files changed, 8 insertions(+)

diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 1b7d00b..8e16c7d 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -357,6 +357,7 @@ struct qemuBlockStats {
 long long flush_req;
 long long flush_total_times;
 long long errs; /* meaningless for QEMU */
+unsigned long long wr_highest_offset;
 };
 
 int qemuMonitorGetAllBlockStatsInfo(qemuMonitorPtr mon,
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 68c5cf8..31ffb6c 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -1880,6 +1880,13 @@ int qemuMonitorJSONGetAllBlockStatsInfo(qemuMonitorPtr 
mon,
flush_total_time_ns);
 goto cleanup;
 }
+if (virJSONValueObjectGetNumberUlong(stats, "wr_highest_offset",
+ &blockstats->wr_highest_offset) < 0) {
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   _("cannot read %s statistic"),
+   "wr_highest_offset");
+goto cleanup;
+}
+
 blockstats->errs = -1; /* meaningless for QEMU */
 
 ret++;
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCHv2 00/11] bulk stats: QEMU implementation

2014-09-02 Thread Francesco Romani
This patchset enhances the QEMU support
for the new bulk stats API to include
equivalents of these APIs:

virDomainBlockInfo
virDomainGetInfo - for balloon stats
virDomainGetCPUStats
virDomainBlockStatsFlags
virDomainInterfaceStats
virDomainGetVcpusFlags
virDomainGetVcpus

This subset of API is the one oVirt relies on.
Scale/stress test on an oVirt test environment is in progress.

changes in v2: polishing and optimizations.
- incorporated feedback from Li Wei (thanks)
- added documentation
- optimized block group to gather all the information with just
  one call to QEMU monitor
- stripped to bare bones merged the 'block info' group into the
  'block' group - oVirt actually needs just one stat from there
- reorganized the keys to be more consistent and shorter.

The patchset is organized as follows:
- the first 4 patches do refactoring to extract internal helper
  functions to be used by the old API and by the new bulk one.
  For block stats one helper is actually added instead of extracted.
- since some groups require access to the QEMU monitor, one patch
  extends the internal interface to easily accommodate that
- finally, the last six patches implement the support for the
  bulk API.

Francesco Romani (11):
  qemu: extract helper to get the current balloon
  qemu: extract helper to gather vcpu data
  qemu: add helper to get the block stats
  qemu: report highest offset into block stats
  qemu: bulk stats: pass connection to workers
  qemu: bulk stats: implement CPU stats group
  qemu: bulk stats: implement balloon group
  qemu: bulk stats: implement VCPU group
  qemu: bulk stats: implement interface group
  qemu: bulk stats: implement block group
  qemu: bulk stats: add block allocation information

 include/libvirt/libvirt.h.in |   5 +
 src/libvirt.c|  47 
 src/qemu/qemu_driver.c   | 500 +++
 src/qemu/qemu_monitor.c  |  23 ++
 src/qemu/qemu_monitor.h  |  19 ++
 src/qemu/qemu_monitor_json.c | 125 +++
 src/qemu/qemu_monitor_json.h |   4 +
 7 files changed, 639 insertions(+), 84 deletions(-)

-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 01/11] qemu: extract helper to get the current balloon

2014-09-02 Thread Francesco Romani
Refactor the code to extract a helper method
to get the current balloon settings.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/qemu/qemu_driver.c | 98 ++
 1 file changed, 60 insertions(+), 38 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 239a300..bbd16ed 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -168,6 +168,9 @@ static int qemuOpenFileAs(uid_t fallback_uid, gid_t 
fallback_gid,
   const char *path, int oflags,
   bool *needUnlink, bool *bypassSecurityDriver);
 
+static int qemuDomainGetBalloonMemory(virQEMUDriverPtr driver,
+  virDomainObjPtr vm,
+  unsigned long *memory);
 
 virQEMUDriverPtr qemu_driver = NULL;
 
@@ -2519,6 +2522,60 @@ static int qemuDomainSendKey(virDomainPtr domain,
 return ret;
 }
 
+static int qemuDomainGetBalloonMemory(virQEMUDriverPtr driver,
+  virDomainObjPtr vm,
+  unsigned long *memory)
+{
+int ret = -1;
+int err = 0;
+qemuDomainObjPrivatePtr priv = vm-privateData;
+
+if ((vm-def-memballoon != NULL) 
+(vm-def-memballoon-model == VIR_DOMAIN_MEMBALLOON_MODEL_NONE)) {
+*memory = vm-def-mem.max_balloon;
+} else if (virQEMUCapsGet(priv-qemuCaps, QEMU_CAPS_BALLOON_EVENT)) {
+*memory = vm-def-mem.cur_balloon;
+} else if (qemuDomainJobAllowed(priv, QEMU_JOB_QUERY)) {
+unsigned long long balloon;
+
+if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY)  0)
+goto cleanup;
+if (!virDomainObjIsActive(vm))
+err = 0;
+else {
+qemuDomainObjEnterMonitor(driver, vm);
+err = qemuMonitorGetBalloonInfo(priv-mon, balloon);
+qemuDomainObjExitMonitor(driver, vm);
+}
+if (!qemuDomainObjEndJob(driver, vm)) {
+vm = NULL;
+goto cleanup;
+}
+
+if (err  0) {
+/* We couldn't get current memory allocation but that's not
+ * a show stopper; we wouldn't get it if there was a job
+ * active either
+ */
+*memory = vm-def-mem.cur_balloon;
+} else if (err == 0) {
+/* Balloon not supported, so maxmem is always the allocation */
+*memory = vm-def-mem.max_balloon;
+} else {
+*memory = balloon;
+}
+} else {
+*memory = vm-def-mem.cur_balloon;
+}
+
+ret = 0;
+
+ cleanup:
+if (vm)
+virObjectUnlock(vm);
+return ret;
+}
+
 static int qemuDomainGetInfo(virDomainPtr dom,
  virDomainInfoPtr info)
 {
@@ -2526,7 +2583,6 @@ static int qemuDomainGetInfo(virDomainPtr dom,
 virDomainObjPtr vm;
 int ret = -1;
 int err;
-unsigned long long balloon;
 
 if (!(vm = qemuDomObjFromDomain(dom)))
 goto cleanup;
@@ -2549,43 +2605,9 @@ static int qemuDomainGetInfo(virDomainPtr dom,
 info-maxMem = vm-def-mem.max_balloon;
 
 if (virDomainObjIsActive(vm)) {
-qemuDomainObjPrivatePtr priv = vm-privateData;
-
-if ((vm-def-memballoon != NULL) 
-(vm-def-memballoon-model == VIR_DOMAIN_MEMBALLOON_MODEL_NONE)) {
-info-memory = vm-def-mem.max_balloon;
-} else if (virQEMUCapsGet(priv-qemuCaps, QEMU_CAPS_BALLOON_EVENT)) {
-info-memory = vm-def-mem.cur_balloon;
-} else if (qemuDomainJobAllowed(priv, QEMU_JOB_QUERY)) {
-if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY)  0)
-goto cleanup;
-if (!virDomainObjIsActive(vm))
-err = 0;
-else {
-qemuDomainObjEnterMonitor(driver, vm);
-err = qemuMonitorGetBalloonInfo(priv-mon, balloon);
-qemuDomainObjExitMonitor(driver, vm);
-}
-if (!qemuDomainObjEndJob(driver, vm)) {
-vm = NULL;
-goto cleanup;
-}
-
-if (err  0) {
-/* We couldn't get current memory allocation but that's not
- * a show stopper; we wouldn't get it if there was a job
- * active either
- */
-info-memory = vm-def-mem.cur_balloon;
-} else if (err == 0) {
-/* Balloon not supported, so maxmem is always the allocation */
-info-memory = vm-def-mem.max_balloon;
-} else {
-info-memory = balloon;
-}
-} else {
-info-memory = vm-def-mem.cur_balloon;
-}
+err = qemuDomainGetBalloonMemory(driver, vm, info-memory);
+if (err)
+return err;
 } else {
 info-memory = 0;
 }
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com

[libvirt] [PATCH 05/11] qemu: bulk stats: pass connection to workers

2014-09-02 Thread Francesco Romani
Future patches which will implement more
bulk stats groups for QEMU will need to access
the connection object, so enrich the worker
prototype.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/qemu/qemu_driver.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 39e2c1b..a9f6821 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17282,7 +17282,8 @@ qemuConnectGetDomainCapabilities(virConnectPtr conn,
 
 
 static int
-qemuDomainGetStatsState(virDomainObjPtr dom,
+qemuDomainGetStatsState(virConnectPtr conn ATTRIBUTE_UNUSED,
+virDomainObjPtr dom,
 virDomainStatsRecordPtr record,
 int *maxparams,
 unsigned int privflags ATTRIBUTE_UNUSED)
@@ -17304,9 +17305,9 @@ qemuDomainGetStatsState(virDomainObjPtr dom,
 return 0;
 }
 
-
 typedef int
-(*qemuDomainGetStatsFunc)(virDomainObjPtr dom,
+(*qemuDomainGetStatsFunc)(virConnectPtr conn,
+  virDomainObjPtr dom,
   virDomainStatsRecordPtr record,
   int *maxparams,
   unsigned int flags);
@@ -17367,7 +17368,7 @@ qemuDomainGetStats(virConnectPtr conn,
 
 for (i = 0; qemuDomainGetStatsWorkers[i].func; i++) {
 if (stats  qemuDomainGetStatsWorkers[i].stats) {
-if (qemuDomainGetStatsWorkers[i].func(dom, tmp, maxparams,
+if (qemuDomainGetStatsWorkers[i].func(conn, dom, tmp, maxparams,
   flags)  0)
 goto cleanup;
 }
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCHv2 06/11] qemu: bulk stats: implement CPU stats group

2014-09-02 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_CPU_TOTAL
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/libvirt.c|  8 +++
 src/qemu/qemu_driver.c   | 56 
 3 files changed, 65 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index a64f597..69ad152 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2511,6 +2511,7 @@ struct _virDomainStatsRecord {
 
 typedef enum {
 VIR_DOMAIN_STATS_STATE = (1  0), /* return domain state */
+VIR_DOMAIN_STATS_CPU_TOTAL = (1  1), /* return domain CPU info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index 5d8f01c..c6556ea 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21546,6 +21546,14 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * state.reason - reason for entering given state, returned as int from
  *  virDomain*Reason enum corresponding to given state.
  *
+ * VIR_DOMAIN_STATS_CPU_TOTAL: Return CPU statistics and usage information.
+ * The typed parameter keys are in this format:
+ * cpu.count - number of physical CPUs available to this domain.
+ * cpu.time - total cpu time spent for this domain
+ * cpu.user - user cpu time spent
+ * cpu.system - system cpu time spent
+ *
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index a9f6821..2ced593 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -96,6 +96,7 @@
 #include storage/storage_driver.h
 #include virhostdev.h
 #include domain_capabilities.h
+#include vircgroup.h
 
 #define VIR_FROM_THIS VIR_FROM_QEMU
 
@@ -17305,6 +17306,60 @@ qemuDomainGetStatsState(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 return 0;
 }
 
+
+static int
+qemuDomainGetStatsCpu(virConnectPtr conn ATTRIBUTE_UNUSED,
+  virDomainObjPtr dom,
+  virDomainStatsRecordPtr record,
+  int *maxparams,
+  unsigned int privflags ATTRIBUTE_UNUSED)
+{
+qemuDomainObjPrivatePtr priv = dom-privateData;
+unsigned long long cpu_time = 0;
+unsigned long long user_time = 0;
+unsigned long long sys_time = 0;
+int ncpus = 0;
+
+ncpus = nodeGetCPUCount();
+
+if (virTypedParamsAddInt(record-params,
+ record-nparams,
+ maxparams,
+ cpu.count,
+ ncpus)  0)
+return -1;
+
+if (virCgroupGetCpuacctUsage(priv-cgroup, cpu_time)  0)
+return -1;
+
+if (virCgroupGetCpuacctStat(priv-cgroup, user_time, sys_time)  0)
+return -1;
+
+if (virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+cpu.time,
+cpu_time)  0)
+return -1;
+
+if (virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+cpu.user,
+user_time)  0)
+return -1;
+
+if (virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+cpu.system,
+sys_time)  0)
+return -1;
+
+return 0;
+}
+
+
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
   virDomainObjPtr dom,
@@ -17319,6 +17374,7 @@ struct qemuDomainGetStatsWorker {
 
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE},
+{ qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL },
 { NULL, 0 }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 02/11] qemu: extract helper to gather vcpu data

2014-09-02 Thread Francesco Romani
Extracts an helper to gether the VCpu
information.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/qemu/qemu_driver.c | 29 +
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index bbd16ed..1842e60 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -172,6 +172,12 @@ static int qemuDomainGetBalloonMemory(virQEMUDriverPtr 
driver,
   virDomainObjPtr vm,
   unsigned long *memory);
 
+static int qemuDomainHelperGetVcpus(virDomainObjPtr vm,
+virVcpuInfoPtr info,
+int maxinfo,
+unsigned char *cpumaps,
+int maplen);
+
 virQEMUDriverPtr qemu_driver = NULL;
 
 
@@ -4974,10 +4980,7 @@ qemuDomainGetVcpus(virDomainPtr dom,
int maplen)
 {
 virDomainObjPtr vm;
-size_t i;
-int v, maxcpu, hostcpus;
 int ret = -1;
-qemuDomainObjPrivatePtr priv;
 
 if (!(vm = qemuDomObjFromDomain(dom)))
 goto cleanup;
@@ -4992,7 +4995,25 @@ qemuDomainGetVcpus(virDomainPtr dom,
 goto cleanup;
 }
 
-priv = vm-privateData;
+ret = qemuDomainHelperGetVcpus(vm, info, maxinfo, cpumaps, maplen);
+
+ cleanup:
+if (vm)
+virObjectUnlock(vm);
+return ret;
+}
+
+static int
+qemuDomainHelperGetVcpus(virDomainObjPtr vm,
+ virVcpuInfoPtr info,
+ int maxinfo,
+ unsigned char *cpumaps,
+ int maplen)
+{
+int ret = -1;
+int v, maxcpu, hostcpus;
+size_t i;
+qemuDomainObjPrivatePtr priv = vm-privateData;
 
 if ((hostcpus = nodeGetCPUCount())  0)
 goto cleanup;
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCHv2 10/11] qemu: bulk stats: implement block group

2014-09-02 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_BLOCK
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/libvirt.c| 13 +
 src/qemu/qemu_driver.c   | 65 
 3 files changed, 79 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 33588d6..1d90f5e 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2515,6 +2515,7 @@ typedef enum {
     VIR_DOMAIN_STATS_BALLOON = (1 << 2), /* return domain balloon info */
     VIR_DOMAIN_STATS_VCPU = (1 << 3), /* return domain virtual CPU info */
     VIR_DOMAIN_STATS_INTERFACE = (1 << 4), /* return domain interfaces info */
+    VIR_DOMAIN_STATS_BLOCK = (1 << 5), /* return domain block info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index 099404b..cabfb91 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21579,6 +21579,19 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
 * net.<num>.tx.errs - transmission errors.
 * net.<num>.tx.drop - transmit packets dropped.
  *
+ * VIR_DOMAIN_STATS_BLOCK: Return block devices statistics.
+ * The typed parameter keys are in this format:
+ * block.count - number of block devices on this domain.
+ * block.<num>.name - name of the block device <num>.
+ * block.<num>.rd.reqs - number of read requests.
+ * block.<num>.rd.bytes - number of read bytes.
+ * block.<num>.rd.times - total time (ns) spent on reads.
+ * block.<num>.wr.reqs - number of write requests.
+ * block.<num>.wr.bytes - number of written bytes.
+ * block.<num>.wr.times - total time (ns) spent on writes.
+ * block.<num>.fl.reqs - total flush requests.
+ * block.<num>.fl.times - total time (ns) spent on cache flushing.
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 069a15d..977e8c7 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17542,6 +17542,70 @@ qemuDomainGetStatsInterface(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 
 #undef QEMU_ADD_NET_PARAM
 
+#define QEMU_ADD_BLOCK_PARAM_LL(RECORD, MAXPARAMS, NUM, NAME, VALUE) \
+do { \
+    char param_name[NAME_MAX]; \
+    snprintf(param_name, NAME_MAX, "block.%lu.%s", NUM, NAME); \
+    if (virTypedParamsAddLLong(&RECORD->params, \
+                               &RECORD->nparams, \
+                               MAXPARAMS, \
+                               param_name, \
+                               VALUE) < 0) \
+        goto cleanup; \
+} while (0)
+
+static int
+qemuDomainGetStatsBlock(virConnectPtr conn ATTRIBUTE_UNUSED,
+virDomainObjPtr dom,
+virDomainStatsRecordPtr record,
+int *maxparams,
+unsigned int privflags ATTRIBUTE_UNUSED)
+{
+size_t i;
+int ret = -1;
+int nstats = dom-def-ndisks;
+struct qemuBlockStats *stats = NULL;
+virQEMUDriverPtr driver = conn-privateData;
+
+if (VIR_ALLOC_N(stats, nstats)  0)
+return -1;
+
+if (qemuDomainGetBlockStats(driver, dom, stats, nstats) != nstats)
+goto cleanup;
+
+for (i = 0; i  dom-def-ndisks; i++) {
+QEMU_ADD_COUNT_PARAM(record, maxparams, block, dom-def-ndisks);
+
+QEMU_ADD_NAME_PARAM(record, maxparams,
+block, i, dom-def-disks[i]-dst);
+
+QEMU_ADD_BLOCK_PARAM_LL(record, maxparams, i,
+rd.reqs, stats[i].rd_req);
+QEMU_ADD_BLOCK_PARAM_LL(record, maxparams, i,
+rd.bytes, stats[i].rd_bytes);
+QEMU_ADD_BLOCK_PARAM_LL(record, maxparams, i,
+rd.times, stats[i].rd_total_times);
+QEMU_ADD_BLOCK_PARAM_LL(record, maxparams, i,
+wr.reqs, stats[i].wr_req);
+QEMU_ADD_BLOCK_PARAM_LL(record, maxparams, i,
+wr.bytes, stats[i].wr_bytes);
+QEMU_ADD_BLOCK_PARAM_LL(record, maxparams, i,
+wr.times, stats[i].wr_total_times);
+QEMU_ADD_BLOCK_PARAM_LL(record, maxparams, i,
+fl.reqs, stats[i].flush_req);
+QEMU_ADD_BLOCK_PARAM_LL(record, maxparams, i,
+fl.times, stats[i].flush_total_times);
+}
+
+ret = 0;
+
+ cleanup:
+VIR_FREE(stats);
+return ret;
+}
+
+#undef QEMU_ADD_BLOCK_PARAM
+
 #undef QEMU_ADD_NAME_PARAM
 
 #undef QEMU_ADD_COUNT_PARAM
@@ -17564,6 +17628,7 @@ static struct qemuDomainGetStatsWorker 
qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsBalloon, VIR_DOMAIN_STATS_BALLOON },
 { qemuDomainGetStatsVcpu, VIR_DOMAIN_STATS_VCPU },
 { qemuDomainGetStatsInterface, VIR_DOMAIN_STATS_INTERFACE },
+{ qemuDomainGetStatsBlock, VIR_DOMAIN_STATS_BLOCK },
 { NULL, 0 }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list

[libvirt] [PATCHv2 09/11] qemu: bulk stats: implement interface group

2014-09-02 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_INTERFACE
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/libvirt.c| 13 +++
 src/qemu/qemu_driver.c   | 85 
 3 files changed, 99 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 46f4067..33588d6 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2514,6 +2514,7 @@ typedef enum {
     VIR_DOMAIN_STATS_CPU_TOTAL = (1 << 1), /* return domain CPU info */
     VIR_DOMAIN_STATS_BALLOON = (1 << 2), /* return domain balloon info */
     VIR_DOMAIN_STATS_VCPU = (1 << 3), /* return domain virtual CPU info */
+    VIR_DOMAIN_STATS_INTERFACE = (1 << 4), /* return domain interfaces info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index a5942bc..099404b 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21566,6 +21566,19 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * vcpu.num.time - virtual cpu time spent by virtual CPU num
  * vcpu.num.cpu - physical CPU pinned to virtual CPU num
  *
+ * VIR_DOMAIN_STATS_INTERFACE: Return network interface statistics.
+ * The typed parameter keys are in this format:
+ * net.count - number of network interfaces on this domain.
+ * net.<num>.name - name of the interface <num>.
+ * net.<num>.rx.bytes - bytes received.
+ * net.<num>.rx.pkts - packets received.
+ * net.<num>.rx.errs - receive errors.
+ * net.<num>.rx.drop - receive packets dropped.
+ * net.<num>.tx.bytes - bytes transmitted.
+ * net.<num>.tx.pkts - packets transmitted.
+ * net.<num>.tx.errs - transmission errors.
+ * net.<num>.tx.drop - transmit packets dropped.
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 72ec284..069a15d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17461,6 +17461,90 @@ qemuDomainGetStatsVcpu(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 return ret;
 }
 
+#define QEMU_ADD_COUNT_PARAM(RECORD, MAXPARAMS, TYPE, COUNT) \
+do { \
+    char param_name[NAME_MAX]; \
+    snprintf(param_name, NAME_MAX, "%s.count", TYPE); \
+    if (virTypedParamsAddUInt(&RECORD->params, \
+                              &RECORD->nparams, \
+                              MAXPARAMS, \
+                              param_name, \
+                              COUNT) < 0) \
+        return -1; \
+} while (0)
+
+#define QEMU_ADD_NAME_PARAM(RECORD, MAXPARAMS, TYPE, NUM, NAME) \
+do { \
+    char param_name[NAME_MAX]; \
+    snprintf(param_name, NAME_MAX, "%s.%lu.name", TYPE, NUM); \
+    if (virTypedParamsAddString(&RECORD->params, \
+                                &RECORD->nparams, \
+                                MAXPARAMS, \
+                                param_name, \
+                                NAME) < 0) \
+        return -1; \
+} while (0)
+
+#define QEMU_ADD_NET_PARAM(RECORD, MAXPARAMS, NUM, NAME, VALUE) \
+do { \
+    char param_name[NAME_MAX]; \
+    snprintf(param_name, NAME_MAX, "net.%lu.%s", NUM, NAME); \
+    if (virTypedParamsAddLLong(&RECORD->params, \
+                               &RECORD->nparams, \
+                               MAXPARAMS, \
+                               param_name, \
+                               VALUE) < 0) \
+        return -1; \
+} while (0)
+
+static int
+qemuDomainGetStatsInterface(virConnectPtr conn ATTRIBUTE_UNUSED,
+virDomainObjPtr dom,
+virDomainStatsRecordPtr record,
+int *maxparams,
+unsigned int privflags ATTRIBUTE_UNUSED)
+{
+size_t i;
+struct _virDomainInterfaceStats tmp;
+
+/* Check the path is one of the domain's network interfaces. */
+for (i = 0; i  dom-def-nnets; i++) {
+memset(tmp, 0, sizeof(tmp));
+
+if (virNetInterfaceStats(dom-def-nets[i]-ifname, tmp)  0)
+continue;
+
+QEMU_ADD_COUNT_PARAM(record, maxparams, net, dom-def-nnets);
+
+QEMU_ADD_NAME_PARAM(record, maxparams,
+net, i, dom-def-nets[i]-ifname);
+
+QEMU_ADD_NET_PARAM(record, maxparams, i,
+   rx.bytes, tmp.rx_bytes);
+QEMU_ADD_NET_PARAM(record, maxparams, i,
+   rx.pkts, tmp.rx_packets);
+QEMU_ADD_NET_PARAM(record, maxparams, i,
+   rx.errs, tmp.rx_errs);
+QEMU_ADD_NET_PARAM(record, maxparams, i,
+   rx.drop, tmp.rx_drop);
+QEMU_ADD_NET_PARAM(record, maxparams, i,
+   tx.bytes, tmp.tx_bytes);
+QEMU_ADD_NET_PARAM(record, maxparams, i,
+   tx.pkts, tmp.tx_packets);
+QEMU_ADD_NET_PARAM(record, maxparams, i,
+   tx.errs, tmp.tx_errs);
+QEMU_ADD_NET_PARAM(record, maxparams, i

[libvirt] [PATCHv2 08/11] qemu: bulk stats: implement VCPU group

2014-09-02 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_VCPU
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/libvirt.c|  8 +
 src/qemu/qemu_driver.c   | 72 
 3 files changed, 81 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 7ec57cd..46f4067 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2513,6 +2513,7 @@ typedef enum {
     VIR_DOMAIN_STATS_STATE = (1 << 0), /* return domain state */
     VIR_DOMAIN_STATS_CPU_TOTAL = (1 << 1), /* return domain CPU info */
     VIR_DOMAIN_STATS_BALLOON = (1 << 2), /* return domain balloon info */
+    VIR_DOMAIN_STATS_VCPU = (1 << 3), /* return domain virtual CPU info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index 8b0f589..a5942bc 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21558,6 +21558,14 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * balloon.current - the memory in KBytes currently used
  * balloon.maximum - the maximum memory in KBytes allowed
  *
+ * VIR_DOMAIN_STATS_VCPU: Return virtual CPU statistics.
+ * The typed parameter keys are in this format:
+ * vcpu.current - current number of online virtual CPUs
+ * vcpu.maximum - maximum number of online virtual CPUs
+ * vcpu.<num>.state - state of the virtual CPU <num>
+ * vcpu.<num>.time - virtual CPU time spent by virtual CPU <num>
+ * vcpu.<num>.cpu - physical CPU pinned to virtual CPU <num>
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 98f1a31..72ec284 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17391,6 +17391,77 @@ qemuDomainGetStatsBalloon(virConnectPtr conn,
 return 0;
 }
 
+
+static int
+qemuDomainGetStatsVcpu(virConnectPtr conn ATTRIBUTE_UNUSED,
+   virDomainObjPtr dom,
+   virDomainStatsRecordPtr record,
+   int *maxparams,
+   unsigned int privflags ATTRIBUTE_UNUSED)
+{
+size_t i;
+int ret = -1;
+char param_name[NAME_MAX];
+virVcpuInfoPtr cpuinfo = NULL;
+
+if (virTypedParamsAddInt(record-params,
+ record-nparams,
+ maxparams,
+ vcpu.current,
+ dom-def-vcpus)  0)
+return -1;
+
+if (virTypedParamsAddInt(record-params,
+ record-nparams,
+ maxparams,
+ vcpu.maximum,
+ dom-def-maxvcpus)  0)
+return -1;
+
+if (VIR_ALLOC_N(cpuinfo, dom-def-vcpus)  0)
+return -1;
+
+if ((ret = qemuDomainHelperGetVcpus(dom,
+cpuinfo,
+dom-def-vcpus,
+NULL,
+0))  0)
+goto cleanup;
+
+for (i = 0; i  dom-def-vcpus; i++) {
+snprintf(param_name, NAME_MAX, vcpu.%u.state, cpuinfo[i].number);
+if (virTypedParamsAddInt(record-params,
+ record-nparams,
+ maxparams,
+ param_name,
+ cpuinfo[i].state)  0)
+goto cleanup;
+
+snprintf(param_name, NAME_MAX, vcpu.%u.time, cpuinfo[i].number);
+if (virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+param_name,
+cpuinfo[i].cpuTime)  0)
+goto cleanup;
+
+snprintf(param_name, NAME_MAX, vcpu.%u.cpu, cpuinfo[i].number);
+if (virTypedParamsAddInt(record-params,
+ record-nparams,
+ maxparams,
+ param_name,
+ cpuinfo[i].cpu)  0)
+goto cleanup;
+}
+
+ret = 0;
+
+ cleanup:
+VIR_FREE(cpuinfo);
+return ret;
+}
+
+
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
   virDomainObjPtr dom,
@@ -17407,6 +17478,7 @@ static struct qemuDomainGetStatsWorker 
qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE},
 { qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL },
 { qemuDomainGetStatsBalloon, VIR_DOMAIN_STATS_BALLOON },
+{ qemuDomainGetStatsVcpu, VIR_DOMAIN_STATS_VCPU },
 { NULL, 0 }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
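
The flat key encoding above means a client has to regroup the per-vCPU entries
itself. A minimal sketch of that regrouping, assuming a record already obtained
through the bulk stats call proposed in this series (the 32-byte key suffix
buffer is just an illustrative choice):

  /* Walk one record and regroup the "vcpu.<num>.*" keys per virtual CPU. */
  #include <libvirt/libvirt.h>
  #include <stdio.h>

  static void
  print_vcpu_stats(virDomainStatsRecordPtr rec)
  {
      size_t i;

      for (i = 0; i < rec->nparams; i++) {
          virTypedParameterPtr p = &rec->params[i];
          unsigned int num;
          char what[32];

          /* keys look like "vcpu.current", "vcpu.3.state", "vcpu.3.time", ... */
          if (sscanf(p->field, "vcpu.%u.%31s", &num, what) != 2)
              continue; /* skips "vcpu.current" and "vcpu.maximum" */

          if (p->type == VIR_TYPED_PARAM_ULLONG)
              printf("vcpu %u: %s = %llu\n", num, what, p->value.ul);
          else if (p->type == VIR_TYPED_PARAM_INT)
              printf("vcpu %u: %s = %d\n", num, what, p->value.i);
      }
  }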


[libvirt] [PATCHv2 07/11] qemu: bulk stats: implement balloon group

2014-09-02 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_BALLOON
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/libvirt.c|  4 
 src/qemu/qemu_driver.c   | 32 
 3 files changed, 37 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 69ad152..7ec57cd 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2512,6 +2512,7 @@ struct _virDomainStatsRecord {
 typedef enum {
     VIR_DOMAIN_STATS_STATE = (1 << 0), /* return domain state */
     VIR_DOMAIN_STATS_CPU_TOTAL = (1 << 1), /* return domain CPU info */
+    VIR_DOMAIN_STATS_BALLOON = (1 << 2), /* return domain balloon info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index c6556ea..8b0f589 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21553,6 +21553,10 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * cpu.user - user cpu time spent
  * cpu.system - system cpu time spent
  *
+ * VIR_DOMAIN_STATS_BALLOON: Return memory balloon device information.
+ * The typed parameter keys are in this format:
+ * balloon.current - the memory in KBytes currently used
+ * balloon.maximum - the maximum memory in KBytes allowed
  *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 2ced593..98f1a31 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17359,6 +17359,37 @@ qemuDomainGetStatsCpu(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 return 0;
 }
 
+static int
+qemuDomainGetStatsBalloon(virConnectPtr conn,
+  virDomainObjPtr dom,
+  virDomainStatsRecordPtr record,
+  int *maxparams,
+  unsigned int privflags ATTRIBUTE_UNUSED)
+{
+virQEMUDriverPtr driver = conn-privateData;
+unsigned long cur_balloon = 0;
+int err = 0;
+
+err = qemuDomainGetBalloonMemory(driver, dom, cur_balloon);
+if (err)
+return -1;
+
+if (virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+balloon.current,
+cur_balloon)  0)
+return -1;
+
+if (virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+balloon.maximum,
+dom-def-mem.max_balloon)  0)
+return -1;
+
+return 0;
+}
 
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
@@ -17375,6 +17406,7 @@ struct qemuDomainGetStatsWorker {
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE},
 { qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL },
+{ qemuDomainGetStatsBalloon, VIR_DOMAIN_STATS_BALLOON },
 { NULL, 0 }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 11/11] qemu: bulk stats: add block allocation information

2014-09-02 Thread Francesco Romani
Management software wants to be able to allocate disk space on demand.
To support this, it needs to keep track of the space occupation
of the block device.
This information is reported by QEMU as part of the block stats.

This patch extends the block information in the bulk stats with
the allocation information, in order to save a call to the QEMU
monitor.
---
 src/libvirt.c  |  1 +
 src/qemu/qemu_driver.c | 15 +++
 2 files changed, 16 insertions(+)

diff --git a/src/libvirt.c b/src/libvirt.c
index cabfb91..81d71be 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21591,6 +21591,7 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * block.num.wr.times - total time (ns) spent on writes.
  * block.num.fl.reqs - total flush requests
  * block.num.fl.times - total time (ns) spent on cache flushing
+ * block.<num>.allocation - offset of the highest written sector.
  *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 977e8c7..3fb54db 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17554,6 +17554,18 @@ do { \
 goto cleanup; \
 } while (0)
 
+#define QEMU_ADD_BLOCK_PARAM_ULL(RECORD, MAXPARAMS, NUM, NAME, VALUE) \
+do { \
+    char param_name[NAME_MAX]; \
+    snprintf(param_name, NAME_MAX, "block.%lu.%s", NUM, NAME); \
+    if (virTypedParamsAddULLong(&RECORD->params, \
+                                &RECORD->nparams, \
+                                MAXPARAMS, \
+                                param_name, \
+                                VALUE) < 0) \
+        goto cleanup; \
+} while (0)
+
 static int
 qemuDomainGetStatsBlock(virConnectPtr conn ATTRIBUTE_UNUSED,
 virDomainObjPtr dom,
@@ -17595,6 +17607,9 @@ qemuDomainGetStatsBlock(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 fl.reqs, stats[i].flush_req);
 QEMU_ADD_BLOCK_PARAM_LL(record, maxparams, i,
 fl.times, stats[i].flush_total_times);
+
+QEMU_ADD_BLOCK_PARAM_ULL(record, maxparams, i,
+ allocation, stats[i].wr_highest_offset);
 }
 
 ret = 0;
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
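
The allocation key added by this patch is exactly what a watermark-style monitor
consumes for thin-provisioned storage. A minimal sketch of such a check, assuming
the virConnectGetAllDomainStats() call this series builds on; the fixed threshold,
the "block.0" index and the printf reaction are illustrative placeholders:

  #include <libvirt/libvirt.h>
  #include <stdio.h>
  #include <string.h>

  /* illustrative high watermark: react once the highest written offset of the
   * first disk passes 18 GiB */
  #define HIGH_WATERMARK (18ULL << 30)

  static void
  check_allocation(virConnectPtr conn)
  {
      virDomainStatsRecordPtr *records = NULL;
      int nrecords, r;
      size_t i;

      nrecords = virConnectGetAllDomainStats(conn, VIR_DOMAIN_STATS_BLOCK,
                                             &records, 0);
      if (nrecords < 0)
          return;

      for (r = 0; r < nrecords; r++) {
          for (i = 0; i < records[r]->nparams; i++) {
              virTypedParameterPtr p = &records[r]->params[i];

              if (strcmp(p->field, "block.0.allocation") == 0 &&
                  p->value.ul > HIGH_WATERMARK)
                  printf("%s: disk 0 passed the watermark, extend it\n",
                         virDomainGetName(records[r]->dom));
          }
      }
      virDomainStatsRecordListFree(records);
  }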


[libvirt] [PATCHv2 03/11] qemu: add helper to get the block stats

2014-09-02 Thread Francesco Romani
Add a helper function to get the block stats
of a disk.
This helper is meant to be used by the bulk stats API.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/qemu/qemu_driver.c   |  41 +++
 src/qemu/qemu_monitor.c  |  23 +
 src/qemu/qemu_monitor.h  |  18 +++
 src/qemu/qemu_monitor_json.c | 118 +--
 src/qemu/qemu_monitor_json.h |   4 ++
 5 files changed, 165 insertions(+), 39 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 1842e60..39e2c1b 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -178,6 +178,12 @@ static int qemuDomainHelperGetVcpus(virDomainObjPtr vm,
 unsigned char *cpumaps,
 int maplen);
 
+static int qemuDomainGetBlockStats(virQEMUDriverPtr driver,
+   virDomainObjPtr vm,
+   struct qemuBlockStats *stats,
+   int nstats);
+
+
 virQEMUDriverPtr qemu_driver = NULL;
 
 
@@ -9672,6 +9678,41 @@ qemuDomainBlockStats(virDomainPtr dom,
 return ret;
 }
 
+
+/*
+ * returns at most the first `nstats' stats, then stops.
+ * Returns the number of stats filled.
+ */
+static int
+qemuDomainGetBlockStats(virQEMUDriverPtr driver,
+virDomainObjPtr vm,
+struct qemuBlockStats *stats,
+int nstats)
+{
+int ret = -1;
+qemuDomainObjPrivatePtr priv;
+
+if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY)  0)
+goto cleanup;
+
+priv = vm-privateData;
+
+qemuDomainObjEnterMonitor(driver, vm);
+
+ret = qemuMonitorGetAllBlockStatsInfo(priv-mon, NULL,
+  stats, nstats);
+
+qemuDomainObjExitMonitor(driver, vm);
+
+if (!qemuDomainObjEndJob(driver, vm))
+vm = NULL;
+
+ cleanup:
+if (vm)
+virObjectUnlock(vm);
+return ret;
+}
+
 static int
 qemuDomainBlockStatsFlags(virDomainPtr dom,
   const char *path,
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 5b2952a..8aadba5 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -1754,6 +1754,29 @@ int qemuMonitorGetBlockStatsInfo(qemuMonitorPtr mon,
 return ret;
 }
 
+int qemuMonitorGetAllBlockStatsInfo(qemuMonitorPtr mon,
+const char *dev_name,
+struct qemuBlockStats *stats,
+int nstats)
+{
+int ret;
+VIR_DEBUG(mon=%p dev=%s, mon, dev_name);
+
+if (!mon) {
+virReportError(VIR_ERR_INVALID_ARG, %s,
+   _(monitor must not be NULL));
+return -1;
+}
+
+if (mon-json)
+ret = qemuMonitorJSONGetAllBlockStatsInfo(mon, dev_name,
+  stats, nstats);
+else
+ret = -1; /* not supported */
+
+return ret;
+}
+
 /* Return 0 and update @nparams with the number of block stats
  * QEMU supports if success. Return -1 if failure.
  */
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 4fd6f01..1b7d00b 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -346,6 +346,24 @@ int qemuMonitorGetBlockStatsInfo(qemuMonitorPtr mon,
  long long *flush_req,
  long long *flush_total_times,
  long long *errs);
+
+struct qemuBlockStats {
+long long rd_req;
+long long rd_bytes;
+long long wr_req;
+long long wr_bytes;
+long long rd_total_times;
+long long wr_total_times;
+long long flush_req;
+long long flush_total_times;
+long long errs; /* meaningless for QEMU */
+};
+
+int qemuMonitorGetAllBlockStatsInfo(qemuMonitorPtr mon,
+const char *dev_name,
+struct qemuBlockStats *stats,
+int nstats);
+
 int qemuMonitorGetBlockStatsParamsNumber(qemuMonitorPtr mon,
  int *nparams);
 
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 62e7d5d..68c5cf8 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -1712,13 +1712,9 @@ int qemuMonitorJSONGetBlockStatsInfo(qemuMonitorPtr mon,
  long long *flush_total_times,
  long long *errs)
 {
-int ret;
-size_t i;
-bool found = false;
-virJSONValuePtr cmd = qemuMonitorJSONMakeCommand(query-blockstats,
- NULL);
-virJSONValuePtr reply = NULL;
-virJSONValuePtr devices;
+struct qemuBlockStats stats;
+int nstats = 1;
+int ret = -1;
 
 *rd_req = *rd_bytes = -1;
 *wr_req = *wr_bytes = *errs = -1

Re: [libvirt] [PATCH 10/11] qemu: bulk stats: implement block group

2014-09-01 Thread Francesco Romani


- Original Message -
 From: Li Wei l...@cn.fujitsu.com
 To: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Monday, September 1, 2014 7:32:37 AM
 Subject: Re: [libvirt] [PATCH 10/11] qemu: bulk stats: implement block group
 
 Hi Francesco,
 
 I notice your patchset is much more complete than mine, which only focuses on
 VIR_DOMAIN_STATS_BLOCK[1], but it seems your patch implements block stats
 queries in a per-block style; this could become a bottleneck when there are
 a lot of block devices in a domain.
 
 Could you implement it in a bulk style, so we need only one QMP command
 for each domain?
 
 [1]: https://www.redhat.com/archives/libvir-list/2014-August/msg01497.html


Hi Li Wei,

Thanks for pointing this out. Performance is surely a major concern of mine,
so I'll improve my patch in the direction you outlined.

I read your patch (actually I like some parts of your patch more than mine!)
but I must have somehow missed that point.

I'll wait for more reviews before resubmitting.

Thanks and bests,

-- 
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 01/11] qemu: extract helper to get the current balloon

2014-08-29 Thread Francesco Romani
Refactor the code to extract a helper method
to get the current balloon settings.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/qemu/qemu_driver.c | 98 ++
 1 file changed, 60 insertions(+), 38 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 239a300..bbd16ed 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -168,6 +168,9 @@ static int qemuOpenFileAs(uid_t fallback_uid, gid_t 
fallback_gid,
   const char *path, int oflags,
   bool *needUnlink, bool *bypassSecurityDriver);
 
+static int qemuDomainGetBalloonMemory(virQEMUDriverPtr driver,
+  virDomainObjPtr vm,
+  unsigned long *memory);
 
 virQEMUDriverPtr qemu_driver = NULL;
 
@@ -2519,6 +2522,60 @@ static int qemuDomainSendKey(virDomainPtr domain,
 return ret;
 }
 
+static int qemuDomainGetBalloonMemory(virQEMUDriverPtr driver,
+  virDomainObjPtr vm,
+  unsigned long *memory)
+{
+int ret = -1;
+int err = 0;
+qemuDomainObjPrivatePtr priv = vm-privateData;
+
+if ((vm-def-memballoon != NULL) 
+(vm-def-memballoon-model == VIR_DOMAIN_MEMBALLOON_MODEL_NONE)) {
+*memory = vm-def-mem.max_balloon;
+} else if (virQEMUCapsGet(priv-qemuCaps, QEMU_CAPS_BALLOON_EVENT)) {
+*memory = vm-def-mem.cur_balloon;
+} else if (qemuDomainJobAllowed(priv, QEMU_JOB_QUERY)) {
+unsigned long long balloon;
+
+if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY)  0)
+goto cleanup;
+if (!virDomainObjIsActive(vm))
+err = 0;
+else {
+qemuDomainObjEnterMonitor(driver, vm);
+err = qemuMonitorGetBalloonInfo(priv-mon, balloon);
+qemuDomainObjExitMonitor(driver, vm);
+}
+if (!qemuDomainObjEndJob(driver, vm)) {
+vm = NULL;
+goto cleanup;
+}
+
+if (err  0) {
+/* We couldn't get current memory allocation but that's not
+ * a show stopper; we wouldn't get it if there was a job
+ * active either
+ */
+*memory = vm-def-mem.cur_balloon;
+} else if (err == 0) {
+/* Balloon not supported, so maxmem is always the allocation */
+*memory = vm-def-mem.max_balloon;
+} else {
+*memory = balloon;
+}
+} else {
+*memory = vm-def-mem.cur_balloon;
+}
+
+ret = 0;
+
+ cleanup:
+if (vm)
+virObjectUnlock(vm);
+return ret;
+}
+
 static int qemuDomainGetInfo(virDomainPtr dom,
  virDomainInfoPtr info)
 {
@@ -2526,7 +2583,6 @@ static int qemuDomainGetInfo(virDomainPtr dom,
 virDomainObjPtr vm;
 int ret = -1;
 int err;
-unsigned long long balloon;
 
 if (!(vm = qemuDomObjFromDomain(dom)))
 goto cleanup;
@@ -2549,43 +2605,9 @@ static int qemuDomainGetInfo(virDomainPtr dom,
 info-maxMem = vm-def-mem.max_balloon;
 
 if (virDomainObjIsActive(vm)) {
-qemuDomainObjPrivatePtr priv = vm-privateData;
-
-if ((vm-def-memballoon != NULL) 
-(vm-def-memballoon-model == VIR_DOMAIN_MEMBALLOON_MODEL_NONE)) {
-info-memory = vm-def-mem.max_balloon;
-} else if (virQEMUCapsGet(priv-qemuCaps, QEMU_CAPS_BALLOON_EVENT)) {
-info-memory = vm-def-mem.cur_balloon;
-} else if (qemuDomainJobAllowed(priv, QEMU_JOB_QUERY)) {
-if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY)  0)
-goto cleanup;
-if (!virDomainObjIsActive(vm))
-err = 0;
-else {
-qemuDomainObjEnterMonitor(driver, vm);
-err = qemuMonitorGetBalloonInfo(priv-mon, balloon);
-qemuDomainObjExitMonitor(driver, vm);
-}
-if (!qemuDomainObjEndJob(driver, vm)) {
-vm = NULL;
-goto cleanup;
-}
-
-if (err  0) {
-/* We couldn't get current memory allocation but that's not
- * a show stopper; we wouldn't get it if there was a job
- * active either
- */
-info-memory = vm-def-mem.cur_balloon;
-} else if (err == 0) {
-/* Balloon not supported, so maxmem is always the allocation */
-info-memory = vm-def-mem.max_balloon;
-} else {
-info-memory = balloon;
-}
-} else {
-info-memory = vm-def-mem.cur_balloon;
-}
+err = qemuDomainGetBalloonMemory(driver, vm, info-memory);
+if (err)
+return err;
 } else {
 info-memory = 0;
 }
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com

[libvirt] [PATCH 00/11] bulk stats: QEMU implementation

2014-08-29 Thread Francesco Romani
This patchset enhances the QEMU support for the new bulk stats API.
What is added is the equivalent of these APIs:

virDomainBlockInfo
virDomainGetInfo - for balloon stats
virDomainGetCPUStats
virDomainBlockStatsFlags
virDomainInterfaceStats
virDomainGetVcpusFlags
virDomainGetVcpus

This subset of APIs is the one oVirt relies on.

The patchset is organized as follows:
- the first 4 patches are refactorings that extract internal helper
  functions to be used both by the old APIs and by the new bulk one.
  For block stats one helper is actually added instead of extracted.
- since some groups require access to the QEMU monitor, one patch
  extends the internal interface to easily accommodate that
- finally, the last six patches implement the support for the
  bulk API.

Francesco Romani (11):
  qemu: extract helper to get the current balloon
  qemu: extract helper to gather vcpu data
  qemu: add helper to get the block stats
  qemu: extract helper to get block info
  qemu: bulk stats: pass connection to workers
  qemu: bulk stats: implement CPU stats group
  qemu: bulk stats: implement balloon group
  qemu: bulk stats: implement VCPU group
  qemu: bulk stats: implement interface group
  qemu: bulk stats: implement block group
  qemu: bulk stats: implement blockinfo group

 include/libvirt/libvirt.h.in |   6 +
 src/qemu/qemu_driver.c   | 558 ++-
 2 files changed, 502 insertions(+), 62 deletions(-)

-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
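
For comparison with the API list above, this is roughly what the series lets a
client do instead: one bulk request per polling cycle covering every domain and
every group at once. A sketch only; the flags are the group bits introduced by
the individual patches (the separate blockinfo bit proposed in patch 11 is left
out here, since the later revision folds the allocation data into the block
group):

  #include <libvirt/libvirt.h>

  static int
  poll_all_domains(virConnectPtr conn)
  {
      unsigned int stats = VIR_DOMAIN_STATS_STATE |
                           VIR_DOMAIN_STATS_CPU_TOTAL |
                           VIR_DOMAIN_STATS_BALLOON |
                           VIR_DOMAIN_STATS_VCPU |
                           VIR_DOMAIN_STATS_INTERFACE |
                           VIR_DOMAIN_STATS_BLOCK;
      virDomainStatsRecordPtr *records = NULL;
      int nrecords = virConnectGetAllDomainStats(conn, stats, &records, 0);

      if (nrecords < 0)
          return -1;

      /* one record per domain; each record carries a flat list of
       * "group.key" typed parameters for that domain */
      virDomainStatsRecordListFree(records);
      return nrecords;
  }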


[libvirt] [PATCH 02/11] qemu: extract helper to gather vcpu data

2014-08-29 Thread Francesco Romani
Extract a helper to gather the vCPU
information.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/qemu/qemu_driver.c | 29 +
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index bbd16ed..1842e60 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -172,6 +172,12 @@ static int qemuDomainGetBalloonMemory(virQEMUDriverPtr 
driver,
   virDomainObjPtr vm,
   unsigned long *memory);
 
+static int qemuDomainHelperGetVcpus(virDomainObjPtr vm,
+virVcpuInfoPtr info,
+int maxinfo,
+unsigned char *cpumaps,
+int maplen);
+
 virQEMUDriverPtr qemu_driver = NULL;
 
 
@@ -4974,10 +4980,7 @@ qemuDomainGetVcpus(virDomainPtr dom,
int maplen)
 {
 virDomainObjPtr vm;
-size_t i;
-int v, maxcpu, hostcpus;
 int ret = -1;
-qemuDomainObjPrivatePtr priv;
 
 if (!(vm = qemuDomObjFromDomain(dom)))
 goto cleanup;
@@ -4992,7 +4995,25 @@ qemuDomainGetVcpus(virDomainPtr dom,
 goto cleanup;
 }
 
-priv = vm-privateData;
+ret = qemuDomainHelperGetVcpus(vm, info, maxinfo, cpumaps, maplen);
+
+ cleanup:
+if (vm)
+virObjectUnlock(vm);
+return ret;
+}
+
+static int
+qemuDomainHelperGetVcpus(virDomainObjPtr vm,
+ virVcpuInfoPtr info,
+ int maxinfo,
+ unsigned char *cpumaps,
+ int maplen)
+{
+int ret = -1;
+int v, maxcpu, hostcpus;
+size_t i;
+qemuDomainObjPrivatePtr priv = vm-privateData;
 
 if ((hostcpus = nodeGetCPUCount())  0)
 goto cleanup;
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 04/11] qemu: extract helper to get block info

2014-08-29 Thread Francesco Romani
Extract qemuDiskGetBlockInfo helper.
This way, the very same code will be used both
by the existing qemuDomainGetBlockInfo API and by
the new bulk stats API.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/qemu/qemu_driver.c | 54 ++
 1 file changed, 37 insertions(+), 17 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index e7dd5ed..ee0a576 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -195,6 +195,12 @@ static int qemuDiskGetBlockStats(virQEMUDriverPtr driver,
  virDomainDiskDefPtr disk,
  struct qemuBlockStats *stats);
 
+static int qemuDiskGetBlockInfo(virQEMUDriverPtr driver,
+virDomainObjPtr vm,
+virDomainDiskDefPtr disk,
+const char *path,
+virDomainBlockInfoPtr info);
+
 
 virQEMUDriverPtr qemu_driver = NULL;
 
@@ -10451,29 +10457,16 @@ qemuDomainGetBlockInfo(virDomainPtr dom,
virDomainBlockInfoPtr info,
unsigned int flags)
 {
-virQEMUDriverPtr driver = dom-conn-privateData;
-virDomainObjPtr vm;
-int ret = -1;
-int fd = -1;
-off_t end;
-virStorageSourcePtr meta = NULL;
-virDomainDiskDefPtr disk = NULL;
-struct stat sb;
 int idx;
-int format;
-int activeFail = false;
-virQEMUDriverConfigPtr cfg = NULL;
-char *alias = NULL;
-char *buf = NULL;
-ssize_t len;
+int ret = -1;
+virQEMUDriverPtr driver = dom-conn-privateData;
+virDomainObjPtr vm = NULL;
 
 virCheckFlags(0, -1);
 
 if (!(vm = qemuDomObjFromDomain(dom)))
 return -1;
 
-cfg = virQEMUDriverGetConfig(driver);
-
 if (virDomainGetBlockInfoEnsureACL(dom-conn, vm-def)  0)
 goto cleanup;
 
@@ -10489,7 +10482,34 @@ qemuDomainGetBlockInfo(virDomainPtr dom,
 goto cleanup;
 }
 
-disk = vm-def-disks[idx];
+ret = qemuDiskGetBlockInfo(driver, vm, vm-def-disks[idx], path, info);
+
+ cleanup:
+virObjectUnlock(vm);
+return ret;
+}
+
+
+static int
+qemuDiskGetBlockInfo(virQEMUDriverPtr driver,
+ virDomainObjPtr vm,
+ virDomainDiskDefPtr disk,
+ const char *path,
+ virDomainBlockInfoPtr info)
+{
+int ret = -1;
+int fd = -1;
+off_t end;
+virQEMUDriverConfigPtr cfg = NULL;
+virStorageSourcePtr meta = NULL;
+struct stat sb;
+int format;
+int activeFail = false;
+char *alias = NULL;
+char *buf = NULL;
+ssize_t len;
+
+cfg = virQEMUDriverGetConfig(driver);
 
 if (virStorageSourceIsLocalStorage(disk-src)) {
 if (!disk-src-path) {
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 05/11] qemu: bulk stats: pass connection to workers

2014-08-29 Thread Francesco Romani
Future patches which will implement more
bulk stats groups for QEMU will need to access
the connection object, so enrich the worker
prototype.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/qemu/qemu_driver.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ee0a576..d4eda06 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17320,7 +17320,8 @@ qemuConnectGetDomainCapabilities(virConnectPtr conn,
 
 
 static int
-qemuDomainGetStatsState(virDomainObjPtr dom,
+qemuDomainGetStatsState(virConnectPtr conn ATTRIBUTE_UNUSED,
+virDomainObjPtr dom,
 virDomainStatsRecordPtr record,
 int *maxparams,
 unsigned int privflags ATTRIBUTE_UNUSED)
@@ -17342,9 +17343,9 @@ qemuDomainGetStatsState(virDomainObjPtr dom,
 return 0;
 }
 
-
 typedef int
-(*qemuDomainGetStatsFunc)(virDomainObjPtr dom,
+(*qemuDomainGetStatsFunc)(virConnectPtr conn,
+  virDomainObjPtr dom,
   virDomainStatsRecordPtr record,
   int *maxparams,
   unsigned int flags);
@@ -17405,7 +17406,7 @@ qemuDomainGetStats(virConnectPtr conn,
 
 for (i = 0; qemuDomainGetStatsWorkers[i].func; i++) {
 if (stats  qemuDomainGetStatsWorkers[i].stats) {
-if (qemuDomainGetStatsWorkers[i].func(dom, tmp, maxparams,
+if (qemuDomainGetStatsWorkers[i].func(conn, dom, tmp, maxparams,
   flags)  0)
 goto cleanup;
 }
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 03/11] qemu: add helper to get the block stats

2014-08-29 Thread Francesco Romani
Add a helper function to get the block stats
of a disk.
This helper is meant to be used by the bulk stats API;
future patches may want to refactor qemuDomainGetBlock*
to make use of this function as well.

Signed-off-by: Francesco Romani from...@redhat.com
---
 src/qemu/qemu_driver.c | 59 ++
 1 file changed, 59 insertions(+)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 1842e60..e7dd5ed 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -136,6 +136,18 @@ VIR_LOG_INIT(qemu.qemu_driver);
 
 #define QEMU_NB_BANDWIDTH_PARAM 6
 
+struct qemuBlockStats {
+long long rd_req;
+long long rd_bytes;
+long long wr_req;
+long long wr_bytes;
+long long rd_total_times;
+long long wr_total_times;
+long long flush_req;
+long long flush_total_times;
+long long errs; /* meaningless for QEMU */
+};
+
 static void processWatchdogEvent(virQEMUDriverPtr driver,
  virDomainObjPtr vm,
  int action);
@@ -178,6 +190,12 @@ static int qemuDomainHelperGetVcpus(virDomainObjPtr vm,
 unsigned char *cpumaps,
 int maplen);
 
+static int qemuDiskGetBlockStats(virQEMUDriverPtr driver,
+ virDomainObjPtr vm,
+ virDomainDiskDefPtr disk,
+ struct qemuBlockStats *stats);
+
+
 virQEMUDriverPtr qemu_driver = NULL;
 
 
@@ -9672,6 +9690,47 @@ qemuDomainBlockStats(virDomainPtr dom,
 return ret;
 }
 
+
+
+static int
+qemuDiskGetBlockStats(virQEMUDriverPtr driver,
+  virDomainObjPtr vm,
+  virDomainDiskDefPtr disk,
+  struct qemuBlockStats *stats)
+{
+int ret = -1;
+qemuDomainObjPrivatePtr priv;
+
+if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY)  0)
+goto cleanup;
+
+priv = vm-privateData;
+
+qemuDomainObjEnterMonitor(driver, vm);
+
+ret = qemuMonitorGetBlockStatsInfo(priv-mon,
+   disk-info.alias,
+   stats-rd_req,
+   stats-rd_bytes,
+   stats-rd_total_times,
+   stats-wr_req,
+   stats-wr_bytes,
+   stats-wr_total_times,
+   stats-flush_req,
+   stats-flush_total_times,
+   stats-errs);
+
+qemuDomainObjExitMonitor(driver, vm);
+
+if (!qemuDomainObjEndJob(driver, vm))
+vm = NULL;
+
+ cleanup:
+if (vm)
+virObjectUnlock(vm);
+return ret;
+}
+
 static int
 qemuDomainBlockStatsFlags(virDomainPtr dom,
   const char *path,
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 08/11] qemu: bulk stats: implement VCPU group

2014-08-29 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_VCPU
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/qemu/qemu_driver.c   | 72 
 2 files changed, 73 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 78eb9b8..86ef18b 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2513,6 +2513,7 @@ typedef enum {
     VIR_DOMAIN_STATS_STATE = (1 << 0), /* return domain state */
     VIR_DOMAIN_STATS_CPU_TOTAL = (1 << 1), /* return domain CPU info */
     VIR_DOMAIN_STATS_BALLOON = (1 << 2), /* return domain balloon info */
+    VIR_DOMAIN_STATS_VCPU = (1 << 3), /* return domain virtual CPU info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 9825f61..527a6b4 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17429,6 +17429,77 @@ qemuDomainGetStatsBalloon(virConnectPtr conn,
 return 0;
 }
 
+
+static int
+qemuDomainGetStatsVcpu(virConnectPtr conn ATTRIBUTE_UNUSED,
+   virDomainObjPtr dom,
+   virDomainStatsRecordPtr record,
+   int *maxparams,
+   unsigned int privflags ATTRIBUTE_UNUSED)
+{
+size_t i;
+int ret = -1;
+char param_name[NAME_MAX];
+virVcpuInfoPtr cpuinfo = NULL;
+
+if (virTypedParamsAddInt(record-params,
+ record-nparams,
+ maxparams,
+ vcpu.current,
+ dom-def-vcpus)  0)
+return -1;
+
+if (virTypedParamsAddInt(record-params,
+ record-nparams,
+ maxparams,
+ vcpu.maximum,
+ dom-def-maxvcpus)  0)
+return -1;
+
+if (VIR_ALLOC_N(cpuinfo, dom-def-vcpus)  0)
+return -1;
+
+if ((ret = qemuDomainHelperGetVcpus(dom,
+cpuinfo,
+dom-def-vcpus,
+NULL,
+0))  0)
+goto cleanup;
+
+for (i = 0; i  dom-def-vcpus; i++) {
+snprintf(param_name, NAME_MAX, vcpu.%u.state, cpuinfo[i].number);
+if (virTypedParamsAddInt(record-params,
+ record-nparams,
+ maxparams,
+ param_name,
+ cpuinfo[i].state)  0)
+goto cleanup;
+
+snprintf(param_name, NAME_MAX, vcpu.%u.time, cpuinfo[i].number);
+if (virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+param_name,
+cpuinfo[i].cpuTime)  0)
+goto cleanup;
+
+snprintf(param_name, NAME_MAX, vcpu.%u.cpu, cpuinfo[i].number);
+if (virTypedParamsAddInt(record-params,
+ record-nparams,
+ maxparams,
+ param_name,
+ cpuinfo[i].cpu)  0)
+goto cleanup;
+}
+
+ret = 0;
+
+ cleanup:
+VIR_FREE(cpuinfo);
+return ret;
+}
+
+
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
   virDomainObjPtr dom,
@@ -17445,6 +17516,7 @@ static struct qemuDomainGetStatsWorker 
qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE},
 { qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL },
 { qemuDomainGetStatsBalloon, VIR_DOMAIN_STATS_BALLOON },
+{ qemuDomainGetStatsVcpu, VIR_DOMAIN_STATS_VCPU },
 { NULL, 0 }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 07/11] qemu: bulk stats: implement balloon group

2014-08-29 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_BALLOON
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/qemu/qemu_driver.c   | 32 
 2 files changed, 33 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 992b124..78eb9b8 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2512,6 +2512,7 @@ struct _virDomainStatsRecord {
 typedef enum {
     VIR_DOMAIN_STATS_STATE = (1 << 0), /* return domain state */
     VIR_DOMAIN_STATS_CPU_TOTAL = (1 << 1), /* return domain CPU info */
+    VIR_DOMAIN_STATS_BALLOON = (1 << 2), /* return domain balloon info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 7ffd052..9825f61 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17397,6 +17397,37 @@ qemuDomainGetStatsCpu(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 return 0;
 }
 
+static int
+qemuDomainGetStatsBalloon(virConnectPtr conn,
+  virDomainObjPtr dom,
+  virDomainStatsRecordPtr record,
+  int *maxparams,
+  unsigned int privflags ATTRIBUTE_UNUSED)
+{
+virQEMUDriverPtr driver = conn-privateData;
+unsigned long cur_balloon = 0;
+int err = 0;
+
+err = qemuDomainGetBalloonMemory(driver, dom, cur_balloon);
+if (err)
+return -1;
+
+if (virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+balloon.current,
+cur_balloon)  0)
+return -1;
+
+if (virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+balloon.maximum,
+dom-def-mem.max_balloon)  0)
+return -1;
+
+return 0;
+}
 
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
@@ -17413,6 +17444,7 @@ struct qemuDomainGetStatsWorker {
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE},
 { qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL },
+{ qemuDomainGetStatsBalloon, VIR_DOMAIN_STATS_BALLOON },
 { NULL, 0 }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 06/11] qemu: bulk stats: implement CPU stats group

2014-08-29 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_CPU_TOTAL
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/qemu/qemu_driver.c   | 56 
 2 files changed, 57 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 9358314..992b124 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2511,6 +2511,7 @@ struct _virDomainStatsRecord {
 
 typedef enum {
     VIR_DOMAIN_STATS_STATE = (1 << 0), /* return domain state */
+    VIR_DOMAIN_STATS_CPU_TOTAL = (1 << 1), /* return domain CPU info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index d4eda06..7ffd052 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -96,6 +96,7 @@
 #include storage/storage_driver.h
 #include virhostdev.h
 #include domain_capabilities.h
+#include vircgroup.h
 
 #define VIR_FROM_THIS VIR_FROM_QEMU
 
@@ -17343,6 +17344,60 @@ qemuDomainGetStatsState(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 return 0;
 }
 
+
+static int
+qemuDomainGetStatsCpu(virConnectPtr conn ATTRIBUTE_UNUSED,
+  virDomainObjPtr dom,
+  virDomainStatsRecordPtr record,
+  int *maxparams,
+  unsigned int privflags ATTRIBUTE_UNUSED)
+{
+qemuDomainObjPrivatePtr priv = dom-privateData;
+unsigned long long cpu_time = 0;
+unsigned long long user_time = 0;
+unsigned long long sys_time = 0;
+int ncpus = 0;
+
+ncpus = nodeGetCPUCount();
+
+if (virTypedParamsAddInt(record-params,
+ record-nparams,
+ maxparams,
+ cpu.count,
+ ncpus)  0)
+return -1;
+
+if (virCgroupGetCpuacctUsage(priv-cgroup, cpu_time)  0)
+return -1;
+
+if (virCgroupGetCpuacctStat(priv-cgroup, user_time, sys_time)  0)
+return -1;
+
+if (virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+cpu.time,
+cpu_time)  0)
+return -1;
+
+if (virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+cpu.user,
+user_time)  0)
+return -1;
+
+if (virTypedParamsAddULLong(record-params,
+record-nparams,
+maxparams,
+cpu.system,
+sys_time)  0)
+return -1;
+
+return 0;
+}
+
+
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
   virDomainObjPtr dom,
@@ -17357,6 +17412,7 @@ struct qemuDomainGetStatsWorker {
 
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE},
+{ qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL },
 { NULL, 0 }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 10/11] qemu: bulk stats: implement block group

2014-08-29 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_BLOCK
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/qemu/qemu_driver.c   | 54 
 2 files changed, 55 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 8c15583..372e098 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2515,6 +2515,7 @@ typedef enum {
     VIR_DOMAIN_STATS_BALLOON = (1 << 2), /* return domain balloon info */
     VIR_DOMAIN_STATS_VCPU = (1 << 3), /* return domain virtual CPU info */
     VIR_DOMAIN_STATS_INTERFACE = (1 << 4), /* return domain interfaces info */
+    VIR_DOMAIN_STATS_BLOCK = (1 << 5), /* return domain block info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 818fcbc..344b02e 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17552,6 +17552,59 @@ qemuDomainGetStatsInterface(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 
 #undef QEMU_ADD_NET_PARAM
 
+#define QEMU_ADD_BLOCK_PARAM(RECORD, MAXPARAMS, BLOCK, NAME, VALUE) \
+do { \
+char param_name[NAME_MAX]; \
+snprintf(param_name, NAME_MAX, block.%s.%s, BLOCK, NAME); \
+if (virTypedParamsAddLLong(RECORD-params, \
+   RECORD-nparams, \
+   MAXPARAMS, \
+   param_name, \
+   VALUE)  0) \
+return -1; \
+} while (0)
+
+static int
+qemuDomainGetStatsBlock(virConnectPtr conn ATTRIBUTE_UNUSED,
+virDomainObjPtr dom,
+virDomainStatsRecordPtr record,
+int *maxparams,
+unsigned int privflags ATTRIBUTE_UNUSED)
+{
+virQEMUDriverPtr driver = conn-privateData;
+struct qemuBlockStats stats;
+size_t i;
+
+for (i = 0; i  dom-def-ndisks; i++) {
+memset(stats, 0, sizeof(stats));
+
+if (qemuDiskGetBlockStats(driver, dom, dom-def-disks[i], stats)  0)
+continue;
+
+QEMU_ADD_BLOCK_PARAM(record, maxparams, dom-def-disks[i]-dst,
+ rd.reqs, stats.rd_req);
+QEMU_ADD_BLOCK_PARAM(record, maxparams, dom-def-disks[i]-dst,
+ rd.bytes, stats.rd_bytes);
+QEMU_ADD_BLOCK_PARAM(record, maxparams, dom-def-disks[i]-dst,
+ rd.times, stats.rd_total_times);
+QEMU_ADD_BLOCK_PARAM(record, maxparams, dom-def-disks[i]-dst,
+ wr.reqs, stats.wr_req);
+QEMU_ADD_BLOCK_PARAM(record, maxparams, dom-def-disks[i]-dst,
+ wr.bytes, stats.wr_bytes);
+QEMU_ADD_BLOCK_PARAM(record, maxparams, dom-def-disks[i]-dst,
+ wr.times, stats.wr_total_times);
+QEMU_ADD_BLOCK_PARAM(record, maxparams, dom-def-disks[i]-dst,
+ fl.reqs, stats.flush_req);
+QEMU_ADD_BLOCK_PARAM(record, maxparams, dom-def-disks[i]-dst,
+ fl.times, stats.flush_total_times);
+}
+
+return 0;
+}
+
+#undef QEMU_ADD_BLOCK_PARAM
+
+
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
   virDomainObjPtr dom,
@@ -17570,6 +17623,7 @@ static struct qemuDomainGetStatsWorker 
qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsBalloon, VIR_DOMAIN_STATS_BALLOON },
 { qemuDomainGetStatsVcpu, VIR_DOMAIN_STATS_VCPU },
 { qemuDomainGetStatsInterface, VIR_DOMAIN_STATS_INTERFACE },
+{ qemuDomainGetStatsBlock, VIR_DOMAIN_STATS_BLOCK },
 { NULL, 0 }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 09/11] qemu: bulk stats: implement interface group

2014-08-29 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_INTERFACE
group of statistics.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/qemu/qemu_driver.c   | 53 
 2 files changed, 54 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 86ef18b..8c15583 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2514,6 +2514,7 @@ typedef enum {
     VIR_DOMAIN_STATS_CPU_TOTAL = (1 << 1), /* return domain CPU info */
     VIR_DOMAIN_STATS_BALLOON = (1 << 2), /* return domain balloon info */
     VIR_DOMAIN_STATS_VCPU = (1 << 3), /* return domain virtual CPU info */
+    VIR_DOMAIN_STATS_INTERFACE = (1 << 4), /* return domain interfaces info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 527a6b4..818fcbc 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17500,6 +17500,58 @@ qemuDomainGetStatsVcpu(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 }
 
 
+#define QEMU_ADD_NET_PARAM(RECORD, MAXPARAMS, IFNAME, NAME, VALUE) \
+do { \
+char param_name[NAME_MAX]; \
+snprintf(param_name, NAME_MAX, net.%s.%s, IFNAME, NAME); \
+if (virTypedParamsAddLLong(RECORD-params, \
+   RECORD-nparams, \
+   MAXPARAMS, \
+   param_name, \
+   VALUE)  0) \
+return -1; \
+} while (0)
+
+static int
+qemuDomainGetStatsInterface(virConnectPtr conn ATTRIBUTE_UNUSED,
+virDomainObjPtr dom,
+virDomainStatsRecordPtr record,
+int *maxparams,
+unsigned int privflags ATTRIBUTE_UNUSED)
+{
+size_t i;
+struct _virDomainInterfaceStats tmp;
+
+/* Check the path is one of the domain's network interfaces. */
+for (i = 0; i  dom-def-nnets; i++) {
+memset(tmp, 0, sizeof(tmp));
+
+if (virNetInterfaceStats(dom-def-nets[i]-ifname, tmp)  0)
+continue;
+
+QEMU_ADD_NET_PARAM(record, maxparams, dom-def-nets[i]-ifname,
+   rx.bytes, tmp.rx_bytes);
+QEMU_ADD_NET_PARAM(record, maxparams, dom-def-nets[i]-ifname,
+   rx.pkts, tmp.rx_packets);
+QEMU_ADD_NET_PARAM(record, maxparams, dom-def-nets[i]-ifname,
+   rx.errs, tmp.rx_errs);
+QEMU_ADD_NET_PARAM(record, maxparams, dom-def-nets[i]-ifname,
+   rx.drop, tmp.rx_drop);
+QEMU_ADD_NET_PARAM(record, maxparams, dom-def-nets[i]-ifname,
+   tx.bytes, tmp.tx_bytes);
+QEMU_ADD_NET_PARAM(record, maxparams, dom-def-nets[i]-ifname,
+   tx.pkts, tmp.tx_packets);
+QEMU_ADD_NET_PARAM(record, maxparams, dom-def-nets[i]-ifname,
+   tx.errs, tmp.tx_errs);
+QEMU_ADD_NET_PARAM(record, maxparams, dom-def-nets[i]-ifname,
+   tx.drop, tmp.tx_drop);
+}
+
+return 0;
+}
+
+#undef QEMU_ADD_NET_PARAM
+
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
   virDomainObjPtr dom,
@@ -17517,6 +17569,7 @@ static struct qemuDomainGetStatsWorker 
qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL },
 { qemuDomainGetStatsBalloon, VIR_DOMAIN_STATS_BALLOON },
 { qemuDomainGetStatsVcpu, VIR_DOMAIN_STATS_VCPU },
+{ qemuDomainGetStatsInterface, VIR_DOMAIN_STATS_INTERFACE },
 { NULL, 0 }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 11/11] qemu: bulk stats: implement blockinfo group

2014-08-29 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_BLOCK_INFO
group of statistics.

This is different from the VIR_DOMAIN_STATS_BLOCK
group because it represents information about the
block device itself.
Most notably, this group exports the allocation information,
which is used by monitoring applications to detect
space exhaustion on the block devices.

Signed-off-by: Francesco Romani from...@redhat.com
---
 include/libvirt/libvirt.h.in |  1 +
 src/qemu/qemu_driver.c   | 44 
 2 files changed, 45 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 372e098..c0b695d 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2516,6 +2516,7 @@ typedef enum {
     VIR_DOMAIN_STATS_VCPU = (1 << 3), /* return domain virtual CPU info */
     VIR_DOMAIN_STATS_INTERFACE = (1 << 4), /* return domain interfaces info */
     VIR_DOMAIN_STATS_BLOCK = (1 << 5), /* return domain block info */
+    VIR_DOMAIN_STATS_BLOCK_INFO = (1 << 6), /* return domain block layout */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 344b02e..564f1e1 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17604,6 +17604,49 @@ qemuDomainGetStatsBlock(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 
 #undef QEMU_ADD_BLOCK_PARAM
 
+#define QEMU_ADD_BLOCK_INFO_PARAM(RECORD, MAXPARAMS, BLOCK, NAME, VALUE) \
+do { \
+char param_name[NAME_MAX]; \
+snprintf(param_name, NAME_MAX, block.%s.%s, BLOCK, NAME); \
+if (virTypedParamsAddULLong(RECORD-params, \
+RECORD-nparams, \
+MAXPARAMS, \
+param_name, \
+VALUE)  0) \
+return -1; \
+} while (0)
+
+static int
+qemuDomainGetStatsBlockInfo(virConnectPtr conn,
+virDomainObjPtr dom,
+virDomainStatsRecordPtr record,
+int *maxparams,
+unsigned int privflags ATTRIBUTE_UNUSED)
+{
+virQEMUDriverPtr driver = conn-privateData;
+virDomainBlockInfo info;
+size_t i;
+
+for (i = 0; i  dom-def-ndisks; i++) {
+memset(info, 0, sizeof(info));
+
+if (qemuDiskGetBlockInfo(driver, dom, dom-def-disks[i],
+ dom-def-disks[i]-dst, info)  0)
+continue;
+
+QEMU_ADD_BLOCK_INFO_PARAM(record, maxparams, dom-def-disks[i]-dst,
+ capacity, info.capacity);
+QEMU_ADD_BLOCK_INFO_PARAM(record, maxparams, dom-def-disks[i]-dst,
+ allocation, info.allocation);
+QEMU_ADD_BLOCK_INFO_PARAM(record, maxparams, dom-def-disks[i]-dst,
+ physical, info.physical);
+}
+
+return 0;
+}
+
+#undef QEMU_ADD_BLOCK_PARAM
+
 
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
@@ -17624,6 +17667,7 @@ static struct qemuDomainGetStatsWorker 
qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsVcpu, VIR_DOMAIN_STATS_VCPU },
 { qemuDomainGetStatsInterface, VIR_DOMAIN_STATS_INTERFACE },
 { qemuDomainGetStatsBlock, VIR_DOMAIN_STATS_BLOCK },
+{ qemuDomainGetStatsBlockInfo, VIR_DOMAIN_STATS_BLOCK_INFO },
 { NULL, 0 }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [RFCv2] Introduce API for retrieving bulk domain stats v2

2014-08-25 Thread Francesco Romani
- Original Message -
 From: Peter Krempa pkre...@redhat.com
 To: libvir-list@redhat.com
 Cc: ebl...@redhat.com, berra...@redhat.com, from...@redhat.com, Peter 
 Krempa pkre...@redhat.com
 Sent: Thursday, August 21, 2014 3:20:45 PM
 Subject: [RFCv2] Introduce API for retrieving bulk domain stats v2
 
 I'd like to propose a (hopefully) fairly future-proof API to retrieve
 various statistics for domains.
 
 The motivation is that management layers that use libvirt usually poll
 libvirt for statistics using various split up APIs we currently provide.
 To get all the necessary stuff, the mgmt app need to issue Ndomains *
 Napis calls and cope with the various returned formats. The APIs I'm
 wanting to introduce here will:
 
 1) Return data in a format that we can expand in the future and is
 hierarchical. This version returns the data as typed parameters where
 the fields are constructed as dot-separated strings containing names and
 other stuff in a list of typed params.
 
 2) Stats for multiple (all) domains can be queried at once and are
 returned in one call. This will allow to decrease the overhead necessary
 to issue multiple calls per domain multiplied by the count of domains.
 
 3) Selectable (bit mask) fields in the returned format. This will allow
 to retrieve only specific stats according to the APPs need.
 
 Initially the implementation will introduce the option to retrieve
 block, interface  and cpu stats with the possibility to add more in the
 future.
 
 The stats groups will be enabled using a bit field @stats passed as the
 function argument. A few groups for inspiration:
 
 VIR_DOMAIN_STATS_STATE
 VIR_DOMAIN_STATS_CPU
 VIR_DOMAIN_STATS_BLOCK
 VIR_DOMAIN_STATS_INTERFACE
 
 the returned typed params will use the following scheme
 
 state.state = running
 state.reason = started
 cpu.count = 8
 cpu.0.state = running
 cpu.0.time = 1234

OK for me

 +typedef struct _virDomainStatsRecord virDomainStatsRecord;
 +typedef virDomainStatsRecord *virDomainStatsRecordPtr;
 +struct _virDomainStatsRecord {
 +virDomainPtr dom;
 +unsigned int nparams;
 +virTypedParameterPtr params;
 +};
 +
 +typedef enum {
 +VIR_DOMAIN_STATS_ALL = (1 << 0), /* return all stats fields
 +   implemented in the daemon */
 +VIR_DOMAIN_STATS_STATE = (1 << 1), /* return domain state */
 +} virDomainStatsTypes;
 +
 +int virConnectGetAllDomainStats(virConnectPtr conn,
 +unsigned int stats,
 +virDomainStatsRecordPtr **retStats,
 +unsigned int flags);
 +
 +int virDomainListGetStats(virDomainPtr *doms,
 +  unsigned int stats,
 +  virDomainStatsRecordPtr **retStats,
 +  unsigned int flags);
 +
 +void virDomainStatsRecordListFree(virDomainStatsRecordPtr *stats);
 +

Minor question:
Would it be possible, maybe in a future extension, for the caller to
preallocate the virDomainStatsPtr output records?

Thanks and bests,

-- 
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
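
To make the dot-separated key scheme quoted above concrete, this is the sort of
generic walk a consumer ends up writing over one returned record. A sketch
against the typed-parameter encoding proposed here, using the record and value
union as declared in the RFC:

  #include <libvirt/libvirt.h>
  #include <stdio.h>

  static void
  dump_record(virDomainStatsRecordPtr rec)
  {
      size_t i;

      printf("domain %s\n", virDomainGetName(rec->dom));
      for (i = 0; i < rec->nparams; i++) {
          virTypedParameterPtr p = &rec->params[i];

          switch (p->type) {
          case VIR_TYPED_PARAM_INT:
              printf("  %s = %d\n", p->field, p->value.i);
              break;
          case VIR_TYPED_PARAM_UINT:
              printf("  %s = %u\n", p->field, p->value.ui);
              break;
          case VIR_TYPED_PARAM_LLONG:
              printf("  %s = %lld\n", p->field, p->value.l);
              break;
          case VIR_TYPED_PARAM_ULLONG:
              printf("  %s = %llu\n", p->field, p->value.ul);
              break;
          case VIR_TYPED_PARAM_STRING:
              printf("  %s = %s\n", p->field, p->value.s);
              break;
          default:
              break;
          }
      }
  }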


Re: [libvirt] [RFC] Introduce API for retrieving bulk domain stats

2014-08-19 Thread Francesco Romani
- Original Message -
 From: Peter Krempa pkre...@redhat.com
 To: libvir-list@redhat.com
 Cc: Peter Krempa pkre...@redhat.com
 Sent: Tuesday, August 19, 2014 3:14:19 PM
 Subject: [libvirt] [RFC] Introduce API for retrieving bulk domain stats
 
 I'd like to propose a (hopefully) fairly future-proof API to retrieve
 various statistics for domains.

Hi,

Speaking for VDSM/oVirt, the proposal looks really nice and serves our
needs well.
Some specific points:
 
 The motivation is that management layers that use libvirt usually poll
 libvirt for statistics using various split up APIs we currently provide.
 To get all the necessary stuff, the mgmt app need to issue Ndomains *
 Napis calls and cope with the various returned formats. The APIs I'm
 wanting to introduce here will:
 
 1) Return data in a format that we can expand in the future and is
 hierarchical. For starters I'll use XML, with possible expansion to
 something like JSON if it will be favourable for a consumer (switchable
 by a flag)

awesome

 2) Stats for multiple (all) domains can be queried at once and are
 returned in one call. This will allow us to decrease the overhead necessary
 to issue multiple calls per domain multiplied by the count of domains.

We had (and still have) a lot of pain from a specific scenario in which
a VM becomes unresponsive, 99% of the time because QEMU gets stuck, likely
on I/O (please remember that oVirt supports more storage types than just NFS,
like ISCSI to say the least, so a soft mount is not always the solution...).

We then need a timeout, or a way to signal that some VMs are not responding.

Moreover, if we have N VMs and M of them not responding (with of course M <= N),
it would be cool to have a timeout *not* proportional to M... We'd like to avoid
waiting M * timeout seconds before knowing that some of them are failing :)

Most importantly, the call should somehow report *all* the failed VMs.

Let me try to summarize. Let's say we have 10 VMs (0-9), of which VMs 3, 4, 7, 9
are failing (N=10, M=4). We'd like to wait less than M=4 * timeout seconds and,
maybe most importantly, we need to know that all of the above have failed, not
just one of them (maybe the first).

The reason is that our management app, VDSM, needs to report all of the
non-responding VMs.

Maybe an entry in the XML data for each non-responding VM would be OK.

 3) Selectable (bit mask) fields in the returned format. This will allow
 to retrieve only specific stats according to the APP's needs.

awesome as well

[...]
 Initially the implementation will introduce the option to retrieve
 block, interface and cpu stats with the possibility to add more in the
 future.

I filed a list of APIs relevant for VDSM here:
https://bugzilla.redhat.com/show_bug.cgi?id=1113116#c1

Turns out that the list could be narrowed down to

virDomainBlockInfo - for highest sector of a block
virDomainGetInfo - for balloon stats
virDomainGetCPUStats
virDomainBlockStatsFlags
virDomainInterfaceStats
virDomainGetVcpusFlags

(will update the BZ soon)
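
For context, the per-domain pattern we are stuck with today looks roughly like
the sketch below: list the domains once, then issue the split-up calls for each
of them. Only virDomainGetInfo is shown for brevity; the other calls in the
list above follow the same per-domain shape. This is just an illustration, not
the actual VDSM code.

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int
main(void)
{
    virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");
    virDomainPtr *doms = NULL;
    int ndoms;
    int i;

    if (!conn)
        return 1;

    ndoms = virConnectListAllDomains(conn, &doms,
                                     VIR_CONNECT_LIST_DOMAINS_ACTIVE);

    for (i = 0; i < ndoms; i++) {
        virDomainInfo info;

        /* one round trip per domain, repeated for every stats API we need */
        if (virDomainGetInfo(doms[i], &info) == 0)
            printf("%s: mem %lu KiB, cpuTime %llu ns\n",
                   virDomainGetName(doms[i]), info.memory, info.cpuTime);

        virDomainFree(doms[i]);
    }

    free(doms);
    virConnectClose(conn);
    return 0;
}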

 As this is a first draft and dump of my mind on this subject it may be
 a bit rough, so suggestions are welcome.
 
 Thanks for looking.

Thanks for the proposal :) I think it is a great step forward.

Thanks and bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [RFC][scale] new API for querying domains stats

2014-08-12 Thread Francesco Romani
- Original Message -
 From: Richard W.M. Jones rjo...@redhat.com
 To: Li Wei l...@cn.fujitsu.com
 Cc: Francesco Romani from...@redhat.com, libvir-list@redhat.com
 Sent: Tuesday, August 12, 2014 11:04:05 AM
 Subject: Re: [libvirt] [RFC][scale] new API for querying domains stats
 

[...]
   Is it possible to design an API that can work across all domains
   in a single call?
  
  How about the following API:
  
  int virConnectGetAllBlockStats(virConnectPtr conn,
                                 virDomainPtr domain,
                                 virDomainBlockBulkStatsPtr *stats,
                                 unsigned int flags);
  @conn: pointer to libvirt connection
  @domain: pointer to the domain to be queried, NULL for all domains
  @stats: array of virDomainBlockBulkStats struct (see below) to be populated
  @flags: filter flags
  Return the number of virDomainBlockBulkStats populated.
  
  where virDomainBlockBulkStats is defined as:
  
  struct _virDomainBlockBulkStats {
      virDomainPtr domain;         /* domain the block stats belongs to */
      virTypedParameterPtr params; /* params to store block stats */
      unsigned int nparams;        /* how many params used for each block stats */
      unsigned int ndisks;         /* how many block stats in this domain */
  };
 
 Works for me.

Same here.

oVirt, more specifically VDSM, needs to check all the stats of all
the domains on a given host at once, so this API should fit the task.

Since VDSM takes ownership (read: keeps track and control) of all the VMs,
the filtering capability of this new API should be good enough.
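
Just to double check I read the proposal right, this is how I would expect to
drive it: a rough sketch against the signature quoted above. Note that
virConnectGetAllBlockStats does not exist yet (it is only proposed here), and
the proposal does not say how the returned array is allocated or released, so
that part is deliberately left out.

#include <stdio.h>
#include <libvirt/libvirt.h>

/* sketch only: assumes the proposed virConnectGetAllBlockStats lands as quoted */
static void
dump_all_block_stats(virConnectPtr conn)
{
    virDomainBlockBulkStatsPtr stats = NULL;
    int ndomains;
    int i;

    /* NULL domain means "query every domain on the host", per the proposal */
    ndomains = virConnectGetAllBlockStats(conn, NULL, &stats, 0);

    for (i = 0; i < ndomains; i++)
        printf("%s: %u disks, %u params per disk\n",
               virDomainGetName(stats[i].domain),
               stats[i].ndisks, stats[i].nparams);

    /* releasing the returned array is not specified by the proposal yet */
}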

+++

It would be nice, but less important, to be able to somehow reuse the 
`stats' argument.

What I'm looking for here is a way to avoid allocating/deallocating
all the needed structures before and after each call.

I'm saying so because it is a pretty common scenario for a VM (at least in
the cases I'm aware of) to have the same number of disks throughout its life.

But I believe this is an optimization which can be added later.

Thanks,

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [RFC][scale] new API for querying domains stats

2014-07-09 Thread Francesco Romani


- Original Message -
 From: Francesco Romani from...@redhat.com
 To: libvir-list@redhat.com
 Sent: Friday, July 4, 2014 6:44:07 PM
 Subject: Re: [libvirt] [RFC][scale] new API for querying domains stats

   However, a question here about bulk APIs.
   One cornerstone of oVirt is shared storage (NFS, ISCSI...); another is
   qemu/kvm,
   and COW images are supported (probably even the default, need to check).
   
   Due to storage being unavailable because a network outage, it happened
   that
   virDomainGetBlockInfo blocked beyond recover.
   
   On such scenarios, how will a bulk API behave? There will be a timeout or
   something else?
  
  It depends on the storage and the way it is configured. If NFS is mounted
  with 'hard' + 'nointr' any call libvirt makes to dead storage will get
  stuck in an uninterruptable sleep in kernel space. There's no way for
  libvirt to time out since by the very definition of 'hard' mount option
  it does not time out. If you mount with 'soft' then the calls libvirt
  makes will time out.
 
 My bad, I worded poorly my question.
 
 What I mean is: on top of what the kernel or QEMU (libnfs, libiscsi) does,
 there are plans for any additional mechanism/safeguard?
 (I guess no, I'm asking just to be sure).

Hi,

maybe borderline off-topic, but still about blocking calls:

We (VDSM/oVirt developers) are reviewing our usage of libvirt for sampling.
After a (quick) inspection of the code, I believe the following calls cannot
block due to FS/storage issues, as they do not need FS/storage access in any way.

I'm quite confident about these:
* virDomainGetCPUStats: uses cgroups only (no FS/storage access)
* virDomainInterfaceStats: uses /proc/net/dev (no FS/storage access)
* virDomainGetVcpus: uses /proc and a syscall for PCPU affinity (no
FS/storage access)
* virDomainSchedulerParameters: uses cgroups (no FS/storage access)

Not sure about these, but it looks to me like they don't need to access
FS/storage either:
* virDomainGetVcpusFlags
* virDomainGetMetadata


Can anyone please confirm or deny?
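
For reference, the sampling calls in question boil down to per-domain
invocations like the one sketched below (the domain name and the interface
name are placeholders, not real ones):

#include <stdio.h>
#include <libvirt/libvirt.h>

int
main(void)
{
    virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");
    virDomainPtr dom;
    virDomainInterfaceStatsStruct st;

    if (!conn)
        return 1;

    dom = virDomainLookupByName(conn, "vm01");  /* placeholder domain name */

    /* interface stats come from /proc/net/dev, so no FS/storage access */
    if (dom && virDomainInterfaceStats(dom, "vnet0", &st, sizeof(st)) == 0)
        printf("rx %lld bytes, tx %lld bytes\n", st.rx_bytes, st.tx_bytes);

    if (dom)
        virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}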

Thanks and best regards

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [RFC][scale] new API for querying domains stats

2014-07-04 Thread Francesco Romani
- Original Message -
 From: Richard W.M. Jones rjo...@redhat.com
 To: Francesco Romani from...@redhat.com
 Cc: libvir-list@redhat.com
 Sent: Friday, July 4, 2014 1:11:54 PM
 Subject: Re: [libvirt] [RFC][scale] new API for querying domains stats

  Right now we aim for a number of VM per node in the (few) hundreds, but we
  have big plans
  to scale much more, and to possibly reach thousands in a not so distant
  future.
  At the moment, we use one thread per VM to gather the VM stats (CPU,
  network, disk),
  and of course this obviously scales poorly.
 
 I'll just note here that a bug has been opened for virt-top, which
 is similar to this.
 
 If a domain has a large number of disks (256 virtio-scsi disks in the
 customer's case), then virt-top spends so long fetching the data for
 each separate disk, it can take 30-40 seconds between updates.
 
 The same thing would happen if you had lots of domains, each with a
 few disks, but with the total adding up to hundreds of disks.
 
 The same thing would happen if you substitute network interfaces for disks.
 
 What would help for us:
 
  - A way to get information for multiple objects in a single domain
 
  - A way to get information for multiple objects across multiple domains
 
 in as few API round trips as possible.

I concur. Actually, you also expressed our (VDSM) needs better than I did.
I think we are in the same boat.

Bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [RFC][scale] new API for querying domains stats

2014-07-04 Thread Francesco Romani
- Original Message -
 From: Richard W.M. Jones rjo...@redhat.com
 To: Daniel P. Berrange berra...@redhat.com
 Cc: libvir-list@redhat.com, Francesco Romani from...@redhat.com
 Sent: Friday, July 4, 2014 1:39:57 PM
 Subject: Re: [libvirt] [RFC][scale] new API for querying domains stats

   What would help for us:
   
- A way to get information for multiple objects in a single domain
   
- A way to get information for multiple objects across multiple domains
  
  I'd say that we want something similar to the virDomainListAllDomains()
  API for stats. ie we shouldn't try to pass in the full list of domains
  or paths we want info for. We should just list all domains, optionally
  using flags to filter based on some characteristic, eg exclude inactive.
  Similarly always list stats for all disks.
 
 FYI for virt-top we only care about stats of all active domains, and
 we only care about all disks  all network interfaces for domains
 (ie. never any subset).
 
 We also collect CPU time and memory usage per domain.

It is the same for VDSM. VDSM takes ownership of all the domains on a host,
so it never does any kind of filtering or considers subsets of any kind.

However, a question here about bulk APIs.
One cornerstone of oVirt is shared storage (NFS, ISCSI...); another is qemu/kvm,
and COW images are supported (probably even the default, need to check).

Due to storage being unavailable because of a network outage, it has happened
that virDomainGetBlockInfo blocked beyond recovery.

In such scenarios, how will a bulk API behave? Will there be a timeout or
something else?

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [RFC][scale] new API for querying domains stats

2014-07-04 Thread Francesco Romani
- Original Message -
 From: Daniel P. Berrange berra...@redhat.com
 To: Francesco Romani from...@redhat.com
 Cc: libvir-list@redhat.com, Richard W.M. Jones rjo...@redhat.com
 Sent: Friday, July 4, 2014 6:21:30 PM
 Subject: Re: [libvirt] [RFC][scale] new API for querying domains stats

  However, a question here about bulk APIs.
  One cornerstone of oVirt is shared storage (NFS, ISCSI...); another is
  qemu/kvm,
  and COW images are supported (probably even the default, need to check).
  
  Due to storage being unavailable because a network outage, it happened that
  virDomainGetBlockInfo blocked beyond recover.
  
  On such scenarios, how will a bulk API behave? There will be a timeout or
  something else?
 
 It depends on the storage and the way it is configured. If NFS is mounted
 with 'hard' + 'nointr' any call libvirt makes to dead storage will get
 stuck in an uninterruptable sleep in kernel space. There's no way for
 libvirt to time out since by the very definition of 'hard' mount option
 it does not time out. If you mount with 'soft' then the calls libvirt
 makes will time out.

My bad, I worded my question poorly.

What I mean is: on top of what the kernel or QEMU (libnfs, libiscsi) does,
are there plans for any additional mechanism/safeguard?
(I guess not; I'm asking just to be sure.)

VDSM already uses soft mounts for NFS (need to check what we do for ISCSI and
the other supported storage types).

Thanks and bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

