Re: [PATCH] docs: formatdomain: Document disk serial truncation status quo

2021-06-29 Thread Tomáš Golembiovský
On Tue, Jun 29, 2021 at 03:33:26PM +0200, Peter Krempa wrote:
> On Mon, Jun 28, 2021 at 16:55:34 +0200, Tomáš Golembiovský wrote:
> > Hi,
> > 
> > I have a few questions regarding this to get a better understanding of how
> > this should be handled by management apps.
> > 
> > On Fri, Jun 04, 2021 at 02:08:40PM +0200, Peter Krempa wrote:
> > > Disk serials are truncated arbitrarily and silently by qemu depending on
> > > the device type and how they are configured. Since changing the current
> > > state would lead to more regressions than we have now, document that the
> > > truncation is arbitrary.
> > > 
> > > Signed-off-by: Peter Krempa 
> > > ---
> > >  docs/formatdomain.rst | 10 ++
> > >  1 file changed, 10 insertions(+)
> > > 
> > > diff --git a/docs/formatdomain.rst b/docs/formatdomain.rst
> > > index aa7bb8da14..3ee537da14 100644
> > > --- a/docs/formatdomain.rst
> > > +++ b/docs/formatdomain.rst
> > > @@ -3146,6 +3146,16 @@ paravirtualized driver is specified via the ``disk`` element.
> > > may look like ``WD-WMAP9A966149``. Not supported for
> > > scsi-block devices, that is those using disk ``type`` 'block' using
> > > ``device`` 'lun' on ``bus`` 'scsi'. :since:`Since 0.7.1`
> > > +
> > > +   Note that depending on hypervisor and device type the serial number may be
> > > +   truncated silently. IDE/SATA devices are commonly limited to 20 characters.
> > > +   SCSI devices depending on hypervisor version are limited to 20, 36 or 247
> > 
> > Is this meant to say "hypervisor" or is it really "hypervisor version"?
> > This can make a huge difference. See below.
> 
> In this case, hypervisor + version.
> 
> > > +   characters.
> > > +
> > > +   Hypervisors may also start rejecting overly long serials instead of
> > > +   truncating them in the future so it's advised to avoid the implicit
> > > +   truncation by testing the desired serial length range with the desired device
> > > +   and hypervisor combination.
> > 
> > If hypervisors start rejecting long serial numbers then this will become
> > tricky.
> 
> It indeed will be tricky.
> 
> > Does the above mean libvirt can report the length limit? Or does
> > that mean one should first try running some VMs to test the limit, take
> > a note of the length and hardcode that? If it is the latter then
> 
> For now, libvirt can't do that because qemu isn't exposing this data in
> any way, but in case it would make oVirt's life easier I think we can
> ask QEMU to add the length limit in an introspectable fashion.
> 

OK, there's probably no need for that if we can safely assume the limit
will not become smaller in the future. I was concerned about a situation
where all VMs would suddenly start failing after a QEMU upgrade.

Thanks for the info,

Tomas


> > what are the chances that the limit in the hypervisor will become smaller?
> 
> Generally I'd assume it's close to 0. Decreasing the length can
> generally be considered a regression in behaviour and more importantly
> there usually aren't technical reasons to do that once it's proven to
> work at a higher limit.
> 
> > Or is it safe to assume that the limit will only grow in future versions
> > of the hypervisor (notably QEMU)?
> 
> For qemu I think it's safe to assume that it will only grow in cases
> where the technical limit of the emulated device's serial passing
> approach is higher than currently considered.
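
For illustration, a minimal sketch of the defensive approach suggested in
the patch text above, assuming 20 characters (the most conservative limit
mentioned) as the cut-off; the actual limit has to be verified against the
target device/hypervisor combination:

    #include <stdio.h>
    #include <string.h>

    /* Assumed cut-off: the 20-character IDE/SATA limit mentioned above. */
    #define SAFE_DISK_SERIAL_MAX 20

    /* Clamp a serial before putting it into the <serial> element so any
     * truncation happens visibly on our side, not silently in QEMU. */
    static void
    clamp_disk_serial(const char *serial, char *out, size_t outlen)
    {
        size_t n = strlen(serial);

        if (n > SAFE_DISK_SERIAL_MAX) {
            fprintf(stderr, "serial '%s' longer than %d chars, clamping\n",
                    serial, SAFE_DISK_SERIAL_MAX);
            n = SAFE_DISK_SERIAL_MAX;
        }
        if (n >= outlen)
            n = outlen - 1;
        memcpy(out, serial, n);
        out[n] = '\0';
    }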
> 

-- 
Tomáš Golembiovský 



Re: [PATCH] docs: formatdomain: Document disk serial truncation status quo

2021-06-28 Thread Tomáš Golembiovský
Hi,

I have a few questions regarding this to get a better understanding of how
this should be handled by management apps.

On Fri, Jun 04, 2021 at 02:08:40PM +0200, Peter Krempa wrote:
> Disk serials are truncated arbitrarily and silently by qemu depending on
> the device type and how they are configured. Since changing the current
> state would lead to more regressions than we have now, document that the
> truncation is arbitrary.
> 
> Signed-off-by: Peter Krempa 
> ---
>  docs/formatdomain.rst | 10 ++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/docs/formatdomain.rst b/docs/formatdomain.rst
> index aa7bb8da14..3ee537da14 100644
> --- a/docs/formatdomain.rst
> +++ b/docs/formatdomain.rst
> @@ -3146,6 +3146,16 @@ paravirtualized driver is specified via the ``disk`` element.
> may look like ``WD-WMAP9A966149``. Not supported for
> scsi-block devices, that is those using disk ``type`` 'block' using
> ``device`` 'lun' on ``bus`` 'scsi'. :since:`Since 0.7.1`
> +
> +   Note that depending on hypervisor and device type the serial number may be
> +   truncated silently. IDE/SATA devices are commonly limited to 20 characters.
> +   SCSI devices depending on hypervisor version are limited to 20, 36 or 247

Is this meant to say "hypervisor" or is it really "hypervisor version"?
This can make a huge difference. See below.

> +   characters.
> +
> +   Hypervisors may also start rejecting overly long serials instead of
> +   truncating them in the future so it's advised to avoid the implicit
> +   truncation by testing the desired serial length range with the desired device
> +   and hypervisor combination.

If hypervisors start rejecting long serial numbers then this will become
tricky. Does the above mean libvirt can report the length limit? Or does
that mean one should first try running some VMs to test the limit, take
a note of the length and hardcode that? If it is the latter then
what are the chances that the limit in the hypervisor will become smaller?
Or is it safe to assume that the limit will only grow in future versions
of the hypervisor (notably QEMU)?

Thanks,

Tomas

-- 
Tomáš Golembiovský 



Re: [PATCH v2] qemu: Expose disk serial in virDomainGetGuestInfo()

2021-04-14 Thread Tomáš Golembiovský
On Wed, Apr 14, 2021 at 11:51:39AM +0200, Michal Privoznik wrote:
> When querying guest info via virDomainGetGuestInfo() the
> 'guest-get-disks' agent command is called. It may report the disk
> serial number, which we parse but never report nor use for
> anything else.
> 
> As it turns out, it may help management applications find the matching
> disk in their internals.
> 
> Signed-off-by: Michal Privoznik 
> ---
> 
> v2 of:
> 
> https://listman.redhat.com/archives/libvir-list/2021-April/msg00552.html
> 
> diff to v1:
> - Filled virsh documentation
> - Documented that the param is optional and type of string
> 
>  docs/manpages/virsh.rst |  1 +
>  src/libvirt-domain.c|  1 +
>  src/qemu/qemu_driver.c  | 19 ++-
>  3 files changed, 16 insertions(+), 5 deletions(-)
> 

Reviewed-By: Tomáš Golembiovský 
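
For reference, a minimal sketch (not part of the patch) of how a management
app could consume this once merged, assuming the value is exposed as a
"disk.<N>.serial" typed parameter next to the existing disk.* fields:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    /* Sketch: print the serial of the guest's first disk, if the agent
     * reported one.  Assumes a running guest agent and a libvirt new
     * enough to have VIR_DOMAIN_GUEST_INFO_DISKS. */
    static int
    print_first_disk_serial(virDomainPtr dom)
    {
        virTypedParameterPtr params = NULL;
        int nparams = 0;
        const char *serial = NULL;

        if (virDomainGetGuestInfo(dom, VIR_DOMAIN_GUEST_INFO_DISKS,
                                  &params, &nparams, 0) < 0)
            return -1;

        /* "disk.0.serial" is the field name assumed from this patch. */
        if (virTypedParamsGetString(params, nparams, "disk.0.serial",
                                    &serial) == 1)
            printf("disk 0 serial: %s\n", serial);

        virTypedParamsFree(params, nparams);
        return 0;
    }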

-- 
Tomáš Golembiovský 



[libvirt PATCH] virsh: guest-agent-timeout: set default value for optional argument

2020-08-21 Thread Tomáš Golembiovský
The timeout argument for guest-agent-timeout is optional but it did not
have a proper default value specified. Also update the virsh man page
accordingly.

Signed-off-by: Tomáš Golembiovský 
---
 docs/manpages/virsh.rst | 7 ---
 tools/virsh-domain.c| 2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst
index 92de0b2192..6e48ae7973 100644
--- a/docs/manpages/virsh.rst
+++ b/docs/manpages/virsh.rst
@@ -2631,15 +2631,16 @@ guest
 
 .. code-block::
 
-   guest-agent-timeout domain --timeout value
+   guest-agent-timeout domain [--timeout value]
 
 Set how long to wait for a response from guest agent commands. By default,
 agent commands block forever waiting for a response. ``value`` must be a
 positive value (wait for given amount of seconds) or one of the following
 values:
 
-* -2 - block forever waiting for a result,
-* -1 - reset timeout to the default value,
+* -2 - block forever waiting for a result (used when --timeout is omitted),
+* -1 - reset timeout to the default value (currently defined as 5 seconds in
+  libvirt daemon),
 * 0 - do not wait at all,
 
 
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 286cf79671..1f3a549d9a 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -14207,7 +14207,7 @@ static bool
 cmdGuestAgentTimeout(vshControl *ctl, const vshCmd *cmd)
 {
 virDomainPtr dom = NULL;
-int timeout;
+int timeout = VIR_DOMAIN_AGENT_RESPONSE_TIMEOUT_BLOCK;
 const unsigned int flags = 0;
 bool ret = false;
 
-- 
2.25.0



[libvirt PATCH] docs: fix names of some commands

2020-08-21 Thread Tomáš Golembiovský
Some commands were improperly converted from the original POD file. Their
names were truncated at the first dash.

Signed-off-by: Tomáš Golembiovský 
---
 docs/manpages/virsh.rst | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst
index 92de0b2192..5731656b1d 100644
--- a/docs/manpages/virsh.rst
+++ b/docs/manpages/virsh.rst
@@ -498,8 +498,8 @@ in seconds for which the host has to be suspended, it should be at least
 60 seconds.
 
 
-node
-
+node-memory-tune
+
 
 **Syntax:**
 
@@ -600,8 +600,8 @@ source elements to create the pool.
 
 
 
-inject
---
+inject-nmi
+--
 
 **Syntax:**
 
@@ -2624,8 +2624,8 @@ When *--timestamp* is used, a human-readable timestamp will be printed
 before the event.
 
 
-guest
--
+guest-agent-timeout
+---
 
 **Syntax:**
 
@@ -7259,8 +7259,8 @@ checkpoint-parent
 Output the name of the parent checkpoint, if any, for the given
 *checkpoint*.
 
-checkpoint
---
+checkpoint-delete
+-
 
 **Syntax:**
 
-- 
2.25.0



[libvirt PATCH] docs: document proper enum for guest agent timeout

2020-05-15 Thread Tomáš Golembiovský
The documented enum and its values do not exist. The real enum has a
slightly different name.

Signed-off-by: Tomáš Golembiovský 
---
 src/libvirt-domain.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index a12809c2d5..37f864b7b0 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -12550,13 +12550,13 @@ int virDomainGetLaunchSecurityInfo(virDomainPtr domain,
  * Set how long to wait for a response from guest agent commands. By default,
  * agent commands block forever waiting for a response.
  *
- * @timeout must be a value from virDomainAgentCommandTimeoutValues or
+ * @timeout must be a value from virDomainAgentResponseTimeoutValues or
  * positive:
  *
- *   VIR_DOMAIN_AGENT_COMMAND_TIMEOUT_BLOCK(-2): meaning to block forever
+ *   VIR_DOMAIN_AGENT_RESPONSE_TIMEOUT_BLOCK(-2): meaning to block forever
  *  waiting for a result.
- *   VIR_DOMAIN_AGENT_COMMAND_TIMEOUT_DEFAULT(-1): use default timeout value.
- *   VIR_DOMAIN_AGENT_COMMAND_TIMEOUT_NOWAIT(0): does not wait.
+ *   VIR_DOMAIN_AGENT_RESPONSE_TIMEOUT_DEFAULT(-1): use default timeout value.
+ *   VIR_DOMAIN_AGENT_RESPONSE_TIMEOUT_NOWAIT(0): does not wait.
  *   positive value: wait for @timeout seconds
  *
  * Returns 0 on success, -1 on failure
-- 
2.25.0
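
As a usage illustration (not part of the patch), a minimal sketch using the
real constants together with virDomainAgentSetResponseTimeout():

    #include <libvirt/libvirt.h>

    /* Sketch: cap the agent wait to 5 seconds for a few guest-agent based
     * calls, then restore the blocking default. */
    static int
    run_with_short_agent_timeout(virDomainPtr dom)
    {
        if (virDomainAgentSetResponseTimeout(dom, 5, 0) < 0)
            return -1;

        /* ... issue guest agent based calls here ... */

        return virDomainAgentSetResponseTimeout(dom,
                   VIR_DOMAIN_AGENT_RESPONSE_TIMEOUT_BLOCK, 0);
    }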



Re: [libvirt] [RFC] Add API to change qemu agent response timeout

2019-10-28 Thread Tomáš Golembiovský
> > > +qemuDomainObjExitAgent(vm, agent);
> > 
> > Also this API is inherently racy if you have two clients setting the
> > timeout and it will influence calls of a different client.

As pointed out in another part of the thread, this is not a problem;
rather, it's a fact about libvirt. You assume the management app knows
what it is doing. And if there are multiple management apps or users
meddling with libvirt, there's always the risk of stepping on each
other's toes.


> > 
> > IMO the only reasonable approach is to add new APIs which have a
> > 'timeout' parameter for any agent API which requires tunable timeout to
> > prevent any races and unexpected behaviour.
> > 
> > Other possibility may be to add a qemu config file option for this but
> > that is not really dynamic.
> 
> I guess the key question is whether we actually have a compelling
> use case to vary the timeout on a per-command basis.
> 
> If not, then we could do fine with a global config that is either
> recorded in the domain XML, or in the global QEMU config.
> 

What I strive for is to keep the allowed timeout to a minimum, especially
because we have to query all the machines sequentially due to the lack of
threading (either in libvirt or in oVirt). One machine taking too long
to respond can delay the queries for all the other machines.

> The possible causes of slowness are
> 
>  - Host is overloaded so the guest is not being scheduled
>  - Guest is overloaded so the agent is not being scheduled

These are generally not a problem for keeping the timeout to a minimum. We
can expect that the next time we issue the query all will go well. And
if the guest/host is overloaded long term, there's not much we can do
about it (waiting longer for the command to complete won't help).

>  - The agent has crash/deadlocked

This reminds me of a different issue, which is that there's no API to find
out whether the agent is connected or not. One either has to "give it a
shot" and handle the failure, or check the domain XML (which is
inefficient).


>  - The agent is maliciously not responding

This is an interesting point, but I would put it in the same group as
"guest overloaded" from our POV.

>  - The command genuinely takes a long time to perform its action

Now this is a genuine problem. Again from our POV it makes sense to keep
long-running commands separate from the group of quick commands and run
those in a separate loop with a higher timeout but less often.

> 
> The first 4 of those are fine with a global timeout either on guest or
> in the driver.
> 
> Only the last one really pushes for having this per-public API.
> 
> Looking commands, ones I think can take a long time are
> 
>  - FS trim - this can run for many minutes if there's alot to trim
>  - FS freeze - this might need to wait for apps to quiesce their I/O IIUC
>  - Guest exec - it can run any command
> 

We don't really care about the timeout here. At least not at the moment.


> The latter isn't used in libvirt, but can be run via the QGA passthrough
> API in libvirt-qemu.so
> 
> So sadly I think we genuinely do have a need for per-command timeouts,
> for at least some of the API calls. I don't think all of them need it
> though.
> 
> I thought we could likely add a qemu.conf setting for the global timeout,
> but then add a timeout to individual APIs in specific cases where we
> know they can genuinely take a very long time to execute.

I would be against any timeout that cannot be changed via API. This
would not give us the flexibility we require.
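
To make the polling concern above concrete, a rough sketch of the sequential
loop I have in mind, assuming a per-domain response timeout setter roughly
as discussed in this thread (spelled virDomainAgentSetResponseTimeout()
below; the name is an assumption here):

    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    /* Sketch: poll guest info for all running domains sequentially.  The
     * response timeout is kept short so one stuck agent cannot stall the
     * whole pass. */
    static void
    poll_all_guests(virConnectPtr conn)
    {
        virDomainPtr *doms = NULL;
        int ndoms;
        int i;

        ndoms = virConnectListAllDomains(conn, &doms,
                                         VIR_CONNECT_LIST_DOMAINS_ACTIVE);
        for (i = 0; i < ndoms; i++) {
            virTypedParameterPtr params = NULL;
            int nparams = 0;

            virDomainAgentSetResponseTimeout(doms[i], 2, 0);
            if (virDomainGetGuestInfo(doms[i], VIR_DOMAIN_GUEST_INFO_USERS,
                                      &params, &nparams, 0) == 0)
                virTypedParamsFree(params, nparams);
            virDomainFree(doms[i]);
        }
        free(doms);
    }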

Hope this helps,

Tomas


> 
> We must also ensure we NEVER block any regular libvirt APIs when a
> guest agent command is running.
> 
> Regards,
> Daniel
> -- 
> |: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o-https://fstop138.berrange.com :|
> |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
> 

-- 
Tomáš Golembiovský 


Re: [libvirt] [PATCH 0/7] add virDomainGetGuestInfo()

2019-08-08 Thread Tomáš Golembiovský
On Wed, Aug 07, 2019 at 10:41:10AM -0500, Jonathon Jongsma wrote:
> On Wed, 2019-08-07 at 12:39 +0200, Tomáš Golembiovský wrote:
> > On Thu, Aug 01, 2019 at 08:37:03AM -0500, Jonathon Jongsma wrote:
> > > This series adds several bits of guest information provided by a new API
> > > function virDomainGetGuestInfo(). There is an implementation for qemu using
> > > the guest agent. In particular, it adds information about logged-in users,
> > > guest OS, and timezone.
> > > 
> > > I had previously submitted a patch series with a dedicated API function for
> > > querying guest users that returned an array of structures describing the
> > > users. It was suggested that I convert this to a more futureproof design
> > > using typed parameters and also combine it with other bits of guest
> > > information similar to virDomainListGetStats(). In fact, there were several
> > > bugs that requested this information be added to the 'bulk stats' API, but
> > > Peter Krempa suggested adding a new API for all data queried from the guest
> > > agent (see https://bugzilla.redhat.com/show_bug.cgi?id=1705514). This is
> > > that API proposal.
> > 
> > The reason we suggested the 'bulk stats' approach is so that we can
> > retrieve information for all VMs in a single call. This is not just
> > about laziness on the side of the management app, it is much easier
> > for libvirt. We can either fetch the info for all VMs one-by-one
> > (which can take too long), or resort to massive threading. On the
> > other hand it seems that libvirt with its async jobs can handle this
> > in a single thread. I am not a libvirt expert so please correct me if
> > I am making wrong assumptions here. Also, libvirt has pretty good
> > knowledge of whether the guest agent is running or not. From the
> > application side we cannot get this information reliably and we need
> > to resort to a trial-and-error approach.
> 
> It's not entirely clear to me what you are suggesting here. Are you
> saying that this API is generally OK, but that you wish it would query
> all domains at once? Or are you suggesting some additional changes?

What I am saying is that being able to query a single VM is nice, but it's
just a start because we need to be able to query all the (running) VMs
at once. And having a libvirt API for that would be much more beneficial
(for the reasons specified) than querying VMs one-by-one in management apps.
Of course it's pretty fine to have separate (external) API calls for
a single VM/a list of VMs/all VMs, but how it behaves "under the hood" is
what needs to be considered (is a single VM a special case of all VMs or is
it the other way around?).
 

> 
> 
> > > It follows the stats API quite closely, and tries to return data in
> > > similar ways (for example, the "users.N.*" field name scheme was inspired
> > > by various stats fields).
> > > 
> > > I plan to follow this series up with a patch that adds fsinfo to the
> > > returned information, but wanted to get comments on this approach now.
> > 
> > Apart from the above I have two other concerns.
> > 
> > With how the API call is designed you cannot pick which commands to
> > run (you always get them all). With the number of included commands
> > growing in the future the time to complete the call will grow as well
> > and could just take too long. Also, if you run the call periodically
> > you don't always want to run all the commands. For example, you don't
> > need to check the os-info as often as the list of logged-in users.
> 
> 
> If I understand what you're saying, I think it's incorrect. The API
> that I proposed takes a 'types' parameter (similar to the 'stats'
> parameter in the bulk stats API) which lets you specify which guest
> information types you would like to query. So if you want, you can
> query only a subset of the guest info categories.

You are right, please ignore this comment.

> 
> > 
> > You cannot set the timeout on the guest agent commands. Instead you
> > block on them. As described in bug [1], a rogue guest can potentially
> > block your call indefinitely. If you plan to address this problem in
> > a more general manner later, that's probably fine too.
> 
> Yes, I think this issue is probably better addressed separately.

OK.

Tomas

> 
> > 
> > Tomas
> > 
> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1705426
> > 
> > 
> > 
> 

-- 
Tomáš Golembiovský 


Re: [libvirt] [PATCH 0/7] add virDomainGetGuestInfo()

2019-08-07 Thread Tomáš Golembiovský
On Thu, Aug 01, 2019 at 08:37:03AM -0500, Jonathon Jongsma wrote:
> This series adds several bits of guest information provided by a new API
> function virDomainGetGuestInfo(). There is an implementation for qemu using the
> guest agent. In particular, it adds information about logged-in users, guest
> OS, and timezone.
> 
> I had previously submitted a patch series with a dedicated API function for
> querying guest users that returned an array of structures describing the users.
> It was suggested that I convert this to a more futureproof design using typed
> parameters and also combine it with other bits of guest information similar to
> virDomainListGetStats(). In fact, there were several bugs that requested this
> information be added to the 'bulk stats' API, but Peter Krempa suggested adding
> a new API for all data queried from the guest agent (see
> https://bugzilla.redhat.com/show_bug.cgi?id=1705514). This is that API
> proposal.

The reason we suggested the 'bulk stats' approach is so that we can retrieve
information for all VMs in a single call. This is not just about laziness
on the side of the management app, it is much easier for libvirt. We can either
fetch the info for all VMs one-by-one (which can take too long), or
resort to massive threading. On the other hand it seems that libvirt
with its async jobs can handle this in a single thread. I am not a libvirt
expert so please correct me if I am making wrong assumptions here. Also,
libvirt has pretty good knowledge of whether the guest agent is running or
not. From the application side we cannot get this information reliably and
we need to resort to a trial-and-error approach.
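
For comparison, the kind of single-call pattern the existing bulk stats API
already gives us; guest agent data is exactly what is missing from it (a
sketch):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    /* Sketch: one call fetches balloon stats for every running domain;
     * the ask above is an equivalent single call covering guest-agent
     * sourced data. */
    static int
    dump_all_balloon_stats(virConnectPtr conn)
    {
        virDomainStatsRecordPtr *records = NULL;
        int nrecords;
        int i;

        nrecords = virConnectGetAllDomainStats(conn, VIR_DOMAIN_STATS_BALLOON,
                                               &records,
                                               VIR_CONNECT_GET_ALL_DOMAINS_STATS_RUNNING);
        if (nrecords < 0)
            return -1;

        for (i = 0; i < nrecords; i++)
            printf("%s: %d balloon parameters\n",
                   virDomainGetName(records[i]->dom), records[i]->nparams);

        virDomainStatsRecordListFree(records);
        return 0;
    }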
 

> It follows the stats API quite closely, and tries to return data in
> similar ways (for example, the "users.N.*" field name scheme was inspired by
> various stats fields).
> 
> I plan to follow this series up with a patch that adds fsinfo to the returned
> information, but wanted to get comments on this approach now.

Apart from the above I have two other concerns.

With how the API call is designed you cannot pick which commands to run
(you always get them all). With the number of included commands growing
in the future the time to complete the call will grow as well and could
just take too long. Also, if you run the call periodically you don't
always want to run all the commands. For example, you don't need to check
the os-info as often as the list of logged-in users.

You cannot set the timeout on the guest agent commands. Instead you
block on them. As described in bug [1], a rogue guest can potentially
block your call indefinitely. If you plan to address this problem in a
more general manner later, that's probably fine too.

    Tomas

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1705426



-- 
Tomáš Golembiovský 


Re: [libvirt] [PATCH v3 1/1] qemu: Add entry for balloon stat stat-disk-caches

2018-07-17 Thread Tomáš Golembiovský
Whoops, I didn't notice the other email... please ignore :)

Tomas

On Tue, 17 Jul 2018 12:02:35 +0200
Tomáš Golembiovský  wrote:

> ping


-- 
Tomáš Golembiovský 


Re: [libvirt] [PATCH v3 1/1] qemu: Add entry for balloon stat stat-disk-caches

2018-07-17 Thread Tomáš Golembiovský
Thanks

Tomas

On Mon, 16 Jul 2018 17:45:35 -0400
John Ferlan  wrote:

> On 07/02/2018 07:19 AM, Tomáš Golembiovský wrote:
> > QEMU commit bf1e7140e adds reporting of a new balloon statistic to QEMU
> > 2.12. The value represents the amount of memory that can be quickly
> > reclaimed without additional I/O. Let's add that too.
> > 
> > Signed-off-by: Tomáš Golembiovský 
> > ---
> >  include/libvirt/libvirt-domain.h | 9 -
> >  src/libvirt-domain.c | 3 +++
> >  src/qemu/qemu_driver.c   | 1 +
> >  src/qemu/qemu_monitor_json.c | 2 ++
> >  tools/virsh-domain-monitor.c | 2 ++
> >  tools/virsh.pod  | 5 +
> >  6 files changed, 21 insertions(+), 1 deletion(-)
> >   
> 
> Reviewed-by: John Ferlan 
> (and pushed)
> 
> John
> 


-- 
Tomáš Golembiovský 


Re: [libvirt] [PATCH v3 1/1] qemu: Add entry for balloon stat stat-disk-caches

2018-07-17 Thread Tomáš Golembiovský
ping



[libvirt] [PATCH v3 1/1] qemu: Add entry for balloon stat stat-disk-caches

2018-07-02 Thread Tomáš Golembiovský
QEMU commit bf1e7140e adds reporting of a new balloon statistic to QEMU
2.12. The value represents the amount of memory that can be quickly
reclaimed without additional I/O. Let's add that too.

Signed-off-by: Tomáš Golembiovský 
---
 include/libvirt/libvirt-domain.h | 9 -
 src/libvirt-domain.c | 3 +++
 src/qemu/qemu_driver.c   | 1 +
 src/qemu/qemu_monitor_json.c | 2 ++
 tools/virsh-domain-monitor.c | 2 ++
 tools/virsh.pod  | 5 +
 6 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index 796f2e1408..fdd2d6b8ea 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -628,11 +628,18 @@ typedef enum {
 /* Timestamp of the last update of statistics, in seconds. */
 VIR_DOMAIN_MEMORY_STAT_LAST_UPDATE = 9,
 
+/*
+ * The amount of memory, that can be quickly reclaimed without
+ * additional I/O (in kB). Typically these pages are used for caching files
+ * from disk.
+ */
+VIR_DOMAIN_MEMORY_STAT_DISK_CACHES = 10,
+
 /*
  * The number of statistics supported by this version of the interface.
  * To add new statistics, add them to the enum and increase this value.
  */
-VIR_DOMAIN_MEMORY_STAT_NR  = 10,
+VIR_DOMAIN_MEMORY_STAT_NR  = 11,
 
 # ifdef VIR_ENUM_SENTINELS
 VIR_DOMAIN_MEMORY_STAT_LAST = VIR_DOMAIN_MEMORY_STAT_NR
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index c71f2e6877..ef39361c95 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -5732,6 +5732,9 @@ virDomainGetInterfaceParameters(virDomainPtr domain,
  * Current balloon value (in kb).
  * VIR_DOMAIN_MEMORY_STAT_LAST_UPDATE
  * Timestamp of the last statistic
+ * VIR_DOMAIN_MEMORY_STAT_DISK_CACHES
+ * Memory that can be reclaimed without additional I/O, typically disk
+ * caches (in kb).
  *
  * Returns: The number of stats provided or -1 in case of failure.
  */
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 825b2b27e6..f88fb44373 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -19795,6 +19795,7 @@ qemuDomainGetStatsBalloon(virQEMUDriverPtr driver,
 STORE_MEM_RECORD(RSS, "rss")
 STORE_MEM_RECORD(LAST_UPDATE, "last-update")
 STORE_MEM_RECORD(USABLE, "usable")
+STORE_MEM_RECORD(DISK_CACHES, "disk_caches")
 }
 
 #undef STORE_MEM_RECORD
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 3e90279b71..9d161fe6f4 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -2071,6 +2071,8 @@ int qemuMonitorJSONGetMemoryStats(qemuMonitorPtr mon,
   VIR_DOMAIN_MEMORY_STAT_USABLE, 1024);
 GET_BALLOON_STATS(data, "last-update",
   VIR_DOMAIN_MEMORY_STAT_LAST_UPDATE, 1);
+GET_BALLOON_STATS(statsdata, "stat-disk-caches",
+  VIR_DOMAIN_MEMORY_STAT_DISK_CACHES, 1024);
 ret = got;
  cleanup:
 virJSONValueFree(cmd);
diff --git a/tools/virsh-domain-monitor.c b/tools/virsh-domain-monitor.c
index 87660ee674..b9b4f9739b 100644
--- a/tools/virsh-domain-monitor.c
+++ b/tools/virsh-domain-monitor.c
@@ -364,6 +364,8 @@ cmdDomMemStat(vshControl *ctl, const vshCmd *cmd)
 vshPrint(ctl, "rss %llu\n", stats[i].val);
 if (stats[i].tag == VIR_DOMAIN_MEMORY_STAT_LAST_UPDATE)
 vshPrint(ctl, "last_update %llu\n", stats[i].val);
+if (stats[i].tag == VIR_DOMAIN_MEMORY_STAT_DISK_CACHES)
+vshPrint(ctl, "disk_caches %llu\n", stats[i].val);
 }
 
 ret = true;
diff --git a/tools/virsh.pod b/tools/virsh.pod
index dc100db9f3..50799cf588 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -924,6 +924,8 @@ B:
   usable- The amount of memory which can be reclaimed by balloon
 without causing host swapping (in KiB)
   last-update   - Timestamp of the last update of statistics (in seconds)
+  disk_caches   - The amount of memory that can be reclaimed without
+additional I/O, typically disk caches (in KiB)
 
 For QEMU/KVM with a memory balloon, setting the optional I<--period> to a
 value larger than 0 in seconds will allow the balloon driver to return
@@ -1030,6 +1032,9 @@ I<--balloon> returns:
 balloon without causing host swapping (in KiB)
  "balloon.last-update" - timestamp of the last update of statistics
  (in seconds)
+ "balloon.disk_caches " - the amount of memory that can be reclaimed
+  without additional I/O, typically disk
+  caches (in KiB)
 
 I<--vcpu> returns:
 
-- 
2.17.1
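
For completeness, a minimal sketch of reading the new statistic from a
client once this is in (needs QEMU >= 2.12 and a balloon device with stats
polling enabled):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    /* Sketch: print the disk_caches value for one domain, if reported. */
    static int
    print_disk_caches(virDomainPtr dom)
    {
        virDomainMemoryStatStruct stats[VIR_DOMAIN_MEMORY_STAT_NR];
        int n = virDomainMemoryStats(dom, stats, VIR_DOMAIN_MEMORY_STAT_NR, 0);
        int i;

        if (n < 0)
            return -1;

        for (i = 0; i < n; i++)
            if (stats[i].tag == VIR_DOMAIN_MEMORY_STAT_DISK_CACHES)
                printf("disk_caches: %llu KiB\n", stats[i].val);
        return 0;
    }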


[libvirt] [PATCH v3 0/1] qemu: Add entry for balloon stat stat-disk-caches

2018-07-02 Thread Tomáš Golembiovský
v3: changes suggested by John Ferlan
- updated commit message
- fixed units (bytes -> kB) in one comment
- NOTE: Various spellings (kb/kB/KiB) are used interchangeably throughout libvirt
  with the same meaning. I tried to keep the spelling consistent with the
  context around the place where the change occurred.
- updated qemuDomainGetStatsBalloon()
- moved the entry in virsh output to last position
- updated virsh.pod

v2:
- moved item to last position in array
- documented item also in libvirt-domain.c
- added item to virsh listing

Tomáš Golembiovský (1):
  qemu: Add entry for balloon stat stat-disk-caches

 include/libvirt/libvirt-domain.h | 9 -
 src/libvirt-domain.c | 3 +++
 src/qemu/qemu_driver.c   | 1 +
 src/qemu/qemu_monitor_json.c | 2 ++
 tools/virsh-domain-monitor.c | 2 ++
 tools/virsh.pod  | 5 +
 6 files changed, 21 insertions(+), 1 deletion(-)

-- 
2.17.1


[libvirt] [PATCH v2 1/1] qemu: add entry for balloon stat stat-disk-caches

2018-06-08 Thread Tomáš Golembiovský
Signed-off-by: Tomáš Golembiovský 
---
 include/libvirt/libvirt-domain.h | 9 -
 src/libvirt-domain.c | 3 +++
 src/qemu/qemu_monitor_json.c | 2 ++
 tools/virsh-domain-monitor.c | 2 ++
 4 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index da773b76cb..b96c018a90 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -628,11 +628,18 @@ typedef enum {
 /* Timestamp of the last update of statistics, in seconds. */
 VIR_DOMAIN_MEMORY_STAT_LAST_UPDATE = 9,
 
+/*
+ * The amount of memory, in bytes, that can be quickly reclaimed without
+ * additional I/O. Typically these pages are used for caching files from
+ * disk.
+ */
+VIR_DOMAIN_MEMORY_STAT_DISK_CACHES = 10,
+
 /*
  * The number of statistics supported by this version of the interface.
  * To add new statistics, add them to the enum and increase this value.
  */
-VIR_DOMAIN_MEMORY_STAT_NR  = 10,
+VIR_DOMAIN_MEMORY_STAT_NR  = 11,
 
 # ifdef VIR_ENUM_SENTINELS
 VIR_DOMAIN_MEMORY_STAT_LAST = VIR_DOMAIN_MEMORY_STAT_NR
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index d44b553c74..ca22d13568 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -5732,6 +5732,9 @@ virDomainGetInterfaceParameters(virDomainPtr domain,
  * Current balloon value (in kb).
  * VIR_DOMAIN_MEMORY_STAT_LAST_UPDATE
  * Timestamp of the last statistic
+ * VIR_DOMAIN_MEMORY_STAT_DISK_CACHES
+ * Memory that can be reclaimed without additional I/O, typically disk
+ * caches (in kb).
  *
  * Returns: The number of stats provided or -1 in case of failure.
  */
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index e8a46d2ec3..d62d979b5a 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -2071,6 +2071,8 @@ int qemuMonitorJSONGetMemoryStats(qemuMonitorPtr mon,
   VIR_DOMAIN_MEMORY_STAT_USABLE, 1024);
 GET_BALLOON_STATS(data, "last-update",
   VIR_DOMAIN_MEMORY_STAT_LAST_UPDATE, 1);
+GET_BALLOON_STATS(statsdata, "stat-disk-caches",
+  VIR_DOMAIN_MEMORY_STAT_DISK_CACHES, 1024);
 ret = got;
  cleanup:
 virJSONValueFree(cmd);
diff --git a/tools/virsh-domain-monitor.c b/tools/virsh-domain-monitor.c
index 8cbb3db37c..bad882e87c 100644
--- a/tools/virsh-domain-monitor.c
+++ b/tools/virsh-domain-monitor.c
@@ -362,6 +362,8 @@ cmdDomMemStat(vshControl *ctl, const vshCmd *cmd)
 vshPrint(ctl, "actual %llu\n", stats[i].val);
 if (stats[i].tag == VIR_DOMAIN_MEMORY_STAT_RSS)
 vshPrint(ctl, "rss %llu\n", stats[i].val);
+if (stats[i].tag == VIR_DOMAIN_MEMORY_STAT_DISK_CACHES)
+vshPrint(ctl, "disk_caches %llu\n", stats[i].val);
 if (stats[i].tag == VIR_DOMAIN_MEMORY_STAT_LAST_UPDATE)
 vshPrint(ctl, "last_update %llu\n", stats[i].val);
 }
-- 
2.17.0


[libvirt] [PATCH v2 0/1] qemu: add entry for balloon stat stat-disk-caches

2018-06-08 Thread Tomáš Golembiovský
v2:
- moved item to last position in array
- documented item also in libvirt-domain.c
- added item to virsh listing

Tomáš Golembiovský (1):
  qemu: add entry for balloon stat stat-disk-caches

 include/libvirt/libvirt-domain.h | 9 -
 src/libvirt-domain.c | 3 +++
 src/qemu/qemu_monitor_json.c | 2 ++
 tools/virsh-domain-monitor.c | 2 ++
 4 files changed, 15 insertions(+), 1 deletion(-)

-- 
2.17.0


[libvirt] [PATCH] qemu: add entry for balloon stat stat-disk-caches

2018-06-05 Thread Tomáš Golembiovský
Signed-off-by: Tomáš Golembiovský 
---
 include/libvirt/libvirt-domain.h | 9 -
 src/qemu/qemu_monitor_json.c | 2 ++
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index da773b76cb..b96c018a90 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -628,11 +628,18 @@ typedef enum {
 /* Timestamp of the last update of statistics, in seconds. */
 VIR_DOMAIN_MEMORY_STAT_LAST_UPDATE = 9,
 
+/*
+ * The amount of memory, in bytes, that can be quickly reclaimed without
+ * additional I/O. Typically these pages are used for caching files from
+ * disk.
+ */
+VIR_DOMAIN_MEMORY_STAT_DISK_CACHES = 10,
+
 /*
  * The number of statistics supported by this version of the interface.
  * To add new statistics, add them to the enum and increase this value.
  */
-VIR_DOMAIN_MEMORY_STAT_NR  = 10,
+VIR_DOMAIN_MEMORY_STAT_NR  = 11,
 
 # ifdef VIR_ENUM_SENTINELS
 VIR_DOMAIN_MEMORY_STAT_LAST = VIR_DOMAIN_MEMORY_STAT_NR
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 42d7b9c5e9..b0a65d8af9 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -2069,6 +2069,8 @@ int qemuMonitorJSONGetMemoryStats(qemuMonitorPtr mon,
   VIR_DOMAIN_MEMORY_STAT_AVAILABLE, 1024);
 GET_BALLOON_STATS(statsdata, "stat-available-memory",
   VIR_DOMAIN_MEMORY_STAT_USABLE, 1024);
+GET_BALLOON_STATS(statsdata, "stat-disk-caches",
+  VIR_DOMAIN_MEMORY_STAT_DISK_CACHES, 1024);
 GET_BALLOON_STATS(data, "last-update",
   VIR_DOMAIN_MEMORY_STAT_LAST_UPDATE, 1);
 ret = got;
-- 
2.17.0


[libvirt] [PATCH python] include usable memory in virDomainMemoryStats

2017-08-01 Thread Tomáš Golembiovský
Signed-off-by: Tomáš Golembiovský <tgole...@redhat.com>
---
 libvirt-override.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/libvirt-override.c b/libvirt-override.c
index 0abfc37..832e05c 100644
--- a/libvirt-override.c
+++ b/libvirt-override.c
@@ -398,6 +398,9 @@ libvirt_virDomainMemoryStats(PyObject *self ATTRIBUTE_UNUSED,
 case VIR_DOMAIN_MEMORY_STAT_RSS:
 key = libvirt_constcharPtrWrap("rss");
 break;
+case VIR_DOMAIN_MEMORY_STAT_USABLE:
+key = libvirt_constcharPtrWrap("usable");
+break;
 default:
 continue;
 }
-- 
2.13.3


Re: [libvirt] [PATCH 0/2] docs: Fix enum documentation

2017-07-23 Thread Tomáš Golembiovský
On Sat, 22 Jul 2017 11:04:32 +0200
Michal Privoznik <mpriv...@redhat.com> wrote:

> So this patch sent to the list got me roll up my sleeves and get working:
> 
> https://www.redhat.com/archives/libvir-list/2017-July/msg00835.html
> 
> It wasn't that bad after all.
> 

Thanks for giving it a try!

There is slight progress, if we can call it that, but we're not there
yet. Looking at the generated docs for virDomainMemoryStatTags, there is
an issue, although a different one from the original.

The comment for VIR_DOMAIN_MEMORY_STAT_SWAP_IN is now picked up
correctly. But VIR_DOMAIN_MEMORY_STAT_MINOR_FAULT now has the comment
that belongs to VIR_DOMAIN_MEMORY_STAT_UNUSED.

Tomas

> Michal Privoznik (2):
>   apibuild.py: Handle enum comments properly
>   docs: Span cells if there's not doc text for enum val
> 
>  docs/apibuild.py |  8 +++-
>  docs/newapi.xsl  | 25 +++--
>  2 files changed, 22 insertions(+), 11 deletions(-)
> 
> -- 
> 2.13.0
> 


-- 
Tomáš Golembiovský <tgole...@redhat.com>


Re: [libvirt] [PATCH] docs: fix documentation of enum constants

2017-07-21 Thread Tomáš Golembiovský
On Fri, 21 Jul 2017 13:26:14 +0200
Michal Privoznik <mpriv...@redhat.com> wrote:

> On 07/21/2017 01:17 PM, Tomáš Golembiovský wrote:
> > On Fri, 21 Jul 2017 12:58:55 +0200
> > Michal Privoznik <mpriv...@redhat.com> wrote:
> >   
> >> On 07/21/2017 12:25 PM, Tomáš Golembiovský wrote:  
> >>> On Fri, 21 Jul 2017 10:12:38 +0200
> >>> Michal Privoznik <mpriv...@redhat.com> wrote:
> >>> 
> >>>> On 07/20/2017 08:21 PM, Tomáš Golembiovský wrote:
> >>>>> The documentation string has to follow the definition of a constant in
> >>>>> the enum. Otherwise, the HTML documentation will be generated
> >>>>> incorrectly.
> >>>>>
> >>>>> Signed-off-by: Tomáš Golembiovský <tgole...@redhat.com>
> >>>>> ---
> >>>>>  include/libvirt/libvirt-domain.h | 62 
> >>>>> 
> >>>>>  1 file changed, 31 insertions(+), 31 deletions(-)
> >>>>>
> >>>>> diff --git a/include/libvirt/libvirt-domain.h 
> >>>>> b/include/libvirt/libvirt-domain.h
> >>>>> index 45f939a8c..2f3162d0f 100644
> >>>>> --- a/include/libvirt/libvirt-domain.h
> >>>>> +++ b/include/libvirt/libvirt-domain.h
> >>>>> @@ -583,56 +583,56 @@ typedef virDomainInterfaceStatsStruct 
> >>>>> *virDomainInterfaceStatsPtr;
> >>>>>   * Memory Statistics Tags:
> >>>>>   */
> >>>>>  typedef enum {
> >>>>> -/* The total amount of data read from swap space (in kB). */
> >>>>>  VIR_DOMAIN_MEMORY_STAT_SWAP_IN = 0,
> >>>>> -/* The total amount of memory written out to swap space (in kB). */
> >>>>> +/* The total amount of data read from swap space (in kB). */
> >>>>>  VIR_DOMAIN_MEMORY_STAT_SWAP_OUT= 1,
> >>>>> +/* The total amount of memory written out to swap space (in kB). 
> >>>>> */  
> >>>>
> >>>> While this fixes generated HTML, it messes up the header file. So if
> >>>> somebody is looking directly into header file they might get confused.
> >>>> The problem is in our doc generator.
> >>>
> >>> I agree that it's not ideal solution. But since it's already used in the
> >>> header, e.g. in virDomainBlockJobType and
> >>> virDomainEventShutdownDetailType, 
> >>
> >> This one actually looks okay. Did you perhaps mean
> >> virConnectDomainEventDiskChangeReason?  
> > 
> > No, I really meant virDomainEventShutdownDetailType.  
> 
> Are we looking at the same code? Here's what mine looks like:

Damn! We're not. I've been looking at my patched version and
virDomainEventShutdownDetailType was changed by me. Sorry for the confusion.

Tomas

> 
> typedef enum {
> /* Guest finished shutdown sequence */
> VIR_DOMAIN_EVENT_SHUTDOWN_FINISHED = 0,
> 
> /* Domain finished shutting down after request from the guest itself
>  * (e.g. hardware-specific action) */
> VIR_DOMAIN_EVENT_SHUTDOWN_GUEST = 1,
> 
> /* Domain finished shutting down after request from the host (e.g. killed 
> by
>  * a signal) */
> VIR_DOMAIN_EVENT_SHUTDOWN_HOST = 2,
> 
> # ifdef VIR_ENUM_SENTINELS
> VIR_DOMAIN_EVENT_SHUTDOWN_LAST
> # endif
> } virDomainEventShutdownDetailType;
> 
> 
> I see nothing wrong about it. Do you?
> 
> Michal


-- 
Tomáš Golembiovský <tgole...@redhat.com>


Re: [libvirt] [PATCH] docs: fix documentation of enum constants

2017-07-21 Thread Tomáš Golembiovský
On Fri, 21 Jul 2017 12:58:55 +0200
Michal Privoznik <mpriv...@redhat.com> wrote:

> On 07/21/2017 12:25 PM, Tomáš Golembiovský wrote:
> > On Fri, 21 Jul 2017 10:12:38 +0200
> > Michal Privoznik <mpriv...@redhat.com> wrote:
> >   
> >> On 07/20/2017 08:21 PM, Tomáš Golembiovský wrote:  
> >>> The documentation string has to follow the definition of a constant in
> >>> the enum. Otherwise, the HTML documentation will be generated
> >>> incorrectly.
> >>>
> >>> Signed-off-by: Tomáš Golembiovský <tgole...@redhat.com>
> >>> ---
> >>>  include/libvirt/libvirt-domain.h | 62 
> >>> 
> >>>  1 file changed, 31 insertions(+), 31 deletions(-)
> >>>
> >>> diff --git a/include/libvirt/libvirt-domain.h 
> >>> b/include/libvirt/libvirt-domain.h
> >>> index 45f939a8c..2f3162d0f 100644
> >>> --- a/include/libvirt/libvirt-domain.h
> >>> +++ b/include/libvirt/libvirt-domain.h
> >>> @@ -583,56 +583,56 @@ typedef virDomainInterfaceStatsStruct 
> >>> *virDomainInterfaceStatsPtr;
> >>>   * Memory Statistics Tags:
> >>>   */
> >>>  typedef enum {
> >>> -/* The total amount of data read from swap space (in kB). */
> >>>  VIR_DOMAIN_MEMORY_STAT_SWAP_IN = 0,
> >>> -/* The total amount of memory written out to swap space (in kB). */
> >>> +/* The total amount of data read from swap space (in kB). */
> >>>  VIR_DOMAIN_MEMORY_STAT_SWAP_OUT= 1,
> >>> +/* The total amount of memory written out to swap space (in kB). */  
> >>>   
> >>
> >> While this fixes generated HTML, it messes up the header file. So if
> >> somebody is looking directly into header file they might get confused.
> >> The problem is in our doc generator.  
> > 
> > I agree that it's not ideal solution. But since it's already used in the
> > header, e.g. in virDomainBlockJobType and
> > virDomainEventShutdownDetailType,   
> 
> This one actually looks okay. Did you perhaps mean
> virConnectDomainEventDiskChangeReason?

No, I really meant virDomainEventShutdownDetailType. But you're right
about virConnectDomainEventDiskChangeReason too.

I didn't look at other headers, but as far as libvirt-domain.h is
concerned those three seem to be all. There are also several (5?)
misplaced comments for the sentinels if you care about those too.


> > I assumed it's acceptable. And the
> > overall readability is not that bad because the constant+doc pairs are
> > separated with newline from one another.  
> 
> Nope. It's not. I've sent a patch that fixes those two places (I've went
> through all of our public headers and found just those two though):
> 
> https://www.redhat.com/archives/libvir-list/2017-July/msg00871.html
> 
> > 
> > 
> >   
> >> I recall this being discussed
> >> somewhere recently (probably on the list?). The proper fix IMO is to fix
> >> the generator so that it accepts both:  
> > 
> > That would be the best solution, but I'm too scared to look at the
> > generator. That would be a job for somebody else.  
> 
> Yeah. Our generator is not that great. I wish that we'd switch to
> something already out there so that we don't have to maintain our own
> generator.
> 
> Michal


-- 
Tomáš Golembiovský <tgole...@redhat.com>


Re: [libvirt] [PATCH] docs: fix documentation of enum constants

2017-07-21 Thread Tomáš Golembiovský
On Fri, 21 Jul 2017 10:12:38 +0200
Michal Privoznik <mpriv...@redhat.com> wrote:

> On 07/20/2017 08:21 PM, Tomáš Golembiovský wrote:
> > The documentation string has to follow the definition of a constant in
> > the enum. Otherwise, the HTML documentation will be generated
> > incorrectly.
> > 
> > Signed-off-by: Tomáš Golembiovský <tgole...@redhat.com>
> > ---
> >  include/libvirt/libvirt-domain.h | 62 
> > 
> >  1 file changed, 31 insertions(+), 31 deletions(-)
> > 
> > diff --git a/include/libvirt/libvirt-domain.h 
> > b/include/libvirt/libvirt-domain.h
> > index 45f939a8c..2f3162d0f 100644
> > --- a/include/libvirt/libvirt-domain.h
> > +++ b/include/libvirt/libvirt-domain.h
> > @@ -583,56 +583,56 @@ typedef virDomainInterfaceStatsStruct 
> > *virDomainInterfaceStatsPtr;
> >   * Memory Statistics Tags:
> >   */
> >  typedef enum {
> > -/* The total amount of data read from swap space (in kB). */
> >  VIR_DOMAIN_MEMORY_STAT_SWAP_IN = 0,
> > -/* The total amount of memory written out to swap space (in kB). */
> > +/* The total amount of data read from swap space (in kB). */
> >  VIR_DOMAIN_MEMORY_STAT_SWAP_OUT= 1,
> > +/* The total amount of memory written out to swap space (in kB). */  
> 
> While this fixes generated HTML, it messes up the header file. So if
> somebody is looking directly into header file they might get confused.
> The problem is in our doc generator.

I agree that it's not an ideal solution. But since it's already used in the
header, e.g. in virDomainBlockJobType and
virDomainEventShutdownDetailType, I assumed it's acceptable. And the
overall readability is not that bad because the constant+doc pairs are
separated from one another by a newline.



> I recall this being discussed
> somewhere recently (probably on the list?). The proper fix IMO is to fix
> the generator so that it accepts both:

That would be the best solution, but I'm too scared to look at the
generator. That would be a job for somebody else.

Tomas

> 
> enum {
>   /* Some very long description - therefore it's before the value. */
>   VIR_ENUM_A_VAL = 0,
> } virEnumA;
> 
> enum {
>   VIR_ENUM_B_VAL = 0, /* Short description */
> } virEnumB;
> 
> 
> Michal


-- 
Tomáš Golembiovský <tgole...@redhat.com>


[libvirt] [PATCH] docs: fix documentation of enum constants

2017-07-20 Thread Tomáš Golembiovský
The documentation string has to follow the definition of a constant in
the enum. Otherwise, the HTML documentation will be generated
incorrectly.

Signed-off-by: Tomáš Golembiovský <tgole...@redhat.com>
---
 include/libvirt/libvirt-domain.h | 62 
 1 file changed, 31 insertions(+), 31 deletions(-)

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index 45f939a8c..2f3162d0f 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -583,56 +583,56 @@ typedef virDomainInterfaceStatsStruct 
*virDomainInterfaceStatsPtr;
  * Memory Statistics Tags:
  */
 typedef enum {
-/* The total amount of data read from swap space (in kB). */
 VIR_DOMAIN_MEMORY_STAT_SWAP_IN = 0,
-/* The total amount of memory written out to swap space (in kB). */
+/* The total amount of data read from swap space (in kB). */
 VIR_DOMAIN_MEMORY_STAT_SWAP_OUT= 1,
+/* The total amount of memory written out to swap space (in kB). */
 
+VIR_DOMAIN_MEMORY_STAT_MAJOR_FAULT = 2,
+VIR_DOMAIN_MEMORY_STAT_MINOR_FAULT = 3,
 /*
  * Page faults occur when a process makes a valid access to virtual memory
  * that is not available.  When servicing the page fault, if disk IO is
  * required, it is considered a major fault.  If not, it is a minor fault.
  * These are expressed as the number of faults that have occurred.
  */
-VIR_DOMAIN_MEMORY_STAT_MAJOR_FAULT = 2,
-VIR_DOMAIN_MEMORY_STAT_MINOR_FAULT = 3,
 
+VIR_DOMAIN_MEMORY_STAT_UNUSED  = 4,
 /*
  * The amount of memory left completely unused by the system.  Memory that
  * is available but used for reclaimable caches should NOT be reported as
  * free.  This value is expressed in kB.
  */
-VIR_DOMAIN_MEMORY_STAT_UNUSED  = 4,
 
+VIR_DOMAIN_MEMORY_STAT_AVAILABLE   = 5,
 /*
  * The total amount of usable memory as seen by the domain.  This value
  * may be less than the amount of memory assigned to the domain if a
  * balloon driver is in use or if the guest OS does not initialize all
  * assigned pages.  This value is expressed in kB.
  */
-VIR_DOMAIN_MEMORY_STAT_AVAILABLE   = 5,
 
-/* Current balloon value (in KB). */
 VIR_DOMAIN_MEMORY_STAT_ACTUAL_BALLOON  = 6,
+/* Current balloon value (in KB). */
 
+VIR_DOMAIN_MEMORY_STAT_RSS = 7,
 /* Resident Set Size of the process running the domain. This value
  * is in kB */
-VIR_DOMAIN_MEMORY_STAT_RSS = 7,
 
+VIR_DOMAIN_MEMORY_STAT_USABLE  = 8,
 /*
  * How much the balloon can be inflated without pushing the guest system
  * to swap, corresponds to 'Available' in /proc/meminfo
  */
-VIR_DOMAIN_MEMORY_STAT_USABLE  = 8,
 
-/* Timestamp of the last update of statistics, in seconds. */
 VIR_DOMAIN_MEMORY_STAT_LAST_UPDATE = 9,
+/* Timestamp of the last update of statistics, in seconds. */
 
+VIR_DOMAIN_MEMORY_STAT_NR  = 10,
 /*
  * The number of statistics supported by this version of the interface.
  * To add new statistics, add them to the enum and increase this value.
  */
-VIR_DOMAIN_MEMORY_STAT_NR  = 10,
 
 # ifdef VIR_ENUM_SENTINELS
 VIR_DOMAIN_MEMORY_STAT_LAST = VIR_DOMAIN_MEMORY_STAT_NR
@@ -683,22 +683,23 @@ typedef enum {
 
 /* Domain migration flags. */
 typedef enum {
+VIR_MIGRATE_LIVE  = (1 << 0),
 /* Do not pause the domain during migration. The domain's memory will
  * be transferred to the destination host while the domain is running.
  * The migration may never converge if the domain is changing its memory
  * faster then it can be transferred. The domain can be manually paused
  * anytime during migration using virDomainSuspend.
  */
-VIR_MIGRATE_LIVE  = (1 << 0),
 
+VIR_MIGRATE_PEER2PEER = (1 << 1),
 /* Tell the source libvirtd to connect directly to the destination host.
  * Without this flag the client (e.g., virsh) connects to both hosts and
  * controls the migration process. In peer-to-peer mode, the source
  * libvirtd controls the migration by calling the destination daemon
  * directly.
  */
-VIR_MIGRATE_PEER2PEER = (1 << 1),
 
+VIR_MIGRATE_TUNNELLED = (1 << 2),
 /* Tunnel migration data over libvirtd connection. Without this flag the
  * source hypervisor sends migration data directly to the destination
  * hypervisor. This flag can only be used when VIR_MIGRATE_PEER2PEER is
@@ -707,26 +708,26 @@ typedef enum {
  * Note the less-common spelling that we're stuck with:
  * VIR_MIGRATE_TUNNELLED should be VIR_MIGRATE_TUNNELED.
  */
-VIR_MIGRATE_TUNNELLED = (1 << 2),
 
+VIR_MIGRATE_PERSIST_DEST  = (1 <

Re: [libvirt] [PATCH 0/2] Add support for 'raw' block driver to Qemu JSON backend parsing code

2017-02-17 Thread Tomáš Golembiovský
Hi,

gentle reminder.

Thanks,

Tomas

On Mon, 13 Feb 2017 23:53:41 +0100
Tomáš Golembiovský <tgole...@redhat.com> wrote:

> The raw driver is not very interesting on its own but it can be layered on top
> of other block drivers. New parameters that allow using only part of the
> underlying file/device were recently added to Qemu. This is useful e.g. to
> directly access partitions inside a disk image or disks inside an archive 
> (like
> OVA).
> 
> This feature is utilised in OVA import in virt-v2v tool where we access the
> disks directly in OVA without needing to unpack the tar archive first. After
> implementing this in virt-v2v we noticed that it does not work when libguestfs
> uses libvirt as a backend because libvirt fails with error:
> 
> internal error: missing parser implementation for JSON backing volume driver 'raw'
> 
> Tomáš Golembiovský (2):
>   util: storage: split function for JSON backing volume parsing in two
>   util: storage: add JSON backing volume parser 'raw' block driver
> 
>  src/util/virstoragefile.c | 51 
> +--
>  tests/virstoragetest.c|  6 ++
>  2 files changed, 47 insertions(+), 10 deletions(-)
> 
> -- 
> 2.11.1
> 


-- 
Tomáš Golembiovský <tgole...@redhat.com>


Re: [libvirt] [PATCH 2/2] util: storage: add JSON backing volume parser 'raw' block driver

2017-02-15 Thread Tomáš Golembiovský
On Wed, 15 Feb 2017 10:55:24 +
"Richard W.M. Jones" <rjo...@redhat.com> wrote:

> On Tue, Feb 14, 2017 at 10:03:54PM +0100, Tomáš Golembiovský wrote:
> > Hi,
> > 
> > On Tue, 14 Feb 2017 15:58:45 +
> > "Richard W.M. Jones" <rjo...@redhat.com> wrote:
> >   
> > > The patches compile.
> > > 
> > > I looked at both commits and they at least superficially seem
> > > sensible.  I'm not intimately familiar enough with the original code
> > > to review this fully.
> > > 
> > > However I want to try to test this using libguestfs.  I believe the
> > > following test case should be sufficient:
> > > 
> > >   $ cd /var/tmp
> > >   $ truncate -s 1M backing.img
> > >   $ qemu-img create \
> > >   -b 'json:{"driver":"raw", "file":{"filename":"/var/tmp/backing.img"}}' \
> > 
> > The problem lies in the JSON here. Libvirt lacks the driver probing
> > mechanism QEMU has (which makes sense). That means one has to be
> > explicit about the drivers. Try with the following backing definition:
> > 
> > json:{"driver":"raw", "file":{ "driver":"file", "filename":"/var/tmp/backing.img"}}
> 
> OK, that works.  However it also works with the unpatched version of
> libvirt, so it's not proof that these patches fix any problem.

Ah, sorry. I didn't notice your JSON was bad from the start and I just
blindly extended it. The correct JSON should look like this:

json: {
    "file": {
        "driver": "raw",
        "file": {
            "driver": "file",
            "filename": "/var/tmp/backing.img"
        }
    }
}
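
With that definition, a rough sketch of the check the patches aim to make
pass (libvirt-internal API; the struct fields and helper names are
assumptions based on the current virstoragefile code):

    #include <string.h>
    #include "virstoragefile.h"

    /* Sketch: parsing the nested raw definition should resolve to the
     * plain file path of the underlying image. */
    static int
    check_raw_backing_parse(void)
    {
        const char *backing =
            "json:{\"file\": {\"driver\": \"raw\","
            " \"file\": {\"driver\": \"file\","
            " \"filename\": \"/var/tmp/backing.img\"}}}";
        virStorageSourcePtr src;
        int ret = -1;

        if (!(src = virStorageSourceNewFromBackingAbsolute(backing)))
            return -1;

        if (src->type == VIR_STORAGE_TYPE_FILE &&
            src->path && strcmp(src->path, "/var/tmp/backing.img") == 0)
            ret = 0;

        virStorageSourceFree(src);
        return ret;
    }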


Tomas

> 
> Rich.
> 
> -- 
> Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> virt-df lists disk usage of guests without needing to install any
> software inside the virtual machine.  Supports Linux and Windows.
> http://people.redhat.com/~rjones/virt-df/


-- 
Tomáš Golembiovský <tgole...@redhat.com>


Re: [libvirt] [PATCH 2/2] util: storage: add JSON backing volume parser 'raw' block driver

2017-02-14 Thread Tomáš Golembiovský
Hi,

On Tue, 14 Feb 2017 15:58:45 +
"Richard W.M. Jones" <rjo...@redhat.com> wrote:

> The patches compile.
> 
> I looked at both commits and they at least superficially seem
> sensible.  I'm not intimately familiar enough with the original code
> to review this fully.
> 
> However I want to try to test this using libguestfs.  I believe the
> following test case should be sufficient:
> 
>   $ cd /var/tmp
>   $ truncate -s 1M backing.img
>   $ qemu-img create \
>   -b 'json:{"driver":"raw", "file":{"filename":"/var/tmp/backing.img"}}' \

The problem lies in the JSON here. Libvirt lacks the driver probing
mechanism QEMU has (which makes sense). That means one has to be
explicit about the drivers. Try with the following backing definition:

json:{"driver":"raw", "file":{ "driver":"file", "filename":"/var/tmp/backing.img"}}

Tomas

>   -f qcow2 overlay.qcow2
>   $ guestfish -a /var/tmp/overlay.qcow2 run
>   libguestfs: error: could not create appliance through libvirt.
>   
>   Try running qemu directly without libvirt using this environment variable:
>   export LIBGUESTFS_BACKEND=direct
>   
>   Original error from libvirt: invalid argument: JSON backing volume 
> defintion '{"driver":"raw", "file":{"filename":"/var/tmp/backing.img"}}' 
> lacks driver name [code=8 int1=-1]
> 
> But with libvirt built with your patches:
> 
>   $ killall libvirtd
>   $ ../libvirt/run guestfish -a /var/tmp/overlay.qcow2 run
>   libguestfs: error: could not create appliance through libvirt.
>   
>   Try running qemu directly without libvirt using this environment variable:
>   export LIBGUESTFS_BACKEND=direct
>   
>   Original error from libvirt: invalid argument: JSON backing volume 
> defintion '{"driver":"raw","file":{"filename":"/var/tmp/backing.img"}}' lacks 
> driver name [code=8 int1=-1]
> 
> It could be that my test case is wrong in some way.  I enabled
> debugging and it does appear to be using the new version of libvirt,
> so I'm not sure what's up ...
> 
> Rich.
> 
> -- 
> Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> libguestfs lets you edit virtual machines.  Supports shell scripting,
> bindings from many languages.  http://libguestfs.org


-- 
Tomáš Golembiovský <tgole...@redhat.com>


[libvirt] [PATCH 1/2] util: storage: split function for JSON backing volume parsing in two

2017-02-13 Thread Tomáš Golembiovský
Split virStorageSourceParseBackingJSON into two functions so that the
core can be reused by other functions. The new function,
virStorageSourceParseBackingJSONInternal, accepts a virJSONValuePtr.

Signed-off-by: Tomáš Golembiovský <tgole...@redhat.com>
---
 src/util/virstoragefile.c | 35 +--
 1 file changed, 25 insertions(+), 10 deletions(-)

diff --git a/src/util/virstoragefile.c b/src/util/virstoragefile.c
index 3d4bda700..3698eeeda 100644
--- a/src/util/virstoragefile.c
+++ b/src/util/virstoragefile.c
@@ -3053,29 +3053,28 @@ virStorageSourceParseBackingJSONDeflatten(virJSONValuePtr json)
 
 
 static int
-virStorageSourceParseBackingJSON(virStorageSourcePtr src,
- const char *json)
+virStorageSourceParseBackingJSONInternal(virStorageSourcePtr src,
+ virJSONValuePtr json)
 {
-virJSONValuePtr root = NULL;
 virJSONValuePtr fixedroot = NULL;
 virJSONValuePtr file;
 const char *drvname;
 size_t i;
 int ret = -1;
 
-if (!(root = virJSONValueFromString(json)))
-return -1;
-
-if (!(file = virJSONValueObjectGetObject(root, "file"))) {
-if (!(fixedroot = virStorageSourceParseBackingJSONDeflatten(root)))
+if (!(file = virJSONValueObjectGetObject(json, "file"))) {
+if (!(fixedroot = virStorageSourceParseBackingJSONDeflatten(json)))
 goto cleanup;
 
 file = fixedroot;
 }
 
 if (!(drvname = virJSONValueObjectGetString(file, "driver"))) {
+char *str = virJSONValueToString(json, false);
 virReportError(VIR_ERR_INVALID_ARG, _("JSON backing volume defintion "
-  "'%s' lacks driver name"), json);
+  "'%s' lacks driver name"),
+NULLSTR(str));
+VIR_FREE(str);
 goto cleanup;
 }
 
@@ -3091,12 +3090,28 @@ virStorageSourceParseBackingJSON(virStorageSourcePtr src,
  "driver '%s'"), drvname);
 
  cleanup:
-virJSONValueFree(root);
 virJSONValueFree(fixedroot);
 return ret;
 }
 
 
+static int
+virStorageSourceParseBackingJSON(virStorageSourcePtr src,
+ const char *json)
+{
+virJSONValuePtr root = NULL;
+int ret = -1;
+
+if (!(root = virJSONValueFromString(json)))
+return -1;
+
+ret = virStorageSourceParseBackingJSONInternal(src, root);
+
+virJSONValueFree(root);
+return ret;
+}
+
+
 virStorageSourcePtr
 virStorageSourceNewFromBackingAbsolute(const char *path)
 {
-- 
2.11.1


[libvirt] [PATCH 2/2] util: storage: add JSON backing volume parser 'raw' block driver

2017-02-13 Thread Tomáš Golembiovský
The 'raw' block driver in Qemu is not directly interesting from
libvirt's perspective, but it can be layered on top of other block
drivers, which may be useful for the user.

The patch adds support for the 'raw' block driver. The driver is treated
as a simple pass-through, and the child driver in the JSON is queried for
the necessary information.

Signed-off-by: Tomáš Golembiovský <tgole...@redhat.com>
---
 src/util/virstoragefile.c | 16 
 tests/virstoragetest.c|  6 ++
 2 files changed, 22 insertions(+)

diff --git a/src/util/virstoragefile.c b/src/util/virstoragefile.c
index 3698eeeda..0447016bf 100644
--- a/src/util/virstoragefile.c
+++ b/src/util/virstoragefile.c
@@ -2648,6 +2648,11 @@ virStorageSourceParseBackingColon(virStorageSourcePtr src,
 
 
 static int
+virStorageSourceParseBackingJSONInternal(virStorageSourcePtr src,
+ virJSONValuePtr json);
+
+
+static int
 virStorageSourceParseBackingJSONPath(virStorageSourcePtr src,
  virJSONValuePtr json,
  int type)
@@ -2963,6 +2968,16 @@ virStorageSourceParseBackingJSONRBD(virStorageSourcePtr src,
 return -1;
 }
 
+static int
+virStorageSourceParseBackingJSONRaw(virStorageSourcePtr src,
+virJSONValuePtr json,
+int opaque ATTRIBUTE_UNUSED)
+{
+/* There are no interesting attributes in raw driver.
+ * Treat it as pass-through.
+ */
+return virStorageSourceParseBackingJSONInternal(src, json);
+}
 
 struct virStorageSourceJSONDriverParser {
 const char *drvname;
@@ -2985,6 +3000,7 @@ static const struct virStorageSourceJSONDriverParser jsonParsers[] = {
 {"sheepdog", virStorageSourceParseBackingJSONSheepdog, 0},
 {"ssh", virStorageSourceParseBackingJSONSSH, 0},
 {"rbd", virStorageSourceParseBackingJSONRBD, 0},
+{"raw", virStorageSourceParseBackingJSONRaw, 0},
 };
 
 
diff --git a/tests/virstoragetest.c b/tests/virstoragetest.c
index f766df115..1bf490c43 100644
--- a/tests/virstoragetest.c
+++ b/tests/virstoragetest.c
@@ -1492,6 +1492,12 @@ mymain(void)
"\n"
"  \n"
"\n");
+TEST_BACKING_PARSE("json:{ \"file\": { "
+"\"driver\": \"raw\","
+"\"file\": {"
+"\"driver\": \"file\","
+"\"filename\": \"/path/to/file\" } } }",
+   "\n");
 
  cleanup:
 /* Final cleanup */
-- 
2.11.1


[libvirt] [PATCH 0/2] Add support for 'raw' block driver to Qemu JSON backend parsing code

2017-02-13 Thread Tomáš Golembiovský
The raw driver is not very interesting on its own but it can be layered on top
of other block drivers. New parameters that allow using only part of the
underlying file/device were recently added to Qemu. This is useful e.g. to
directly access partitions inside a disk image or disks inside an archive (like
OVA).
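
For illustration, creating an overlay on top of such a backing definition
might look roughly like the sketch below; the offset/size option names and
the numbers here are only illustrative, not taken from a real OVA:

    # Hypothetical example: treat the region starting 10240 bytes into the
    # OVA (tar) file, 21474836480 bytes long, as a raw backing image.
    # Option names and values are illustrative only.
    qemu-img create \
        -b 'json:{"file": {"driver": "raw", "offset": 10240, "size": 21474836480, "file": {"driver": "file", "filename": "/var/tmp/appliance.ova"}}}' \
        -f qcow2 overlay.qcow2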

This feature is utilised in OVA import in the virt-v2v tool, where we access the
disks directly in the OVA without needing to unpack the tar archive first. After
implementing this in virt-v2v we noticed that it does not work when libguestfs
uses libvirt as a backend, because libvirt fails with the error:

    internal error: missing parser implementation for JSON backing volume driver 'raw'

Tomáš Golembiovský (2):
  util: storage: split function for JSON backing volume parsing in two
  util: storage: add JSON backing volume parser 'raw' block driver

 src/util/virstoragefile.c | 51 +--
 tests/virstoragetest.c|  6 ++
 2 files changed, 47 insertions(+), 10 deletions(-)

-- 
2.11.1


[libvirt] [PATCH] valgrind: add suppression for bash memory leak

2017-02-13 Thread Tomáš Golembiovský
Add a suppression for a memory leak in bash, observed with bash 4.4.011 on
Arch Linux.

Signed-off-by: Tomáš Golembiovský <tgole...@redhat.com>
---
 tests/.valgrind.supp | 9 +
 1 file changed, 9 insertions(+)

diff --git a/tests/.valgrind.supp b/tests/.valgrind.supp
index d4fef857b..d474d63e9 100644
--- a/tests/.valgrind.supp
+++ b/tests/.valgrind.supp
@@ -32,6 +32,15 @@
...
obj:*/bin/bash
 }
+{
+   bashMemoryLeak4
+   Memcheck:Leak
+   match-leak-kinds: definite
+   fun:malloc
+   fun:xmalloc
+   fun:set_default_locale
+   fun:main
+}
 #
 # Failure seen in /usr/lib64/ld-2.15.so
 #
-- 
2.11.1


Re: [libvirt] qemu-kvm blocked for more than 120 seconds when "libvirt-guests" is enabled

2017-01-02 Thread Tomáš Golembiovský
Hi,

this is just a guess, but from the screenshot I see that NFS is already
down. Are your VMs local, or do you use NFS storage or some other network
storage pool? That would be a reason for the qemu process to hang.
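
A quick way to check is something like the following (domain name taken from
your virsh output below; adjust as needed):

    # Show the guest's disks and the paths they map to; NFS or other network
    # paths here would explain the qemu process hanging once the share is gone.
    virsh domblklist vdicdb01 --details

    # List all storage pools, including inactive ones, with their types.
    virsh pool-list --all --details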

Best regards,

Tomas

On Mon, 2 Jan 2017 12:49:55 +0100
Oscar Segarra <oscar.sega...@gmail.com> wrote:

> Hi, anybody has experienced the same issue?
> 
> Thanks a lot!
> 
> El 31 dic. 2016 5:25 p. m., "Oscar Segarra" <oscar.sega...@gmail.com>
> escribió:
> 
> > Hi,
> >
> > I just have two virtual machines in my environment, I want them to
> > gracefully stop when host is powered off gracefully.
> >
> > Nevertheless, system hangs on shutdown:
> >
> > [root@vdicnode01 ~]# virsh list
> >  IdName   State
> > 
> >  1 vdicdb01   running
> >  2 vdicone01  running
> >
> > The configuration:
> >
> > /usr/libexec/libvirt-guests.sh
> > URIS=default
> > ON_BOOT=ignore
> > ON_SHUTDOWN=shutdown
> > SHUTDOWN_TIMEOUT=60
> > PARALLEL_SHUTDOWN=5
> > START_DELAY=0
> > BYPASS_CACHE=0
> > CONNECT_RETRIES=10
> > RETRIES_SLEEP=1
> > SYNC_TIME=0
> >
> > [root@vdicnode01 ~]# cat /etc/libvirt/qemu.conf
> > user  = "oneadmin"
> > group = "oneadmin"
> > dynamic_ownership = 0
> > spice_tls = 0
> >
> > With libvirt-guests service started:
> > [root@vdicnode01 ~]# service libvirt-guests status
> > Redirecting to /bin/systemctl status  libvirt-guests.service
> > ● libvirt-guests.service - Suspend Active Libvirt Guests
> >Loaded: loaded (/usr/lib/systemd/system/libvirt-guests.service;
> > disabled; vendor preset: disabled)
> >Active: active (exited) since Sat 2016-12-31 17:13:34 CET; 5s ago
> >  Docs: man:libvirtd(8)
> >http://libvirt.org
> >   Process: 6619 ExecStart=/usr/libexec/libvirt-guests.sh start
> > (code=exited, status=0/SUCCESS)
> >  Main PID: 6619 (code=exited, status=0/SUCCESS)
> >
> > Dec 31 17:13:34 vdicnode01.vdicube.com systemd[1]: Starting Suspend
> > Active Libvirt Guests...
> > Dec 31 17:13:34 vdicnode01.vdicube.com libvirt-guests.sh[6619]:
> > libvirt-guests is configured not to start any guests on boot
> > Dec 31 17:13:34 vdicnode01.vdicube.com systemd[1]: Started Suspend Active
> > Libvirt Guests.
> > [root@vdicnode01 ~]#
> >
> > If I stop locally libvirt-guests (as root) Looks work perfectly:
> >
> > [root@vdicnode01 ~]# virsh list
> >  IdName   State
> > 
> >
> > [root@vdicnode01 ~]#
> >  
> > --> Now I start again the virtual machines and the libvirt-guests service  
> > <--
> > [root@vdicnode01 ~]# service libvirtd restart
> > Redirecting to /bin/systemctl restart  libvirtd.service
> > [root@vdicnode01 ~]# service libvirt-guests start
> > Redirecting to /bin/systemctl start  libvirt-guests.service
> > [root@vdicnode01 ~]# virsh list
> >  IdName   State
> > 
> >  1 vdicone01  running
> >  2 vdicdb01   running
> >
> > [root@vdicnode01 ~]#
> >
> > But if I shut down the host it looks like it does not work properly (I attach screenshots
> > of the shutdown process).
> >
> > Any help will be welcome.
> >
> > Thanks a lot.
> >
> >  


-- 
Tomáš Golembiovský <tgole...@redhat.com>


[libvirt] [PATCH] esx: Fetch snapshot info directly for filtering

2016-07-11 Thread Tomáš Golembiovský
When fetching domains with virConnectListAllDomains() and filtering
by snapshot existence is requested, the ESX driver first lists all the
domains and then checks them one by one for snapshot existence. This
process takes an unnecessarily long time.

To significantly shorten the query, we can request the snapshot-related
info directly when querying the list of domains from VMware.

Signed-off-by: Tomáš Golembiovský <tgole...@redhat.com>
---
 src/esx/esx_driver.c | 22 +++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/src/esx/esx_driver.c b/src/esx/esx_driver.c
index eae015a..3d90b69 100644
--- a/src/esx/esx_driver.c
+++ b/src/esx/esx_driver.c
@@ -4924,6 +4924,7 @@ esxConnectListAllDomains(virConnectPtr conn,
 int count = 0;
 bool autostart;
 int state;
+esxVI_DynamicProperty *dynamicProperty = NULL;
 
 virCheckFlags(VIR_CONNECT_LIST_DOMAINS_FILTERS_ALL, -1);
 
@@ -4985,6 +4986,13 @@ esxConnectListAllDomains(virConnectPtr conn,
 }
 }
 
+if (MATCH(VIR_CONNECT_LIST_DOMAINS_FILTERS_SNAPSHOT)) {
+if (esxVI_String_AppendValueToList(&propertyNameList,
+   "snapshot.rootSnapshotList") < 0) {
+goto cleanup;
+}
+}
+
 if (esxVI_LookupVirtualMachineList(priv->primary, propertyNameList,
) < 0)
 goto cleanup;
@@ -5023,11 +5031,19 @@ esxConnectListAllDomains(virConnectPtr conn,
 
 /* filter by snapshot existence */
 if (MATCH(VIR_CONNECT_LIST_DOMAINS_FILTERS_SNAPSHOT)) {
+
 esxVI_VirtualMachineSnapshotTree_Free();
 
-if (esxVI_LookupRootSnapshotTreeList(priv->primary, uuid,
- ) < 0) {
-goto cleanup;
+for (dynamicProperty = virtualMachine->propSet; dynamicProperty;
+dynamicProperty = dynamicProperty->_next) {
+if (STREQ(dynamicProperty->name, "snapshot.rootSnapshotList")) {
+if (esxVI_VirtualMachineSnapshotTree_CastListFromAnyType
+(dynamicProperty->val, ) < 0) {
+goto cleanup;
+}
+
+break;
+}
 }
 
 if (!((MATCH(VIR_CONNECT_LIST_DOMAINS_HAS_SNAPSHOT) &&
-- 
2.9.0

