Re: [RFC] Adding docker driver to libvirt

2020-04-14 Thread nshirokovskiy



On 12.04.2020 12:39, Martin Kletzander wrote:
> On Thu, Apr 09, 2020 at 03:30:11PM +0300, nshirokovskiy wrote:
>> Hi, all.
>>
>> Does it make sense to add such a driver? I can't say I have a big picture
>> of docker functionality in mind but at least container lifecycle management
>> and container networking are common to both.
>>
> 
> I think we had something in virt-tools that was able to pull an image from
> docker hub and run it with lxc.  Or was it part of sandbox?  I don't know.
> 
> Anyway, what would be the benefit of that?
> 

We wanted to add Windows containers to the libvirt API. They are available
through the docker API, hence the idea to add a docker driver. Docker itself
uses some API to manage Windows containers, but that API lacks documentation,
hence again the preference to use just the docker API to bring Windows
containers to libvirt.

Nikolay  




[RFC] Adding docker driver to libvirt

2020-04-09 Thread nshirokovskiy
Hi, all.

Does it make sense to add such a driver? I can't say I have a big picture
of docker functionality in mind but at least container lifecycle management
and container networking are common to both.

Nikolay




Re: [libvirt] [PATCH v2 1/1] qemu: hide details of fake reboot

2020-04-08 Thread nshirokovskiy



On 08.04.2020 16:08, Don Koch wrote:
> On Wed, 8 Apr 2020 10:34:08 +0300
> nshirokovskiy wrote:
> 
>>
>>
>> On 07.04.2020 20:31, Don Koch wrote:
>>> On Thu, 19 Dec 2019 13:50:19 +0300
>>> Nikolay Shirokovskiy wrote:
>>>
>>>> If we use fake reboot then the domain goes through running->shutdown->running
>>>> state changes, with the shutdown state lasting only a short period of time.
>>>> At the least these are implementation details leaking into the API. And there
>>>> is also one real case where this is not convenient. I'm doing a backup with
>>>> the help of a temporary block snapshot (using qemu's API, which is used in
>>>> the newly created libvirt backup API). If the guest is shut down I want the
>>>> backup to continue, so I don't kill the process and the domain stays in the
>>>> shutdown state. Later, when the backup is finished, I want to destroy the
>>>> qemu process. So I check whether the domain is in the shutdown state and
>>>> destroy it if it is. Now, if instead of a shutdown the domain got a fake
>>>> reboot, I can destroy the process in the middle of the fake reboot.
>>>>
>>>> After the shutdown event we also get a stop event, and since the domain
>>>> state is running it will be transitioned to the paused state and back to
>>>> running later. Though this is not critical for the described case, I guess
>>>> it is better not to leak these details to the user either. So let's leave
>>>> the domain in the running state on the stop event if a fake reboot is in
>>>> progress.
>>>>
>>>> Reconnection code handles this patch without modification. It detects that
>>>> qemu is not running due to shutdown and then calls qemuProcessShutdownOrReboot,
>>>> which reboots as the fake reboot flag is set.
>>>>
>>>> Signed-off-by: Nikolay Shirokovskiy 
>>>
>>> Question regarding this change: in the on-line documentation for 
>>> virDomainReboot(),
>>> it states:
>>>
>>>Due to implementation limitations in some drivers (the qemu driver,
>>>for instance) it is not advised to migrate or save a guest that is
>>>rebooting as a result of this API. Migrating such a guest can lead to a
>>>plain shutdown on the destination.
>>>
>>> Is this still the case? 
>>
>> Hi, Don. Yeah this is still the case.
>>
>>> In any event, how does one know when the reboot has
>>> finished (assuming the VM reacted to it)?
>>
>> Unfortunately there is no event for reboot. Before the patch there was a
>> SHUTDOWN event, but only when the reboot was done through ACPI. Now that the
>> fake reboot implementation is more encapsulated there is no SHUTDOWN event
>> any more, just as for a reboot from the guest.
>>  
>> Nikolay
>>
> There was also a startup/resume event. Maybe add a reboot event at that
> point? I don't think anyone really cares about when the shutdown occurs
> but rather wants to know about the resume.

The point of the patch was to hide these details from the client. Just like
in the case of an internal reboot, you will not see suspend/resume events.

> 
> A side question is: is there a problem with doing a migration if, say,
> the reboot was done internally (i.e. someone issued a "reboot" command
> from within the VM)?
> 

AFAIU from the libvirt POV there should be no issues. I can't say for QEMU though.

Nikolay
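
Purely as an illustration of the management-side check described in the quoted
commit message (not part of the patch), a minimal sketch with a hypothetical
helper could look like this; which of the two "shut down" states you actually
observe depends on the setup:

#include <libvirt/libvirt.h>

/* sketch: only tear the qemu process down once the domain has genuinely
 * shut down, never in the middle of a (fake) reboot */
static int
maybeDestroyAfterBackup(virDomainPtr dom)
{
    int state = 0, reason = 0;

    if (virDomainGetState(dom, &state, &reason, 0) < 0)
        return -1;

    if (state == VIR_DOMAIN_SHUTDOWN || state == VIR_DOMAIN_SHUTOFF)
        return virDomainDestroy(dom);

    return 0; /* running or rebooting: leave it alone */
}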




Re: [libvirt] [PATCH v2 1/1] qemu: hide details of fake reboot

2020-04-08 Thread nshirokovskiy



On 07.04.2020 20:31, Don Koch wrote:
> On Thu, 19 Dec 2019 13:50:19 +0300
> Nikolay Shirokovskiy wrote:
> 
>> If we use fake reboot then the domain goes through running->shutdown->running
>> state changes, with the shutdown state lasting only a short period of time.
>> At the least these are implementation details leaking into the API. And there
>> is also one real case where this is not convenient. I'm doing a backup with
>> the help of a temporary block snapshot (using qemu's API, which is used in
>> the newly created libvirt backup API). If the guest is shut down I want the
>> backup to continue, so I don't kill the process and the domain stays in the
>> shutdown state. Later, when the backup is finished, I want to destroy the
>> qemu process. So I check whether the domain is in the shutdown state and
>> destroy it if it is. Now, if instead of a shutdown the domain got a fake
>> reboot, I can destroy the process in the middle of the fake reboot.
>>
>> After the shutdown event we also get a stop event, and since the domain
>> state is running it will be transitioned to the paused state and back to
>> running later. Though this is not critical for the described case, I guess
>> it is better not to leak these details to the user either. So let's leave
>> the domain in the running state on the stop event if a fake reboot is in
>> progress.
>>
>> Reconnection code handles this patch without modification. It detects that
>> qemu is not running due to shutdown and then calls qemuProcessShutdownOrReboot,
>> which reboots as the fake reboot flag is set.
>>
>> Signed-off-by: Nikolay Shirokovskiy 
> 
> Question regarding this change: in the on-line documentation for 
> virDomainReboot(),
> it states:
> 
>Due to implementation limitations in some drivers (the qemu driver,
>for instance) it is not advised to migrate or save a guest that is
>rebooting as a result of this API. Migrating such a guest can lead to a
>plain shutdown on the destination.
> 
> Is this still the case? 

Hi, Don. Yeah this is still the case.

> In any event, how does one know when the reboot has
> finished (assuming the VM reacted to it)?

Unfortunately there is no event for reboot. Before the patch there was a
SHUTDOWN event, but only when the reboot was done through ACPI. Now that the
fake reboot implementation is more encapsulated there is no SHUTDOWN event
any more, just as for a reboot from the guest.
 
Nikolay




Re: [RFC] Faster libvirtd restart with nwfilter rules, one more time

2020-04-06 Thread nshirokovskiy
ping

On 20.03.2020 12:25, nshirokovskiy wrote:
> Hi, all.
> 
> Some time ago I posted RFC [1] concerning an issue of an unresponsive
> libvirtd during restart when there is a large number of VMs that have
> network filters on their interfaces. It was identified that in most cases
> we don't actually need to reinstall network filter rules on daemon restart.
> Thus I proposed patches [2] that check whether we need to reapply rules
> or not.
> 
> The first version has a drawback that the daemon won't reapply rules if
> someone mangled them between daemon stop and start (and this can be done
> just by restarting firewalld). The second one is just ugly :)
> 
> Around that time Florian Westphal, in a letter off the mailing list,
> suggested using {iptables|ebtables}-restore to apply rules in one binary
> call. These binaries have a --noflush option so that we won't reset the
> current state of the tables. We also need one more -L call for
> iptables/ebtables to query the current filter state in order to construct
> the input for the restore binaries.
> 
> I wonder whether we can use this approach? I see currently only one issue:
> we won't use firewalld to spawn rules. But why do we need to spawn rules
> through firewalld if it is present? We use passthrough mode anyway. I tried
> to dig the history for hints but didn't find anything. Patch [3] introduced
> spawning rules through firewall-cmd.
> 
> Nikolay
> 
> [1] [RFC] Faster libvirtd restart with nwfilter rules
> https://www.redhat.com/archives/libvir-list/2018-September/msg01206.html
> 
> [2] nwfilter: don't reinstantiate filters if they are not changed
> v1: https://www.redhat.com/archives/libvir-list/2018-October/msg00904.html
> v2: https://www.redhat.com/archives/libvir-list/2018-October/msg01317.html
> 
> [3] network: use firewalld instead of iptables, when available
> v0: https://www.redhat.com/archives/libvir-list/2012-April/msg01236.html
> v1: https://www.redhat.com/archives/libvir-list/2012-August/msg00447.html
> ...
> v4: https://www.redhat.com/archives/libvir-list/2012-August/msg01097.html
> 
> 




Re: RFC: qemu: use uuid instead of name for misc filenames

2020-03-30 Thread nshirokovskiy



On 30.03.2020 13:41, Daniel P. Berrangé wrote:
> On Sun, Mar 29, 2020 at 02:33:41PM +0300, nshirokovskiy wrote:
>>
>>
>> On 26.03.2020 20:50, Daniel P. Berrangé wrote:
>>> On Fri, Feb 28, 2020 at 10:09:41AM +0300, Nikolay Shirokovskiy wrote:
>>>> On 27.02.2020 16:48, Daniel P. Berrangé wrote:
>>>>> On Thu, Feb 27, 2020 at 03:57:04PM +0300, Nikolay Shirokovskiy wrote:
>>>>>> Hi, everyone.
>>>>>>
>>>>>>  
>>>>>>
>>>>>> I'm working on supporting domain renaming when it has snapshots, which is
>>>>>> not supported now. And it strikes me that things will be much simpler to
>>>>>> manage on renaming if we use the uuid in filenames instead of domain names.
>>>>>>  
>>>>>>
>>>
>>>
>>>
>>>>>> 4. No issues with long domain names and the filename length limit
>>>>>>
>>>>>> If the above conversion makes sense, I guess a good time to apply it is
>>>>>> on domain start (and on rename, to support renaming with snapshots).
>>>>>>
>>>>>
>>>>> The above has not considered the benefit that using the VM name
>>>>> has. Essentially the UUID is good for machines, the VM name is
>>>>> good for humans.  Seeing the guest XML files, or VM log files
>>>>> using a filename based on UUID instead of name is a *really*
>>>>> unappealing idea to me. 
>>>>
>>>> I agree. But we can also keep symlinks with domain names for configs/logs
>>>> etc. This can be done as a separate tool, as I suggested in the letter, or
>>>> by maintaining the symlinks always. The idea is that a failure in this
>>>> symlinking won't affect daemon functionality, as the symlinks are for
>>>> humans.
>>>
>>> I've just realized that there is potential overlap between what we're
>>> discussing in this thread, and in the thread about localhost migration:
>>>
>>>   https://www.redhat.com/archives/libvir-list/2020-February/msg00061.html
>>>
>>> In the localhost migration case, we need to be able to startup a new
>>> guest with the same name as an existing guest.  The way we can achieve
>>> that is by thinking of localhost migration as being a pair of domain
>>> rename operations.
>>>
>>> ie, consider guest "foo" we want to localhost-migrate
>>>
>>>  - Start target guest "foo-incoming"
>>>  - Run live migration from "foo" -> "foo-incoming"
>>>  - Migration completes, CPUs stop
>>>  - Rename "foo" to "foo-outgoing"
>>>  - Rename "foo-incoming" to "foo"
>>>  - Tidy up migration state
>>>  - Destroy source guest "foo-outgoing"
>>
>> I think local migration does not really fit nicely into this scheme:
>>
>> - one cannot treat the outgoing and incoming VMs as just regular VMs, as
>>   one cannot put them into the same list because they have the same UUID
> 
> Yes, that is a tricky issue, but one that we need to solve, as the need
> to have a completely separate of list VMs is the thing I dislike the
> most about the local migration patches.
> 
> One option is to make the target VM have a different UUID by pickling
> its UUID. eg have a migration UUID generated on daemon startup.
> 0466e1ae-a71a-4e75-89ca-c3591a4cf220.  Then XOR this migration UUID
> with the source VM's UUID. So during live migration the target VM
> will appear with this XOR'd UUID, and once completed, it will get
> the real UUID again.
> 
> A different option is to not keep the target VM in the domain list
> at all. Instead  virDomainObjPtr, could have a pointer to a second
> virDomainObjPtr which stores the target VM temporarily.

Both choices have their issues/advantages.

With the first approach the incoming VM is visible as a regular one. This
can be beneficial in that one can inspect the VM for debug purposes just
like a regular one. On the other hand the appearance of the VM can
be unexpected to mgmt, so some mgmt may even try to destroy it.
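
For illustration only, the UUID pickling Daniel describes above could be a
simple byte-wise XOR over the raw UUID (a sketch with a hypothetical helper;
applying it a second time restores the original UUID):

#include <stddef.h>
#include <libvirt/libvirt.h>   /* VIR_UUID_BUFLEN */

/* sketch: derive the temporary UUID of the incoming VM by XOR-ing the
 * source VM UUID with a migration UUID generated at daemon startup */
static void
migrationPickleUUID(unsigned char *dst,
                    const unsigned char *vmUuid,
                    const unsigned char *migrationUuid)
{
    size_t i;

    for (i = 0; i < VIR_UUID_BUFLEN; i++)
        dst[i] = vmUuid[i] ^ migrationUuid[i];
}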

Re: RFC: qemu: use uuid instead of name for misc filenames

2020-03-29 Thread nshirokovskiy



On 26.03.2020 20:50, Daniel P. Berrangé wrote:
> On Fri, Feb 28, 2020 at 10:09:41AM +0300, Nikolay Shirokovskiy wrote:
>> On 27.02.2020 16:48, Daniel P. Berrangé wrote:
>>> On Thu, Feb 27, 2020 at 03:57:04PM +0300, Nikolay Shirokovskiy wrote:
 Hi, everyone.  
  

  
 I'm working on supporting domain renaming when it has snapshots, which is
 not supported now. And it strikes me that things will be much simpler to
 manage on renaming if we use the uuid in filenames instead of domain names.
> 
> 
> 
 4. No issues with long domain names and the filename length limit

 If the above conversion makes sense, I guess a good time to apply it is
 on domain start (and on rename, to support renaming with snapshots).
>>>
>>> The above has not considered the benefit that using the VM name
>>> has. Essentially the UUID is good for machines, the VM name is
>>> good for humans.  Seeing the guest XML files, or VM log files
>>> using a filename based on UUID instead of name is a *really*
>>> unappealing idea to me. 
>>
>> I agree. But we can also keep symlinks with domain names for configs/logs etc.
>> This can be done as a separate tool, as I suggested in the letter, or by
>> maintaining the symlinks always. The idea is that a failure in this symlinking
>> won't affect daemon functionality, as the symlinks are for humans.
> 
> I've just realized that there is potential overlap between what we're
> discussing in this thread, and in the thread about localhost migration:
> 
>   https://www.redhat.com/archives/libvir-list/2020-February/msg00061.html
> 
> In the localhost migration case, we need to be able to startup a new
> guest with the same name as an existing guest.  The way we can achieve
> that is by thinking of localhost migration as being a pair of domain
> rename operations.
> 
> ie, consider guest "foo" we want to localhost-migrate
> 
>  - Start target guest "foo-incoming"
>  - Run live migration from "foo" -> "foo-incoming"
>  - Migration completes, CPUs stop
>  - Rename "foo" to "foo-outgoing"
>  - Rename "foo-incoming" to "foo"
>  - Tidy up migration state
>  - Destroy source guest "foo-outgoing"

I think local migration does not really fit nicely into this scheme:

- one cannot treat the outgoing and incoming VMs as just regular VMs, as
  one cannot put them into the same list because they have the same UUID
- it is not just a mere rename. In the example reflected in [1] the path
  given by mgmt is not subjected to the rename operation. The switch
  has to be done by local-migration-specific code.


[1] https://www.redhat.com/archives/libvir-list/2020-February/msg01026.html

> 
> 
> In both this thread and the localhost migration thread, we seem to have
> come towards a view that symlinks are the only viable way to deal with
> the naming problem for resources on disk that are based on VM name.
> 
> IOW, it would be desirable if whatever solution we take for symlink mgmt
> will solve the localhost migration and domain rename problems at the same
> time.

Agreed, the symlinks approach itself seems to work well in both cases.
We can use a naming scheme like UUID-gen for the "stable" paths to fit both
the rename and local migration cases. "Gen" here is for generation, like 1
for a domain after the first local migration, 2 after the second and so on.
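
As a rough sketch of the scheme described above (hypothetical helper, not
existing libvirt code): the real file is keyed by UUID and generation, while a
best-effort name symlink is kept for humans.

#include <limits.h>
#include <stdio.h>
#include <unistd.h>

/* sketch: build the stable "<uuid>-<gen>" path and point a name-based
 * symlink at it; failure to create the link is ignored on purpose,
 * because the link exists only for humans */
static void
vmLogPath(const char *dir, const char *uuidstr, unsigned int gen,
          const char *name, char *realp, size_t realplen)
{
    char lnk[PATH_MAX], target[PATH_MAX];

    snprintf(realp, realplen, "%s/%s-%u.log", dir, uuidstr, gen);
    snprintf(target, sizeof(target), "%s-%u.log", uuidstr, gen);
    snprintf(lnk, sizeof(lnk), "%s/%s.log", dir, name);

    unlink(lnk);                 /* drop a stale link from an old name/gen */
    if (symlink(target, lnk) < 0) {
        /* ignored: daemon functionality must not depend on the symlink */
    }
}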

I already have a pending patch series [2] that removes some limitations on
renaming. Can I treat this letter as agreement that it is useful to move the
current path naming towards a "uuid based real path" + "name based symlinks"
approach?

[2] https://www.redhat.com/archives/libvir-list/2020-March/msg00018.html


Nikolay




[RFC] Faster libvirtd restart with nwfilter rules, one more time

2020-03-20 Thread nshirokovskiy
Hi, all.

Some time ago I posted RFC [1] concerning an issue of an unresponsive
libvirtd during restart when there is a large number of VMs that have
network filters on their interfaces. It was identified that in most cases
we don't actually need to reinstall network filter rules on daemon restart.
Thus I proposed patches [2] that check whether we need to reapply rules
or not.

The first version has a drawback that the daemon won't reapply rules if
someone mangled them between daemon stop and start (and this can be done
just by restarting firewalld). The second one is just ugly :)

Around that time Florian Westphal, in a letter off the mailing list,
suggested using {iptables|ebtables}-restore to apply rules in one binary
call. These binaries have a --noflush option so that we won't reset the
current state of the tables. We also need one more -L call for
iptables/ebtables to query the current filter state in order to construct
the input for the restore binaries.
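
For illustration, feeding a prebuilt ruleset to the restore binary without
flushing could look roughly like the sketch below, assuming libvirt's
virCommand helpers (a hypothetical helper, not the actual nwfilter code):

#include "vircommand.h"

/* sketch: apply a prebuilt ruleset in one binary call, keeping the
 * rules that are already present thanks to --noflush */
static int
applyRulesetNoFlush(const char *ruleset)
{
    virCommandPtr cmd = virCommandNewArgList("iptables-restore",
                                             "--noflush", NULL);
    int ret;

    virCommandSetInputBuffer(cmd, ruleset);
    ret = virCommandRun(cmd, NULL);
    virCommandFree(cmd);
    return ret;
}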

I wonder whether we can use this approach? I see currently only one issue:
we won't use firewalld to spawn rules. But why do we need to spawn rules
through firewalld if it is present? We use passthrough mode anyway. I tried
to dig the history for hints but didn't find anything. Patch [3] introduced
spawning rules through firewall-cmd.

Nikolay 

[1] [RFC] Faster libvirtd restart with nwfilter rules   
https://www.redhat.com/archives/libvir-list/2018-September/msg01206.html

[2] nwfilter: don't reinstantiate filters if they are not changed   
v1: https://www.redhat.com/archives/libvir-list/2018-October/msg00904.html  
v2: https://www.redhat.com/archives/libvir-list/2018-October/msg01317.html  

[3] network: use firewalld instead of iptables, when available  
v0: https://www.redhat.com/archives/libvir-list/2012-April/msg01236.html
v1: https://www.redhat.com/archives/libvir-list/2012-August/msg00447.html   
... 
v4: https://www.redhat.com/archives/libvir-list/2012-August/msg01097.html  




Re: [PATCH 0/8] qemu: support renaming domains with snapshots/checkpoints

2020-03-16 Thread nshirokovskiy
ping

On 03.03.2020 11:19, Nikolay Shirokovskiy wrote:
> Nikolay Shirokovskiy (8):
>   qemu: remove duplicate code for removing remnant files
>   qemu: qemuDomainRenameCallback: fix sending false undefined event
>   qemu: rename: support renaming snapshots directory
>   qemu: rename: support renaming checkpoints directory
>   qemu: update name on reverting from snapshot
>   qemu: rename: remove snapshot/checkpoint restriction
>   qemu: refactor qemuDomainDefineXMLFlags
>   qemu: qemu: remove remnant files on define
> 
>  src/qemu/qemu_checkpoint.c |   2 +-
>  src/qemu/qemu_checkpoint.h |   6 ++
>  src/qemu/qemu_domain.c |  41 ++
>  src/qemu/qemu_domain.h |   5 ++
>  src/qemu/qemu_driver.c | 161 ++---
>  src/qemu/qemu_migration.c  |   3 +
>  6 files changed, 153 insertions(+), 65 deletions(-)
> 




[libvirt] [PATCH v3 3/5] vz: support domain rename on migrate

2015-08-25 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 src/vz/vz_driver.c |6 ++
 src/vz/vz_sdk.c|   16 +---
 src/vz/vz_sdk.h|5 -
 3 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index f82fff8..dc26b09 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1467,8 +1467,6 @@ vzMakeVzUri(const char *connuri_str)
 
 #define VZ_MIGRATION_FLAGS (0)
 
-#define VZ_MIGRATION_PARAMETERS (NULL)
-
 static int
 vzDomainMigratePerform3(virDomainPtr domain,
 const char *xmlin ATTRIBUTE_UNUSED,
@@ -1479,7 +1477,7 @@ vzDomainMigratePerform3(virDomainPtr domain,
 const char *dconnuri ATTRIBUTE_UNUSED,
 const char *uri,
 unsigned long flags,
-const char *dname ATTRIBUTE_UNUSED,
+const char *dname,
 unsigned long bandwidth ATTRIBUTE_UNUSED)
 {
 int ret = -1;
@@ -1515,7 +1513,7 @@ vzDomainMigratePerform3(virDomainPtr domain,
 if (vzParseCookie(cookie, session_uuid) < 0)
 goto cleanup;
 
-if (prlsdkMigrate(dom, vzuri, session_uuid) < 0)
+if (prlsdkMigrate(dom, vzuri, session_uuid, dname) < 0)
 goto cleanup;
 
 virDomainObjListRemove(privconn->domains, dom);
diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index 783438d..89a2429 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -4064,7 +4064,8 @@ prlsdkGetMemoryStats(virDomainObjPtr dom,
 #define PRLSDK_MIGRATION_FLAGS (PSL_HIGH_SECURITY)
 
 int prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri,
-  const unsigned char *session_uuid)
+  const unsigned char *session_uuid,
+  const char *dname)
 {
 int ret = -1;
 vzDomObjPtr privdom = dom->privateData;
 PRL_HANDLE job = PRL_INVALID_HANDLE;
 char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
 
 prlsdkUUIDFormat(session_uuid, uuidstr);
-job = PrlVm_MigrateEx(privdom->sdkdom, uri->server, uri->port, uuidstr,
-  "", /* use default dir for migrated instance bundle */
-  PRLSDK_MIGRATION_FLAGS,
-  0, /* reserved flags */
-  PRL_TRUE /* don't ask for confirmations */
-  );
+job = PrlVm_MigrateWithRenameEx(privdom->sdkdom, uri->server, uri->port, uuidstr,
+dname == NULL ? "" : dname,
+"", /* use default dir for migrated instance bundle */
+PRLSDK_MIGRATION_FLAGS,
+0, /* reserved flags */
+PRL_TRUE /* don't ask for confirmations */
+);
 
 if (PRL_FAILED(waitJob(job)))
 goto cleanup;
diff --git a/src/vz/vz_sdk.h b/src/vz/vz_sdk.h
index d3f0caf..0aa70b3 100644
--- a/src/vz/vz_sdk.h
+++ b/src/vz/vz_sdk.h
@@ -77,4 +77,7 @@ prlsdkGetVcpuStats(virDomainObjPtr dom, int idx, unsigned long long *time);
 int
 prlsdkGetMemoryStats(virDomainObjPtr dom, virDomainMemoryStatPtr stats, unsigned int nr_stats);
 int
-prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri, const char unsigned *session_uuid);
+prlsdkMigrate(virDomainObjPtr dom,
+  virURIPtr uri,
+  const char unsigned *session_uuid,
+  const char *dname);
-- 
1.7.1



[libvirt] [PATCH v3 5/5] vz: cleanup: define vz format of uuids

2015-08-25 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

vz puts uuids into curly braces. Simply introduce a new constant to reflect this
and get rid of the magic +2 in the code.
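
For illustration only, the formatting that motivates the extra two bytes looks
roughly like this sketch (a hypothetical helper, not the actual prlsdkUUIDFormat
body; viruuid.h is assumed to be libvirt's internal UUID helper header):

#include <stdio.h>
#include "viruuid.h"   /* virUUIDFormat, VIR_UUID_STRING_BUFLEN */

#define VZ_UUID_STRING_BUFLEN (VIR_UUID_STRING_BUFLEN + 2)

/* sketch: vz wants the canonical UUID wrapped in curly braces, hence
 * the two extra bytes on top of VIR_UUID_STRING_BUFLEN */
static void
vzWrapUuid(const unsigned char *uuid, char *out /* VZ_UUID_STRING_BUFLEN */)
{
    char plain[VIR_UUID_STRING_BUFLEN];

    virUUIDFormat(uuid, plain);
    snprintf(out, VZ_UUID_STRING_BUFLEN, "{%s}", plain);
}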

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 src/vz/vz_sdk.c   |   12 ++--
 src/vz/vz_utils.h |2 ++
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index 9a2b5df..bafc6e4 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -236,7 +236,7 @@ prlsdkConnect(vzConnPtr privconn)
 PRL_HANDLE job = PRL_INVALID_HANDLE;
 PRL_HANDLE result = PRL_INVALID_HANDLE;
 PRL_HANDLE response = PRL_INVALID_HANDLE;
-char session_uuid[VIR_UUID_STRING_BUFLEN + 2];
+char session_uuid[VZ_UUID_STRING_BUFLEN];
 PRL_UINT32 buflen = ARRAY_CARDINALITY(session_uuid);
 
 pret = PrlSrv_Create(privconn->server);
@@ -316,7 +316,7 @@ prlsdkUUIDFormat(const unsigned char *uuid, char *uuidstr)
 static PRL_HANDLE
 prlsdkSdkDomainLookupByUUID(vzConnPtr privconn, const unsigned char *uuid)
 {
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 PRL_HANDLE sdkdom = PRL_INVALID_HANDLE;
 
 prlsdkUUIDFormat(uuid, uuidstr);
@@ -365,7 +365,7 @@ prlsdkGetDomainIds(PRL_HANDLE sdkdom,
char **name,
unsigned char *uuid)
 {
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 PRL_UINT32 len;
 PRL_RESULT pret;
 
@@ -1722,7 +1722,7 @@ prlsdkEventsHandler(PRL_HANDLE prlEvent, PRL_VOID_PTR opaque)
 vzConnPtr privconn = opaque;
 PRL_RESULT pret = PRL_ERR_FAILURE;
 PRL_HANDLE_TYPE handleType;
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 unsigned char uuid[VIR_UUID_BUFLEN];
 PRL_UINT32 bufsize = ARRAY_CARDINALITY(uuidstr);
 PRL_EVENT_TYPE prlEventType;
@@ -3480,7 +3480,7 @@ prlsdkDoApplyConfig(virConnectPtr conn,
 {
 PRL_RESULT pret;
 size_t i;
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 bool needBoot = true;
 char *mask = NULL;
 
@@ -4070,7 +4070,7 @@ int prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri,
 int ret = -1;
 vzDomObjPtr privdom = dom->privateData;
 PRL_HANDLE job = PRL_INVALID_HANDLE;
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 PRL_UINT32 vzflags = PRLSDK_MIGRATION_FLAGS;
 
 if (flags & VIR_MIGRATE_PAUSED)
diff --git a/src/vz/vz_utils.h b/src/vz/vz_utils.h
index fe54b25..2a59426 100644
--- a/src/vz/vz_utils.h
+++ b/src/vz/vz_utils.h
@@ -55,6 +55,8 @@
 # define PARALLELS_REQUIRED_BRIDGED_NETWORK  "Bridged"
 # define PARALLELS_BRIDGED_NETWORK_TYPE  "bridged"
 
+# define VZ_UUID_STRING_BUFLEN (VIR_UUID_STRING_BUFLEN + 2)
+
 struct _vzConn {
 virMutex lock;
 
-- 
1.7.1



[libvirt] [PATCH v3 1/5] vz: save session uuid on login

2015-08-25 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

This session uuid acts as an authN token for various multihost vz operations,
one of which is migration. Unfortunately we can't get it from the server at any
later time, thus we need to save it at login.

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 src/vz/vz_sdk.c   |   39 +--
 src/vz/vz_utils.h |2 +-
 2 files changed, 30 insertions(+), 11 deletions(-)

diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index 744b58a..f7253de 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -37,6 +37,9 @@
 #define VIR_FROM_THIS VIR_FROM_PARALLELS
 #define JOB_INFINIT_WAIT_TIMEOUT UINT_MAX
 
+static int
+prlsdkUUIDParse(const char *uuidstr, unsigned char *uuid);
+
 VIR_LOG_INIT(parallels.sdk);
 
 /*
@@ -228,24 +231,40 @@ prlsdkDeinit(void)
 int
 prlsdkConnect(vzConnPtr privconn)
 {
-PRL_RESULT ret;
+int ret = -1;
+PRL_RESULT pret;
 PRL_HANDLE job = PRL_INVALID_HANDLE;
+PRL_HANDLE result = PRL_INVALID_HANDLE;
+PRL_HANDLE response = PRL_INVALID_HANDLE;
+char session_uuid[VIR_UUID_STRING_BUFLEN + 2];
+PRL_UINT32 buflen = ARRAY_CARDINALITY(session_uuid);
 
-ret = PrlSrv_Create(privconn->server);
-if (PRL_FAILED(ret)) {
-logPrlError(ret);
-return -1;
-}
+pret = PrlSrv_Create(privconn->server);
+prlsdkCheckRetGoto(pret, cleanup);
 
 job = PrlSrv_LoginLocalEx(privconn->server, NULL, 0,
   PSL_HIGH_SECURITY, PACF_NON_INTERACTIVE_MODE);
+if (PRL_FAILED(getJobResult(job, &result)))
+goto cleanup;
 
-if (waitJob(job)) {
+pret = PrlResult_GetParam(result, &response);
+prlsdkCheckRetGoto(pret, cleanup);
+
+pret = PrlLoginResponse_GetSessionUuid(response, session_uuid, &buflen);
+prlsdkCheckRetGoto(pret, cleanup);
+
+if (prlsdkUUIDParse(session_uuid, privconn->session_uuid) < 0)
+goto cleanup;
+
+ret = 0;
+
+ cleanup:
+if (ret < 0)
 PrlHandle_Free(privconn->server);
-return -1;
-}
+PrlHandle_Free(result);
+PrlHandle_Free(response);
 
-return 0;
+return ret;
 }
 
 void
diff --git a/src/vz/vz_utils.h b/src/vz/vz_utils.h
index db09647..fe54b25 100644
--- a/src/vz/vz_utils.h
+++ b/src/vz/vz_utils.h
@@ -60,7 +60,7 @@ struct _vzConn {
 
 /* Immutable pointer, self-locking APIs */
 virDomainObjListPtr domains;
-
+unsigned char session_uuid[VIR_UUID_BUFLEN];
 PRL_HANDLE server;
 virStoragePoolObjList pools;
 virNetworkObjListPtr networks;
-- 
1.7.1



[libvirt] [PATCH v3 2/5] vz: add migration backbone code

2015-08-25 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

This patch makes basic vz migration possible. For example via virsh:
  virsh -c vz:///system migrate --direct $NAME $STUB vz+ssh://$DST/system

$STUB could be anything, as it is a required virsh argument, but it is not
used in direct migration.

Vz migration is implemented as direct migration. The reason is that the vz sdk
does all the job. The prepare phase function is used to pass the session uuid
from destination to source so that we don't introduce a new rpc call.

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 src/libvirt-domain.c |3 +-
 src/vz/vz_driver.c   |  193 ++
 src/vz/vz_sdk.c  |   33 +
 src/vz/vz_sdk.h  |2 +
 4 files changed, 230 insertions(+), 1 deletions(-)

diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index cbf08fc..8577edd 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -3425,7 +3425,8 @@ virDomainMigrateDirect(virDomainPtr domain,
  NULLSTR(xmlin), flags, NULLSTR(dname), NULLSTR(uri),
  bandwidth);
 
-if (!domain->conn->driver->domainMigratePerform) {
+if (!domain->conn->driver->domainMigratePerform &&
+    !domain->conn->driver->domainMigratePerform3) {
 virReportUnsupportedError();
 return -1;
 }
diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index 8fa7957..f82fff8 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1343,6 +1343,196 @@ vzDomainMemoryStats(virDomainPtr domain,
 return ret;
 }
 
+static char*
+vzFormatCookie(const unsigned char *session_uuid)
+{
+char uuidstr[VIR_UUID_STRING_BUFLEN];
+virBuffer buf = VIR_BUFFER_INITIALIZER;
+
+virBufferAddLit(&buf, "<vz-migration1>\n");
+virUUIDFormat(session_uuid, uuidstr);
+virBufferAsprintf(&buf, "<session_uuid>%s</session_uuid>\n", uuidstr);
+virBufferAddLit(&buf, "</vz-migration1>\n");
+
+if (virBufferCheckError(&buf) < 0)
+return NULL;
+
+return virBufferContentAndReset(&buf);
+}
+
+static int
+vzParseCookie(const char *xml, unsigned char *session_uuid)
+{
+xmlDocPtr doc = NULL;
+xmlXPathContextPtr ctx = NULL;
+char *tmp = NULL;
+int ret = -1;
+
+if (!(doc = virXMLParseStringCtxt(xml, _("(_migration_cookie)"), &ctx)))
+goto cleanup;
+
+if (!(tmp = virXPathString("string(./session_uuid[1])", ctx))) {
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   "%s", _("missing session_uuid element in migration data"));
+goto cleanup;
+}
+if (virUUIDParse(tmp, session_uuid) < 0) {
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   "%s", _("malformed session_uuid element in migration data"));
+goto cleanup;
+}
+ret = 0;
+ret = 0;
+
+ cleanup:
+xmlXPathFreeContext(ctx);
+xmlFreeDoc(doc);
+VIR_FREE(tmp);
+
+return ret;
+}
+
+static int
+vzDomainMigratePrepare3(virConnectPtr conn,
+const char *cookiein ATTRIBUTE_UNUSED,
+int cookieinlen ATTRIBUTE_UNUSED,
+char **cookieout,
+int *cookieoutlen,
+const char *uri_in ATTRIBUTE_UNUSED,
+char **uri_out ATTRIBUTE_UNUSED,
+unsigned long flags,
+const char *dname ATTRIBUTE_UNUSED,
+unsigned long resource ATTRIBUTE_UNUSED,
+const char *dom_xml ATTRIBUTE_UNUSED)
+{
+vzConnPtr privconn = conn->privateData;
+int ret = -1;
+
+virCheckFlags(0, -1);
+
+if (!(*cookieout = vzFormatCookie(privconn->session_uuid)))
+goto cleanup;
+*cookieoutlen = strlen(*cookieout) + 1;
+ret = 0;
+
+ cleanup:
+if (ret != 0) {
+VIR_FREE(*cookieout);
+*cookieoutlen = 0;
+}
+
+return ret;
+}
+
+static int
+vzConnectSupportsFeature(virConnectPtr conn ATTRIBUTE_UNUSED, int feature)
+{
+switch (feature) {
+case VIR_DRV_FEATURE_MIGRATION_V3:
+case VIR_DRV_FEATURE_MIGRATION_DIRECT:
+return 1;
+default:
+return 0;
+}
+}
+
+static virURIPtr
+vzMakeVzUri(const char *connuri_str)
+{
+virURIPtr connuri = NULL;
+virURIPtr vzuri = NULL;
+int ret = -1;
+
+if (!(connuri = virURIParse(connuri_str)))
+goto cleanup;
+
+if (VIR_ALLOC(vzuri) < 0)
+goto cleanup;
+memset(vzuri, 0, sizeof(*vzuri));
+
+if (VIR_STRDUP(vzuri->server, connuri->server) < 0)
+goto cleanup;
+vzuri->port = connuri->port;
+ret = 0;
+
+ cleanup:
+
+virURIFree(connuri);
+if (ret < 0) {
+virURIFree(vzuri);
+vzuri = NULL;
+}
+
+return vzuri;
+}
+
+#define VZ_MIGRATION_FLAGS (0)
+
+#define VZ_MIGRATION_PARAMETERS (NULL)
+
+static int
+vzDomainMigratePerform3(virDomainPtr domain,
+const char *xmlin ATTRIBUTE_UNUSED,
+const char *cookiein ATTRIBUTE_UNUSED,
+int 

[libvirt] [PATCH v3 4/5] vz: support misc migration options

2015-08-25 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

Migration API has a lot of options. This patch intention is to provide
support for those options that can be trivially supported and give
estimation for other options support in this commit message.

I. Supported.

1. VIR_MIGRATE_COMPRESSED. Means 'use compression when migrating domain
memory'. It is supported, but in a quite uncommon way: vz migration demands that
this option be set. This is because vz is hardcoded to move VM memory using
compression. So anyone who wants to migrate a vz domain should set this option,
thus declaring that they know it uses compression.

Why bother? Maybe just support this option and ignore it if it is not set, or
don't support it at all as we can't change the behaviour in this aspect. Well, I
believe that this option is, first, inherent to the hypervisor implementation as
we have the task of moving domain memory to a different place and, second, we
have a tradeoff here between cpu and network resources and some management
should choose the strategy via this option. If we choose the ignoring or
unsupporting implementation, then this option has too vague a meaning. Let's go
into more detail.

First, if we ignore the situation where the option is not set, then we give the
user the false impression that the vz hypervisor doesn't use compression and
thus has lower cpu usage. The second approach is to not support the option. The
main reason not to follow this way is that 'not supported and not set' is
indistinguishable from 'supported and not set' and thus again fools the user.

2. VIR_MIGRATE_LIVE. Means 'reduce domain downtime by suspending it as late
as possible', which technically means 'migrate as much domain memory as possible
before suspending'. Supported in the same manner as VIR_MIGRATE_COMPRESSED, as
both vz VMs and CTs are always migrated via the live scheme.

One may be fooled by the vz sdk flags of the migration api: PVMT_HOT_MIGRATION
(aka live) and PVMT_WARM_MIGRATION (aka normal). The current implementation
ignores these flags and always uses live migration.

3. VIR_MIGRATE_PERSIST_DEST, VIR_MIGRATE_UNDEFINE_SOURCE. These two come
together. Vz domains are always persistent, so we have to demand that
VIR_MIGRATE_PERSIST_DEST is set and VIR_MIGRATE_UNDEFINE_SOURCE is not (and
the latter is done just by not supporting it).

4. VIR_MIGRATE_PAUSED. Means 'don't resume the domain on the destination'. This
is trivially supported as we have a corresponding option in vz migration.

5. VIR_MIGRATE_OFFLINE. Means 'migrate only the XML definition of a domain'. It
is a forcing option, that is, it is ignored if the domain is running and must be
set to migrate a stopped domain. The vz implementation follows this informal
definition with one exception: non-shared disks will be migrated too. This
decision is on par with the VIR_MIGRATE_NON_SHARED_DISK consideration (see the
last part of these notes).

All that said, the minimal command to migrate a vz domain looks like this:

migrate --direct $DOMAIN $STUB --migrateuri $DESTINATION --live --persistent --compressed

Not good. Say you want to just migrate a domain without further details: you
will get error messages until you add these options to the command line. I think
there is a lack of a notion of 'default' behaviour in all these aspects. If we
had it we could just issue:

migrate $DOMAIN $DESTINATION

For vz this would give compression by default, for example, and for qemu no
compression by default. Then we could have flags --compressed and --no-compressed,
and for vz the latter would give an unsupported error.
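
Purely as an illustration of such 'demanded' flags (a hypothetical helper, not
code from this series), the check on the vz side could look like:

#include <libvirt/libvirt.h>

/* sketch: vz cannot change how it migrates, so the caller must
 * explicitly acknowledge that behaviour via these flags */
static int
vzCheckDemandedMigrationFlags(unsigned int flags)
{
    const unsigned int demanded = VIR_MIGRATE_LIVE |
                                  VIR_MIGRATE_COMPRESSED |
                                  VIR_MIGRATE_PERSIST_DEST;

    if ((flags & demanded) != demanded)
        return -1;   /* caller reports an "unsupported flags" error */

    return 0;
}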

II. Unsupported.

1. VIR_MIGRATE_UNSAFE. Vz disks always have 'cache=none' set (this
is not reflected in the current version of the vz driver and will be fixed
soon). So we don't need to support this option.

2. VIR_MIGRATE_CHANGE_PROTECTION. Unsupported as we have no appropriate
support from the vz sdk. Although we have locks, they are advisory and
can't help us.

3. VIR_MIGRATE_TUNNELLED. Means 'use a libvirtd-to-libvirtd connection
to pass hypervisor migration traffic'. Unsupported as it is not among
vz hypervisor use cases.

4. p2p migration, which is exposed via the *toURI* interfaces with the
VIR_MIGRATE_PEER2PEER flag set. It doesn't make sense
for vz migration as it is a variant of managed migration, which
is qemu specific.

5. VIR_MIGRATE_ABORT_ON_ERROR, VIR_MIGRATE_AUTO_CONVERGE,
VIR_MIGRATE_RDMA_PIN_ALL, VIR_MIGRATE_NON_SHARED_INC,
VIR_MIGRATE_PARAM_DEST_XML, VIR_MIGRATE_PARAM_BANDWIDTH,
VIR_MIGRATE_PARAM_GRAPHICS_URI, VIR_MIGRATE_PARAM_LISTEN_ADDRESS,
VIR_MIGRATE_PARAM_MIGRATE_DISKS.
Without further discussion: they are just not use cases of the vz hypervisor.

III. Undecided and thus unsupported.

6. VIR_MIGRATE_NON_SHARED_DISK. The meaning of this option is not clear to me.
Look, if a qemu domain has a non-shared disk then it will refuse to migrate. But
after you specify this option it will refuse too. You need to create an image
file for the disk on the destination side. Only after that can you migrate.
Unexpectedly, the existence of this file is enough to migrate without the option
too. In this case you will get a domain on the destination with 

[libvirt] [PATCH v3 0/5] vz: add migration support

2015-08-25 Thread nshirokovskiy
NOTE that the minimal command to migrate a vz domain looks like this:

virsh -c vz:///system migrate --direct 200 shiny0 --migrateuri vz+ssh://shiny0/system \
  --live --persistent --compressed

==Difference from v1:

1. The patch is quite different. The first patchset implements migration through
the managed migration scheme. This one goes through the p2p scheme. I believe
this is a better approach. Vz migration is done via the vz sdk, and the first
patchset uses 5-phased migration only to get a token from the destination in the
prepare phase, which is kind of a misuse. This patch just adds a vz specific
function to the driver interface to achieve the same goal.

2. Offline migration is supported as there is no longer a dependency on the
current flow of the managed migration scheme.

==Difference from v2:

1. Implement through direct migration instead of p2p. p2p is just managed
5-staged migration where the managing is done on the daemon side. Vz migration
stages are all hidden in the vz sdk and thus it would be more consistent to use
the direct scheme.

2. Use an existing driver function for the prepare migration phase to pass the
session uuid from destination to source instead of a new one. As vz migration
is a direct one, we will not use the prepare phase function in a straightforward
manner anyway.

 src/libvirt-domain.c |3 +-
 src/vz/vz_driver.c   |  315 ++
 src/vz/vz_sdk.c  |   86 ---
 src/vz/vz_sdk.h  |6 +
 src/vz/vz_utils.h|4 +-
 5 files changed, 398 insertions(+), 16 deletions(-)



[libvirt] [PATCH v2 3/7] vz: add migration backbone code

2015-07-17 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

This patch makes basic vz migration possible. For example via virsh:
virsh -c vz:///system migrate $NAME vz+ssh://$DST/system

Vz migration is implemented as peer2peer migration. The reason
is that the vz sdk does all the job. The question may then arise
why not implement it as a direct migration. The reason
is that we want to leverage the rich libvirt authentication abilities
that we lack in the vz sdk. We can do this because the vz sdk can use
tokens to factor authentication out of the migration command.

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 src/vz/vz_driver.c |  104 
 src/vz/vz_sdk.c|   33 
 src/vz/vz_sdk.h|2 +
 3 files changed, 139 insertions(+), 0 deletions(-)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index 9d23322..d7b93fb 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1352,6 +1352,108 @@ vzConnectVzGetSessionUUID(virConnectPtr conn, unsigned 
char* sessuuid)
 return 0;
 }
 
+static int
+vzConnectSupportsFeature(virConnectPtr conn ATTRIBUTE_UNUSED, int feature)
+{
+switch (feature) {
+case VIR_DRV_FEATURE_MIGRATION_PARAMS:
+case VIR_DRV_FEATURE_MIGRATION_P2P:
+return 1;
+default:
+return 0;
+}
+}
+
+static virURIPtr
+vzMakeVzUri(const char *connuri_str)
+{
+virURIPtr connuri = NULL;
+virURIPtr vzuri = NULL;
+int ret = -1;
+
+if (!(connuri = virURIParse(connuri_str)))
+goto cleanup;
+
+if (VIR_ALLOC(vzuri) < 0)
+goto cleanup;
+memset(vzuri, 0, sizeof(*vzuri));
+
+vzuri->server = connuri->server;
+vzuri->port = connuri->port;
+ret = 0;
+
+ cleanup:
+
+virURIFree(connuri);
+if (ret < 0) {
+virURIFree(vzuri);
+vzuri = NULL;
+}
+
+return vzuri;
+}
+
+#define VZ_MIGRATION_FLAGS (VIR_MIGRATE_PEER2PEER)
+
+#define VZ_MIGRATION_PARAMETERS (NULL)
+
+static int
+vzDomainMigratePerform3Params(virDomainPtr domain,
+  const char *dconnuri,
+  virTypedParameterPtr params,
+  int nparams,
+  const char *cookiein ATTRIBUTE_UNUSED,
+  int cookieinlen ATTRIBUTE_UNUSED,
+  char **cookieout ATTRIBUTE_UNUSED,
+  int *cookieoutlen ATTRIBUTE_UNUSED,
+  unsigned int flags)
+{
+int ret = -1;
+virDomainObjPtr dom = NULL;
+virConnectPtr dconn = NULL;
+virURIPtr vzuri = NULL;
+unsigned char session_uuid[VIR_UUID_BUFLEN];
+vzConnPtr privconn = domain->conn->privateData;
+
+virCheckFlags(flags, -1);
+
+if (virTypedParamsValidate(params, nparams, VZ_MIGRATION_PARAMETERS) < 0)
+goto cleanup;
+
+if (!(vzuri = vzMakeVzUri(dconnuri)))
+goto cleanup;
+
+if (!(dom = vzDomObjFromDomain(domain)))
+goto cleanup;
+
+dconn = virConnectOpen(dconnuri);
+if (dconn == NULL) {
+virReportError(VIR_ERR_OPERATION_FAILED,
+   _("Failed to connect to remote libvirt URI %s: %s"),
+   dconnuri, virGetLastErrorMessage());
+goto cleanup;
+}
+
+if (virConnectVzGetSessionUUID(dconn, session_uuid) < 0)
+goto cleanup;
+
+if (prlsdkMigrate(dom, vzuri, session_uuid) < 0)
+goto cleanup;
+
+virDomainObjListRemove(privconn->domains, dom);
+dom = NULL;
+
+ret = 0;
+
+ cleanup:
+if (dom)
+virObjectUnlock(dom);
+virObjectUnref(dconn);
+virURIFree(vzuri);
+
+return ret;
+}
+
 static virHypervisorDriver vzDriver = {
 .name = vz,
 .connectOpen = vzConnectOpen,/* 0.10.0 */
@@ -1406,6 +1508,8 @@ static virHypervisorDriver vzDriver = {
 .domainInterfaceStats = vzDomainInterfaceStats, /* 1.2.17 */
 .domainMemoryStats = vzDomainMemoryStats, /* 1.2.17 */
 .connectVzGetSessionUUID = vzConnectVzGetSessionUUID, /* 1.2.18 */
+.connectSupportsFeature = vzConnectSupportsFeature, /* 1.2.18 */
+.domainMigratePerform3Params = vzDomainMigratePerform3Params, /* 1.2.18 */
 };
 
 static virConnectDriver vzConnectDriver = {
diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index f7253de..783438d 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -4054,3 +4054,36 @@ prlsdkGetMemoryStats(virDomainObjPtr dom,
 
 return ret;
 }
+
+/* high security is default choice for 2 reasons:
+   1. as this is the highest set security we can't get
+   reject from server with high security settings
+   2. this is on par with security level of driver
+   connection to dispatcher */
+
+#define PRLSDK_MIGRATION_FLAGS (PSL_HIGH_SECURITY)
+
+int prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri,
+  const unsigned char *session_uuid)
+{
+int ret = -1;
+vzDomObjPtr privdom = dom->privateData;
+PRL_HANDLE job = PRL_INVALID_HANDLE;
+char 

[libvirt] [PATCH v2 5/7] vz: support migration uri

2015-07-17 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 src/vz/vz_driver.c |   58 ---
 1 files changed, 54 insertions(+), 4 deletions(-)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index 8087165..c6086a7 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1393,10 +1393,53 @@ vzMakeVzUri(const char *connuri_str)
 return vzuri;
 }
 
+virURIPtr
+vzParseVzURI(const char *uri_str)
+{
+virURIPtr uri = NULL;
+int ret = -1;
+
+if (!(uri = virURIParse(uri_str)))
+goto cleanup;
+
+if (uri->scheme == NULL || uri->server == NULL) {
+virReportError(VIR_ERR_INVALID_ARG,
+   _("scheme and host are mandatory vz migration URI: %s"),
+   uri_str);
+goto cleanup;
+}
+
+if (uri->user != NULL || uri->path != NULL ||
+uri->query != NULL || uri->fragment != NULL) {
+virReportError(VIR_ERR_INVALID_ARG,
+   _("only scheme, host and port are supported in "
+ "vz migration URI: %s"), uri_str);
+goto cleanup;
+}
+
+if (STRNEQ(uri->scheme, "tcp")) {
+virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED,
+   _("unsupported scheme %s in migration URI %s"),
+   uri->scheme, uri_str);
+goto cleanup;
+}
+
+ret = 0;
+
+ cleanup:
+if (ret < 0) {
+virURIFree(uri);
+uri = NULL;
+}
+
+return uri;
+}
+
 #define VZ_MIGRATION_FLAGS (VIR_MIGRATE_PEER2PEER)
 
 #define VZ_MIGRATION_PARAMETERS \
 VIR_MIGRATE_PARAM_DEST_NAME,VIR_TYPED_PARAM_STRING, \
+VIR_MIGRATE_PARAM_URI,  VIR_TYPED_PARAM_STRING, \
 NULL
 
 static int
@@ -1417,18 +1460,25 @@ vzDomainMigratePerform3Params(virDomainPtr domain,
 unsigned char session_uuid[VIR_UUID_BUFLEN];
 vzConnPtr privconn = domain-conn-privateData;
 const char *dname = NULL;
+const char *miguri = NULL;
 
 virCheckFlags(flags, -1);
 
 if (virTypedParamsValidate(params, nparams, VZ_MIGRATION_PARAMETERS) < 0)
 goto cleanup;
 
-if (!(vzuri = vzMakeVzUri(dconnuri)))
-goto cleanup;
-
 if (virTypedParamsGetString(params, nparams,
 VIR_MIGRATE_PARAM_DEST_NAME,
-&dname) < 0)
+&dname) < 0 ||
+virTypedParamsGetString(params, nparams,
+VIR_MIGRATE_PARAM_URI, &miguri) < 0)
+goto cleanup;
+
+if (miguri == NULL)
+vzuri = vzMakeVzUri(dconnuri);
+else
+vzuri = vzParseVzURI(miguri);
+if (vzuri == NULL)
 goto cleanup;
 
 if (!(dom = vzDomObjFromDomain(domain)))
-- 
1.7.1



[libvirt] [PATCH v2 2/7] vz: provide a way to get remote session uuid

2015-07-17 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

We need the session uuid of a connection to the destination host on the source
host to perform migration. This patch does all the work of passing this uuid
from the destination node to the source.

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 daemon/remote.c  |   30 ++
 docs/apibuild.py |1 +
 docs/hvsupport.pl|1 +
 src/driver-hypervisor.h  |4 
 src/libvirt-domain.c |   30 ++
 src/libvirt_internal.h   |2 ++
 src/libvirt_private.syms |1 +
 src/remote/remote_driver.c   |   26 ++
 src/remote/remote_protocol.x |   12 +++-
 src/remote_protocol-structs  |1 +
 src/vz/vz_driver.c   |   10 ++
 11 files changed, 117 insertions(+), 1 deletions(-)

diff --git a/daemon/remote.c b/daemon/remote.c
index e9e2dca..b57c3b5 100644
--- a/daemon/remote.c
+++ b/daemon/remote.c
@@ -,6 +,36 @@ remoteDispatchDomainInterfaceAddresses(virNetServerPtr 
server ATTRIBUTE_UNUSED,
 return rv;
 }
 
+static int
+remoteDispatchConnectVzGetSessionUUID(
+virNetServerPtr server ATTRIBUTE_UNUSED,
+virNetServerClientPtr client,
+virNetMessagePtr msg ATTRIBUTE_UNUSED,
+virNetMessageErrorPtr rerr,
+remote_connect_vz_get_session_uuid_ret *ret)
+{
+int rv = -1;
+struct daemonClientPrivate *priv =
+virNetServerClientGetPrivateData(client);
+
+if (!priv->conn) {
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("connection not open"));
+goto cleanup;
+}
+
+/* yes, casting. but why remote_uuid type is signed? */
+if (virConnectVzGetSessionUUID(priv->conn, (unsigned char *)ret->uuid) < 0)
+goto cleanup;
+
+rv = 0;
+
+ cleanup:
+if (rv < 0)
+virNetMessageSaveError(rerr);
+
+return rv;
+}
+
 
 /*- Helpers. -*/
 
diff --git a/docs/apibuild.py b/docs/apibuild.py
index f934fb2..a9d2811 100755
--- a/docs/apibuild.py
+++ b/docs/apibuild.py
@@ -102,6 +102,7 @@ ignored_functions = {
   "virDomainMigratePrepare3Params": "private function for migration",
   "virDomainMigrateConfirm3Params": "private function for migration",
   "virDomainMigratePrepareTunnel3Params": "private function for tunnelled migration",
+  "virConnectVzGetSessionUUID": "private function for vz migration",
 }
 }
 
diff --git a/docs/hvsupport.pl b/docs/hvsupport.pl
index 44a30ce..9a284cd 100755
--- a/docs/hvsupport.pl
+++ b/docs/hvsupport.pl
@@ -199,6 +199,7 @@ $apis{virDomainMigratePrepareTunnel3Params}->{vers} = "1.1.0";
 $apis{virDomainMigratePerform3Params}->{vers} = "1.1.0";
 $apis{virDomainMigrateFinish3Params}->{vers} = "1.1.0";
 $apis{virDomainMigrateConfirm3Params}->{vers} = "1.1.0";
+$apis{virConnectVzGetSessionUUID}->{vers} = "1.2.18";
 
 
 
diff --git a/src/driver-hypervisor.h b/src/driver-hypervisor.h
index 3275343..d7604ec 100644
--- a/src/driver-hypervisor.h
+++ b/src/driver-hypervisor.h
@@ -1207,6 +1207,9 @@ typedef int
const char *password,
unsigned int flags);
 
+typedef int
+(*virDrvConnectVzGetSessionUUID)(virConnectPtr conn, unsigned char* session_uuid);
+
 typedef struct _virHypervisorDriver virHypervisorDriver;
 typedef virHypervisorDriver *virHypervisorDriverPtr;
 
@@ -1437,6 +1440,7 @@ struct _virHypervisorDriver {
 virDrvDomainGetFSInfo domainGetFSInfo;
 virDrvDomainInterfaceAddresses domainInterfaceAddresses;
 virDrvDomainSetUserPassword domainSetUserPassword;
+virDrvConnectVzGetSessionUUID connectVzGetSessionUUID;
 };
 
 
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index 837933f..c398ce2 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -6475,6 +6475,36 @@ virDomainDefineXMLFlags(virConnectPtr conn, const char *xml, unsigned int flags)
 }
 
 
+/*
+ * Not for public use.  This function is part of the internal
+ * implementation of vz migration
+ */
+int
+virConnectVzGetSessionUUID(virConnectPtr conn, unsigned char* session_uuid)
+{
+VIR_DEBUG("conn=%p", conn);
+int ret = -1;
+
+virResetLastError();
+
+virCheckConnectReturn(conn, ret);
+virCheckNonNullArgGoto(session_uuid, cleanup);
+
+if (!conn->driver->connectVzGetSessionUUID) {
+virReportUnsupportedError();
+goto cleanup;
+}
+
+ret = conn->driver->connectVzGetSessionUUID(conn, session_uuid);
+
+ cleanup:
+if (ret < 0)
+virDispatchError(conn);
+
+return ret;
+}
+
+
 /**
  * virDomainUndefine:
  * @domain: pointer to a defined domain
diff --git a/src/libvirt_internal.h b/src/libvirt_internal.h
index 1313b58..340c1bb 100644
--- a/src/libvirt_internal.h
+++ b/src/libvirt_internal.h
@@ -289,4 +289,6 @@ virTypedParameterValidateSet(virConnectPtr conn,
  virTypedParameterPtr params,
  int nparams);
 
+int virConnectVzGetSessionUUID(virConnectPtr conn, unsigned 

[libvirt] [PATCH v2 6/7] vz: support misc migration options

2015-07-17 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

Migration API has a lot of options. This patch intention is to provide
support for those options that can be trivially supported and give
estimation for other options support in this commit message.

I. Supported.

1. VIR_MIGRATE_COMPRESSED. Means 'use compression when migrating domain
memory'. It is supported, but in a quite uncommon way: vz migration demands that
this option be set. This is because vz is hardcoded to move VM memory using
compression. So anyone who wants to migrate a vz domain should set this option,
thus declaring that they know it uses compression.

Why bother? Maybe just support this option and ignore it if it is not set, or
don't support it at all as we can't change the behaviour in this aspect. Well, I
believe that this option is, first, inherent to the hypervisor implementation as
we have the task of moving domain memory to a different place and, second, we
have a tradeoff here between cpu and network resources and some management
should choose the strategy via this option. If we choose the ignoring or
unsupporting implementation, then this option has too vague a meaning. Let's go
into more detail.

First, if we ignore the situation where the option is not set, then we give the
user the false impression that the vz hypervisor doesn't use compression and
thus has lower cpu usage. The second approach is to not support the option. The
main reason not to follow this way is that 'not supported and not set' is
indistinguishable from 'supported and not set' and thus again fools the user.

2. VIR_MIGRATE_LIVE. Means 'reduce domain downtime by suspending it as late
as possible', which technically means 'migrate as much domain memory as possible
before suspending'. Supported in the same manner as VIR_MIGRATE_COMPRESSED, as
both vz VMs and CTs are always migrated via the live scheme.

One may be fooled by the vz sdk flags of the migration api: PVMT_HOT_MIGRATION
(aka live) and PVMT_WARM_MIGRATION (aka normal). The current implementation
ignores these flags and always uses live migration.

3. VIR_MIGRATE_PERSIST_DEST, VIR_MIGRATE_UNDEFINE_SOURCE. These two come
together. Vz domains are always persistent, so we have to demand that
VIR_MIGRATE_PERSIST_DEST is set and VIR_MIGRATE_UNDEFINE_SOURCE is not (and
the latter is done just by not supporting it).

4. VIR_MIGRATE_PAUSED. Means 'don't resume the domain on the destination'. This
is trivially supported as we have a corresponding option in vz migration.

5. VIR_MIGRATE_OFFLINE. Means 'migrate only the XML definition of a domain'. It
is a forcing option, that is, it is ignored if the domain is running and must be
set to migrate a stopped domain. The vz implementation follows this informal
definition with one exception: non-shared disks will be migrated too. This
decision is on par with the VIR_MIGRATE_NON_SHARED_DISK consideration (see the
last part of these notes).

All that said, the minimal command to migrate a vz domain looks like this:

migrate $DOMAIN $DESTINATION --live --persistent --compressed

Not good. Say you want to just migrate a domain without further details: you
will get error messages until you add these options to the command line. I think
there is a lack of a notion of 'default' behaviour in all these aspects. If we
had it we could just issue:

migrate $DOMAIN $DESTINATION

For vz this would give compression by default, for example, and for qemu no
compression by default. Then we could have flags --compressed and --no-compressed,
and for vz the latter would give an unsupported error.

II. Unsupported.

1. VIR_MIGRATE_UNSAFE. Vz disks always have 'cache=none' set (this
is not reflected in the current version of the vz driver and will be fixed
soon). So we don't need to support this option.

2. VIR_MIGRATE_CHANGE_PROTECTION. Unsupported as we have no appropriate
support from the vz sdk. Although we have locks, they are advisory and
can't help us.

3. VIR_MIGRATE_TUNNELLED. Means 'use a libvirtd-to-libvirtd connection
to pass hypervisor migration traffic'. Unsupported as it is not among
vz hypervisor use cases.

4. Direct migration, which is exposed via the *toURI* interfaces with the
VIR_MIGRATE_PEER2PEER flag unset. Means 'migrate without using
libvirtd on the other side'. To support it we would have to add authN
means to the vz driver, as mentioned in the 'backbone patch', which looks
ugly.

5. VIR_MIGRATE_ABORT_ON_ERROR, VIR_MIGRATE_AUTO_CONVERGE,
VIR_MIGRATE_RDMA_PIN_ALL, VIR_MIGRATE_NON_SHARED_INC,
VIR_MIGRATE_PARAM_DEST_XML, VIR_MIGRATE_PARAM_BANDWIDTH,
VIR_MIGRATE_PARAM_GRAPHICS_URI, VIR_MIGRATE_PARAM_LISTEN_ADDRESS,
VIR_MIGRATE_PARAM_MIGRATE_DISKS.
Without further discussion: they are just not use cases of the vz hypervisor.

III. Undecided and thus unsupported.

6. VIR_MIGRATE_NON_SHARED_DISK. The meaning of this option is not clear to me.
Look, if a qemu domain has a non-shared disk then it will refuse to migrate. But
after you specify this option it will refuse too. You need to create an image
file for the disk on the destination side. Only after that can you migrate.
Unexpectedly, the existence of this file is enough to migrate without the option
too. In this case you will 

[libvirt] [PATCH v2 0/7] vz: add migration support

2015-07-17 Thread nshirokovskiy
NOTE that the minimal command to migrate a vz domain looks like this:

virsh -c vz:///system migrate 200 vz+ssh://shiny0/system --p2p --live \
  --persistent --compressed

Difference from v1:

1. The patch is quite different. The first patchset implements migration through
the managed migration scheme. This one goes through the p2p scheme. I believe
this is a better approach. Vz migration is done via the vz sdk, and the first
patchset uses 5-phased migration only to get a token from the destination in the
prepare phase, which is kind of a misuse. This patch just adds a vz specific
function to the driver interface to achieve the same goal.

2. Offline migration is supported as there is no longer a dependency on the
current flow of the managed migration scheme.

 daemon/remote.c  |   30 +
 docs/apibuild.py |1 +
 docs/hvsupport.pl|1 +
 src/driver-hypervisor.h  |4 +
 src/libvirt-domain.c |   30 +
 src/libvirt_internal.h   |2 +
 src/libvirt_private.syms |1 +
 src/remote/remote_driver.c   |   26 +
 src/remote/remote_protocol.x |   12 ++-
 src/remote_protocol-structs  |1 +
 src/vz/vz_driver.c   |  256 ++
 src/vz/vz_sdk.c  |   86 ---
 src/vz/vz_sdk.h  |6 +
 src/vz/vz_utils.h|4 +-
 14 files changed, 444 insertions(+), 16 deletions(-)



[libvirt] [PATCH v2 7/7] vz: cleanup: define vz format of uuids

2015-07-17 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

vz puts uuids into curly braces. Simply introduce a new constant to reflect this
and get rid of the magic +2 in the code.
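
For reference, a minimal sketch (not part of this patch) of how the braced form
can be produced, assuming virUUIDFormat() writes the canonical 36-character
representation plus a terminating NUL:

    /* Illustrative only: vz wraps the canonical UUID in curly braces,
     * e.g. "{12345678-1234-1234-1234-123456789abc}", hence the two extra
     * bytes on top of VIR_UUID_STRING_BUFLEN. */
    static void
    exampleFormatVzUuid(const unsigned char *uuid,
                        char uuidstr[VZ_UUID_STRING_BUFLEN])
    {
        uuidstr[0] = '{';
        virUUIDFormat(uuid, uuidstr + 1);           /* 36 chars + NUL */
        uuidstr[VIR_UUID_STRING_BUFLEN] = '}';      /* overwrite that NUL */
        uuidstr[VIR_UUID_STRING_BUFLEN + 1] = '\0';
    }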

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 src/vz/vz_sdk.c   |   12 ++--
 src/vz/vz_utils.h |2 ++
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index 9a2b5df..bafc6e4 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -236,7 +236,7 @@ prlsdkConnect(vzConnPtr privconn)
 PRL_HANDLE job = PRL_INVALID_HANDLE;
 PRL_HANDLE result = PRL_INVALID_HANDLE;
 PRL_HANDLE response = PRL_INVALID_HANDLE;
-char session_uuid[VIR_UUID_STRING_BUFLEN + 2];
+char session_uuid[VZ_UUID_STRING_BUFLEN];
 PRL_UINT32 buflen = ARRAY_CARDINALITY(session_uuid);
 
 pret = PrlSrv_Create(privconn-server);
@@ -316,7 +316,7 @@ prlsdkUUIDFormat(const unsigned char *uuid, char *uuidstr)
 static PRL_HANDLE
 prlsdkSdkDomainLookupByUUID(vzConnPtr privconn, const unsigned char *uuid)
 {
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 PRL_HANDLE sdkdom = PRL_INVALID_HANDLE;
 
 prlsdkUUIDFormat(uuid, uuidstr);
@@ -365,7 +365,7 @@ prlsdkGetDomainIds(PRL_HANDLE sdkdom,
char **name,
unsigned char *uuid)
 {
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 PRL_UINT32 len;
 PRL_RESULT pret;
 
@@ -1722,7 +1722,7 @@ prlsdkEventsHandler(PRL_HANDLE prlEvent, PRL_VOID_PTR 
opaque)
 vzConnPtr privconn = opaque;
 PRL_RESULT pret = PRL_ERR_FAILURE;
 PRL_HANDLE_TYPE handleType;
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 unsigned char uuid[VIR_UUID_BUFLEN];
 PRL_UINT32 bufsize = ARRAY_CARDINALITY(uuidstr);
 PRL_EVENT_TYPE prlEventType;
@@ -3480,7 +3480,7 @@ prlsdkDoApplyConfig(virConnectPtr conn,
 {
 PRL_RESULT pret;
 size_t i;
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 bool needBoot = true;
 char *mask = NULL;
 
@@ -4070,7 +4070,7 @@ int prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri,
 int ret = -1;
 vzDomObjPtr privdom = dom->privateData;
 PRL_HANDLE job = PRL_INVALID_HANDLE;
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 PRL_UINT32 vzflags = PRLSDK_MIGRATION_FLAGS;
 
 if (flags & VIR_MIGRATE_PAUSED)
diff --git a/src/vz/vz_utils.h b/src/vz/vz_utils.h
index fe54b25..2a59426 100644
--- a/src/vz/vz_utils.h
+++ b/src/vz/vz_utils.h
@@ -55,6 +55,8 @@
 # define PARALLELS_REQUIRED_BRIDGED_NETWORK  Bridged
 # define PARALLELS_BRIDGED_NETWORK_TYPE  bridged
 
+# define VZ_UUID_STRING_BUFLEN (VIR_UUID_STRING_BUFLEN + 2)
+
 struct _vzConn {
 virMutex lock;
 
-- 
1.7.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v2 1/7] vz: save session uuid on login

2015-07-17 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

This session uuid acts as an authN token for different multihost vz operations, one
of which is migration. Unfortunately we can't get it from the server at an arbitrary
time, thus we need to save it at login.

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 src/vz/vz_sdk.c   |   39 +--
 src/vz/vz_utils.h |2 +-
 2 files changed, 30 insertions(+), 11 deletions(-)

diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index 744b58a..f7253de 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -37,6 +37,9 @@
 #define VIR_FROM_THIS VIR_FROM_PARALLELS
 #define JOB_INFINIT_WAIT_TIMEOUT UINT_MAX
 
+static int
+prlsdkUUIDParse(const char *uuidstr, unsigned char *uuid);
+
 VIR_LOG_INIT(parallels.sdk);
 
 /*
@@ -228,24 +231,40 @@ prlsdkDeinit(void)
 int
 prlsdkConnect(vzConnPtr privconn)
 {
-PRL_RESULT ret;
+int ret = -1;
+PRL_RESULT pret;
 PRL_HANDLE job = PRL_INVALID_HANDLE;
+PRL_HANDLE result = PRL_INVALID_HANDLE;
+PRL_HANDLE response = PRL_INVALID_HANDLE;
+char session_uuid[VIR_UUID_STRING_BUFLEN + 2];
+PRL_UINT32 buflen = ARRAY_CARDINALITY(session_uuid);
 
-ret = PrlSrv_Create(&privconn->server);
-if (PRL_FAILED(ret)) {
-logPrlError(ret);
-return -1;
-}
+pret = PrlSrv_Create(&privconn->server);
+prlsdkCheckRetGoto(pret, cleanup);
 
 job = PrlSrv_LoginLocalEx(privconn->server, NULL, 0,
   PSL_HIGH_SECURITY, PACF_NON_INTERACTIVE_MODE);
+if (PRL_FAILED(getJobResult(job, &result)))
+goto cleanup;
 
-if (waitJob(job)) {
+pret = PrlResult_GetParam(result, &response);
+prlsdkCheckRetGoto(pret, cleanup);
+
+pret = PrlLoginResponse_GetSessionUuid(response, session_uuid, &buflen);
+prlsdkCheckRetGoto(pret, cleanup);
+
+if (prlsdkUUIDParse(session_uuid, privconn->session_uuid) < 0)
+goto cleanup;
+
+ret = 0;
+
+ cleanup:
+if (ret < 0)
 PrlHandle_Free(privconn->server);
-return -1;
-}
+PrlHandle_Free(result);
+PrlHandle_Free(response);
 
-return 0;
+return ret;
 }
 
 void
diff --git a/src/vz/vz_utils.h b/src/vz/vz_utils.h
index db09647..fe54b25 100644
--- a/src/vz/vz_utils.h
+++ b/src/vz/vz_utils.h
@@ -60,7 +60,7 @@ struct _vzConn {
 
 /* Immutable pointer, self-locking APIs */
 virDomainObjListPtr domains;
-
+unsigned char session_uuid[VIR_UUID_BUFLEN];
 PRL_HANDLE server;
 virStoragePoolObjList pools;
 virNetworkObjListPtr networks;
-- 
1.7.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v2 4/7] vz: support domain rename on migrate

2015-07-17 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 src/vz/vz_driver.c |   12 ++--
 src/vz/vz_sdk.c|   16 +---
 src/vz/vz_sdk.h|5 -
 3 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index d7b93fb..8087165 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1395,7 +1395,9 @@ vzMakeVzUri(const char *connuri_str)
 
 #define VZ_MIGRATION_FLAGS (VIR_MIGRATE_PEER2PEER)
 
-#define VZ_MIGRATION_PARAMETERS (NULL)
+#define VZ_MIGRATION_PARAMETERS \
+VIR_MIGRATE_PARAM_DEST_NAME,VIR_TYPED_PARAM_STRING, \
+NULL
 
 static int
 vzDomainMigratePerform3Params(virDomainPtr domain,
@@ -1414,6 +1416,7 @@ vzDomainMigratePerform3Params(virDomainPtr domain,
 virURIPtr vzuri = NULL;
 unsigned char session_uuid[VIR_UUID_BUFLEN];
 vzConnPtr privconn = domain-conn-privateData;
+const char *dname = NULL;
 
 virCheckFlags(flags, -1);
 
@@ -1423,6 +1426,11 @@ vzDomainMigratePerform3Params(virDomainPtr domain,
 if (!(vzuri = vzMakeVzUri(dconnuri)))
 goto cleanup;
 
+if (virTypedParamsGetString(params, nparams,
+VIR_MIGRATE_PARAM_DEST_NAME,
+dname)  0)
+goto cleanup;
+
 if (!(dom = vzDomObjFromDomain(domain)))
 goto cleanup;
 
@@ -1437,7 +1445,7 @@ vzDomainMigratePerform3Params(virDomainPtr domain,
 if (virConnectVzGetSessionUUID(dconn, session_uuid)  0)
 goto cleanup;
 
-if (prlsdkMigrate(dom, vzuri, session_uuid)  0)
+if (prlsdkMigrate(dom, vzuri, session_uuid, dname)  0)
 goto cleanup;
 
 virDomainObjListRemove(privconn-domains, dom);
diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index 783438d..89a2429 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -4064,7 +4064,8 @@ prlsdkGetMemoryStats(virDomainObjPtr dom,
 #define PRLSDK_MIGRATION_FLAGS (PSL_HIGH_SECURITY)
 
 int prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri,
-  const unsigned char *session_uuid)
+  const unsigned char *session_uuid,
+  const char *dname)
 {
 int ret = -1;
 vzDomObjPtr privdom = dom-privateData;
@@ -4072,12 +4073,13 @@ int prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri,
 char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
 
 prlsdkUUIDFormat(session_uuid, uuidstr);
-job = PrlVm_MigrateEx(privdom-sdkdom, uri-server, uri-port, uuidstr,
-  , /* use default dir for migrated instance bundle 
*/
-  PRLSDK_MIGRATION_FLAGS,
-  0, /* reserved flags */
-  PRL_TRUE /* don't ask for confirmations */
-  );
+job = PrlVm_MigrateWithRenameEx(privdom-sdkdom, uri-server, uri-port, 
uuidstr,
+dname == NULL ?  : dname,
+, /* use default dir for migrated 
instance bundle */
+PRLSDK_MIGRATION_FLAGS,
+0, /* reserved flags */
+PRL_TRUE /* don't ask for confirmations */
+);
 
 if (PRL_FAILED(waitJob(job)))
 goto cleanup;
diff --git a/src/vz/vz_sdk.h b/src/vz/vz_sdk.h
index d3f0caf..0aa70b3 100644
--- a/src/vz/vz_sdk.h
+++ b/src/vz/vz_sdk.h
@@ -77,4 +77,7 @@ prlsdkGetVcpuStats(virDomainObjPtr dom, int idx, unsigned 
long long *time);
 int
 prlsdkGetMemoryStats(virDomainObjPtr dom, virDomainMemoryStatPtr stats, 
unsigned int nr_stats);
 int
-prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri, const char unsigned 
*session_uuid);
+prlsdkMigrate(virDomainObjPtr dom,
+  virURIPtr uri,
+  const char unsigned *session_uuid,
+  const char *dname);
-- 
1.7.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 4/6] vz: support migration uri

2015-07-13 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 src/vz/vz_driver.c |   52 +++-
 1 files changed, 51 insertions(+), 1 deletions(-)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index a42597c..9fefac1 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1356,6 +1356,7 @@ vzConnectSupportsFeature(virConnectPtr conn 
ATTRIBUTE_UNUSED, int feature)
 
 #define VZ_MIGRATION_PARAMETERS \
 VIR_MIGRATE_PARAM_DEST_NAME,VIR_TYPED_PARAM_STRING, \
+VIR_MIGRATE_PARAM_URI,  VIR_TYPED_PARAM_STRING, \
 NULL
 
 static char *
@@ -1509,6 +1510,45 @@ vzParseCookie2(const char *xml, unsigned char 
*domain_uuid)
 return ret;
 }
 
+/* return copy of 'in' and check it is correct */
+static char *
+vzAdaptInUri(const char *in)
+{
+virURIPtr uri = NULL;
+char *out = NULL;
+
+uri = virURIParse(in);
+
+if (uri->scheme == NULL || uri->server == NULL) {
+virReportError(VIR_ERR_INVALID_ARG,
+   _("scheme and host are mandatory vz migration URI: %s"),
+   in);
+goto cleanup;
+}
+
+if (uri->user != NULL || uri->path != NULL ||
+uri->query != NULL || uri->fragment != NULL) {
+virReportError(VIR_ERR_INVALID_ARG,
+   _("only scheme, host and port are supported in "
+ "vz migration URI: %s"), in);
+goto cleanup;
+}
+
+if (STRNEQ(uri->scheme, "tcp")) {
+virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED,
+   _("unsupported scheme %s in migration URI %s"),
+   uri->scheme, in);
+goto cleanup;
+}
+
+if (VIR_STRDUP(out, in) < 0)
+goto cleanup;
+
+ cleanup:
+virURIFree(uri);
+return out;
+}
+
 static int
 vzDomainMigratePrepare3Params(virConnectPtr dconn,
   virTypedParameterPtr params ATTRIBUTE_UNUSED,
@@ -1522,6 +1562,11 @@ vzDomainMigratePrepare3Params(virConnectPtr dconn,
 {
 vzConnPtr privconn = dconn-privateData;
 int ret = -1;
+const char *uri = NULL;
+
+if (virTypedParamsGetString(params, nparams,
+VIR_MIGRATE_PARAM_URI, uri)  0)
+goto cleanup;
 
 *cookieout = NULL;
 *uri_out = NULL;
@@ -1530,7 +1575,12 @@ vzDomainMigratePrepare3Params(virConnectPtr dconn,
 goto cleanup;
 *cookieoutlen = strlen(*cookieout) + 1;
 
-if (!(*uri_out = vzCreateMigrateUri()))
+if (uri == NULL)
+*uri_out = vzCreateMigrateUri();
+else
+*uri_out = vzAdaptInUri(uri);
+
+if (*uri_out == NULL)
 goto cleanup;
 
 ret = 0;
-- 
1.7.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 5/6] vz: support misc migration options

2015-07-13 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

The migration API has a lot of options. The intention of this patch is to provide
support for those options that can be trivially supported and to give an estimate
for the support of the other options in this commit message.

I. Supported.

1. VIR_MIGRATE_COMPRESSED. Means 'use compression when migrating domain
memory'. It is supported but in a quite uncommon way: vz migration demands that this
option be set. This is because vz is hardcoded to move VM memory using
compression. So anyone who wants to migrate a vz domain should set this option,
thus declaring that they know it uses compression.

Why bother? Maybe just support this option and ignore it if it is not set, or
don't support it at all as we can't change behaviour in this aspect.  Well I
believe that this option is, first, inherent to the hypervisor implementation as
we have a task of moving domain memory to a different place and, second, we have
a tradeoff here between cpu and network resources and some management should
choose the strategy via this option. If we choose the ignoring or unsupporting
implementation then this option has too vague a meaning. Let's go into more
detail.

First, if we ignore the situation where the option is not set then we lead the user into
the fallacy that the vz hypervisor doesn't use compression and thus has lower cpu
usage. The second approach is to not support the option. The main reason not to
follow this way is that 'not supported and not set' is indistinguishable from
'supported and not set' and thus again fools the user. (A short sketch of how the
driver can insist on the flag follows after this list.)

2. VIR_MIGRATE_LIVE. Means 'reduce domain downtime by suspending it as late
as possible' which technically means 'migrate as much domain memory as possible
before suspending'. Supported in the same manner as VIR_MIGRATE_COMPRESSED as both
vz VMs and CTs are always migrated via the live scheme.

One may be fooled by the vz sdk flags of the migration api: PVMT_HOT_MIGRATION (aka
live) and PVMT_WARM_MIGRATION (aka normal). The current implementation ignores these
flags and always uses live migration.

3. VIR_MIGRATE_PERSIST_DEST, VIR_MIGRATE_UNDEFINE_SOURCE. These two come
together. Vz domains are always persistent so we have to demand that
VIR_MIGRATE_PERSIST_DEST is set and VIR_MIGRATE_UNDEFINE_SOURCE is not (and
the latter is done just by not supporting it).

4. VIR_MIGRATE_PAUSED. Means 'don't resume the domain on the destination'. This is
trivially supported as we have a corresponding option in vz migration.
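
To make the point about VIR_MIGRATE_COMPRESSED concrete, here is a hedged sketch
(not the literal patch code; the error wording and exact placement are illustrative)
of how the perform step can insist on the flag:

    /* Illustrative fragment only: refuse to migrate unless the caller
     * acknowledged compression by passing VIR_MIGRATE_COMPRESSED. */
    if (!(flags & VIR_MIGRATE_COMPRESSED)) {
        virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
                       _("vz always migrates memory compressed, so the "
                         "VIR_MIGRATE_COMPRESSED flag must be set"));
        goto cleanup;
    }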

All that said, the minimal command to migrate a vz domain looks like this:

migrate $DOMAIN $DESTINATION --live --persistent --compressed.

Not good. If you just want to migrate a domain without further
details you will get error messages until you add these options to the
command line. I think there is a lack of a notion of 'default' behaviour
in all these aspects. If we had it we could just issue:

migrate $DOMAIN $DESTINATION

For vz this would give compression by default, for qemu - no compression
by default. Then we could have flags --compressed and --no-compressed
and for vz the latter would give an unsupported error.

II. Unsupported.

1. VIR_MIGRATE_UNSAFE. Vz disks always have 'cache=none' set (this
is not reflected in the current version of the vz driver and will be fixed
soon). So we do not need to support this option.

2. VIR_MIGRATE_CHANGE_PROTECTION. Unsupported as we have no appropriate
support from the vz sdk. Although we have locks they are advisory and
can't help us.

3. VIR_MIGRATE_TUNNELLED. Means 'use a libvirtd to libvirtd connection
to pass hypervisor migration traffic'. Unsupported as it is not among the
vz hypervisor use cases. Moreover this feature only has meaning
for peer2peer migration, which is not implemented in this patch set.

4. Direct migration. This is exposed via the *toURI* interfaces with the
VIR_MIGRATE_PEER2PEER flag unset. Means 'migrate without using
libvirtd on the other side'. To support it we would have to add authN
means to the vz driver as mentioned in the 'backbone patch', which looks
ugly.

5. VIR_MIGRATE_ABORT_ON_ERROR, VIR_MIGRATE_AUTO_CONVERGE,
VIR_MIGRATE_RDMA_PIN_ALL, VIR_MIGRATE_NON_SHARED_INC,
VIR_MIGRATE_PARAM_DEST_XML, VIR_MIGRATE_PARAM_BANDWIDTH,
VIR_MIGRATE_PARAM_GRAPHICS_URI, VIR_MIGRATE_PARAM_LISTEN_ADDRESS,
VIR_MIGRATE_PARAM_MIGRATE_DISKS.
Without further discussion. They are just not use cases of the vz hypervisor.

III. Unimplemented.

1. VIR_MIGRATE_OFFLINE. Means 'migrate only the XML definition of a domain'.
Actually the same vz sdk call supports offline migration but nevertheless we
don't get it for free for vz domains because in case of offline migration only
the 'begin' and 'prepare' steps are performed, while we can't issue the vz migration
command earlier than the 'perform' step as we need the authN cookie. So extra
work needs to be done, which goes into a different patchset.

2. VIR_MIGRATE_PEER2PEER. Means 'the whole migration management should
be done by the daemon on the source side'. QEMU does this but
at the cost of heavily (by my estimate) duplicated client side migration
management code. We can do this too for vz or even better - refactor
and then 

[libvirt] [PATCH 1/6] vz: add migration backbone code

2015-07-13 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

This patch makes basic vz migration possible. For example, with virsh:
virsh -c vz:///system migrate $NAME vz+ssh://$DST/system

Vz migration is implemented through the interface for managed migrations of drivers,
although it looks like a candidate for direct migration as all the work is done by
the vz sdk. The reason is that the vz sdk lacks the rich remote authentication capabilities
of libvirt, and if we chose to implement direct migration we would have to
reimplement the auth means of libvirt. This brings the requirement that the destination
side should have a running libvirt daemon. This is not a problem as vz is
moving in the direction of tight integration with libvirt.

Another issue of this choice is that if the managed migration fails at the
'finish' step the driver is supposed to resume on the source.  This is not compatible
with vz sdk migration but this can be overcome without losing consistency,
see comments in the code.

Technically we have a libvirt connection to the destination in the managed migration
scheme and we use this connection to obtain a session_uuid (which acts as an authZ
token) for vz migration. This uuid is passed from the destination through a cookie
on the 'prepare' step.

A few words on the vz migration uri. I'd probably use just 'hostname:port' uris as
we don't have different migration schemes in vz, but the scheme part is mandatory,
so 'tcp' is used. Looks like a good name.
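
For example (the hostname here is purely illustrative), the URI handed back to the
source on the 'prepare' step would look like:

    tcp://dst.example.com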

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 src/vz/vz_driver.c |  250 
 src/vz/vz_sdk.c|   79 ++--
 src/vz/vz_sdk.h|2 +
 src/vz/vz_utils.h  |1 +
 4 files changed, 322 insertions(+), 10 deletions(-)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index 9f0c52f..e003646 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1343,6 +1343,250 @@ vzDomainMemoryStats(virDomainPtr domain,
 return ret;
 }
 
+static int
+vzConnectSupportsFeature(virConnectPtr conn ATTRIBUTE_UNUSED, int feature)
+{
+switch (feature) {
+case VIR_DRV_FEATURE_MIGRATION_PARAMS:
+return 1;
+default:
+return 0;
+}
+}
+
+#define VZ_MIGRATION_PARAMETERS NULL
+
+static char *
+vzDomainMigrateBegin3Params(virDomainPtr domain,
+virTypedParameterPtr params,
+int nparams,
+char **cookieout ATTRIBUTE_UNUSED,
+int *cookieoutlen ATTRIBUTE_UNUSED,
+unsigned int fflags ATTRIBUTE_UNUSED)
+{
+virDomainObjPtr dom = NULL;
+char *xml = NULL;
+
+if (virTypedParamsValidate(params, nparams, VZ_MIGRATION_PARAMETERS)  0)
+goto cleanup;
+
+if (!(dom = vzDomObjFromDomain(domain)))
+goto cleanup;
+
+xml = virDomainDefFormat(dom-def, VIR_DOMAIN_DEF_FORMAT_SECURE);
+
+ cleanup:
+if (dom)
+virObjectUnlock(dom);
+
+return xml;
+}
+
+/* return 'hostname' */
+static char *
+vzCreateMigrateUri(void)
+{
+char *hostname = NULL;
+char *out = NULL;
+virURI uri = {};
+
+if ((hostname = virGetHostname()) == NULL)
+goto cleanup;
+
+if (STRPREFIX(hostname, "localhost")) {
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("hostname on destination resolved to localhost, "
+  "but migration requires an FQDN"));
+goto cleanup;
+}
+
+/* to set const string to non-const */
+if (VIR_STRDUP(uri.scheme, "tcp") < 0)
+goto cleanup;
+uri.server = hostname;
+out = virURIFormat(&uri);
+
+ cleanup:
+VIR_FREE(hostname);
+VIR_FREE(uri.scheme);
+return out;
+}
+
+static int
+vzDomainMigratePrepare3Params(virConnectPtr dconn,
+  virTypedParameterPtr params ATTRIBUTE_UNUSED,
+  int nparams ATTRIBUTE_UNUSED,
+  const char *cookiein ATTRIBUTE_UNUSED,
+  int cookieinlen ATTRIBUTE_UNUSED,
+  char **cookieout,
+  int *cookieoutlen,
+  char **uri_out,
+  unsigned int fflags ATTRIBUTE_UNUSED)
+{
+vzConnPtr privconn = dconn-privateData;
+int ret = -1;
+char uuidstr[VIR_UUID_STRING_BUFLEN];
+
+*cookieout = NULL;
+*uri_out = NULL;
+
+virUUIDFormat(privconn-session_uuid, uuidstr);
+if (VIR_STRDUP(*cookieout, uuidstr)  0)
+goto cleanup;
+*cookieoutlen = strlen(*cookieout) + 1;
+
+if (!(*uri_out = vzCreateMigrateUri()))
+goto cleanup;
+
+ret = 0;
+
+ cleanup:
+if (ret != 0) {
+VIR_FREE(*cookieout);
+VIR_FREE(*uri_out);
+*cookieoutlen = 0;
+}
+
+return ret;
+}
+
+static int
+vzDomainMigratePerform3Params(virDomainPtr domain,
+  const char *dconnuri ATTRIBUTE_UNUSED,
+  virTypedParameterPtr params,
+   

[libvirt] [PATCH 3/6] vz: support domain rename on migrate

2015-07-13 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

---
 src/vz/vz_driver.c |   12 +---
 src/vz/vz_sdk.c|5 +++--
 src/vz/vz_sdk.h|5 -
 3 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index d5cbdc6..a42597c 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1354,7 +1354,9 @@ vzConnectSupportsFeature(virConnectPtr conn 
ATTRIBUTE_UNUSED, int feature)
 }
 }
 
-#define VZ_MIGRATION_PARAMETERS NULL
+#define VZ_MIGRATION_PARAMETERS \
+VIR_MIGRATE_PARAM_DEST_NAME,VIR_TYPED_PARAM_STRING, \
+NULL
 
 static char *
 vzDomainMigrateBegin3Params(virDomainPtr domain,
@@ -1558,12 +1560,16 @@ vzDomainMigratePerform3Params(virDomainPtr domain,
 virDomainObjPtr dom = NULL;
 const char *uri = NULL;
 unsigned char session_uuid[VIR_UUID_BUFLEN];
+const char *dname = NULL;
 
 *cookieout = NULL;
 
 if (virTypedParamsGetString(params, nparams,
 VIR_MIGRATE_PARAM_URI,
-uri)  0)
+uri)  0 ||
+virTypedParamsGetString(params, nparams,
+VIR_MIGRATE_PARAM_DEST_NAME,
+dname)  0)
 goto cleanup;
 
 if (!(dom = vzDomObjFromDomain(domain)))
@@ -1578,7 +1584,7 @@ vzDomainMigratePerform3Params(virDomainPtr domain,
 if (vzParseCookie1(cookiein, session_uuid)  0)
 goto cleanup;
 
-if (prlsdkMigrate(dom, uri, session_uuid)  0)
+if (prlsdkMigrate(dom, uri, session_uuid, dname)  0)
 goto cleanup;
 
 if (!(*cookieout = vzFormatCookie2(dom-def-uuid)))
diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index a329c68..f1fa6da 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -4067,7 +4067,7 @@ prlsdkGetMemoryStats(virDomainObjPtr dom,
 #define PRLSDK_MIGRATION_FLAGS (PSL_HIGH_SECURITY)
 
 int prlsdkMigrate(virDomainObjPtr dom, const char* uri_str,
-  const unsigned char *session_uuid)
+  const unsigned char *session_uuid, const char *dname)
 {
 int ret = -1;
 vzDomObjPtr privdom = dom-privateData;
@@ -4081,7 +4081,8 @@ int prlsdkMigrate(virDomainObjPtr dom, const char* 
uri_str,
 goto cleanup;
 
 prlsdkUUIDFormat(session_uuid, uuidstr);
-job = PrlVm_MigrateEx(privdom-sdkdom, uri-server, uri-port, uuidstr,
+job = PrlVm_MigrateWithRenameEx(privdom-sdkdom, uri-server, uri-port, 
uuidstr,
+  dname == NULL ?  : dname,
   , /* use default dir for migrated instance bundle 
*/
   PRLSDK_MIGRATION_FLAGS,
   0, /* reserved flags */
diff --git a/src/vz/vz_sdk.h b/src/vz/vz_sdk.h
index 1a90eca..971f913 100644
--- a/src/vz/vz_sdk.h
+++ b/src/vz/vz_sdk.h
@@ -77,4 +77,7 @@ prlsdkGetVcpuStats(virDomainObjPtr dom, int idx, unsigned 
long long *time);
 int
 prlsdkGetMemoryStats(virDomainObjPtr dom, virDomainMemoryStatPtr stats, 
unsigned int nr_stats);
 int
-prlsdkMigrate(virDomainObjPtr dom, const char* uri_str, const char unsigned 
*session_uuid);
+prlsdkMigrate(virDomainObjPtr dom,
+  const char* uri_str,
+  const unsigned char *session_uuid,
+  const char* dname);
-- 
1.7.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 6/6] vz: cleanup: define vz format of uuids

2015-07-13 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

vz puts uuids into curly braces. Simply introduce a new constant to reflect this
and get rid of the magic +2 in the code.

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 src/vz/vz_sdk.c   |   12 ++--
 src/vz/vz_utils.h |2 ++
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index 7646796..187fcec 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -239,7 +239,7 @@ prlsdkConnect(vzConnPtr privconn)
 PRL_HANDLE job = PRL_INVALID_HANDLE;
 PRL_HANDLE result = PRL_INVALID_HANDLE;
 PRL_HANDLE response = PRL_INVALID_HANDLE;
-char session_uuid[VIR_UUID_STRING_BUFLEN + 2];
+char session_uuid[VZ_UUID_STRING_BUFLEN];
 PRL_UINT32 buflen = ARRAY_CARDINALITY(session_uuid);
 
 pret = PrlSrv_Create(privconn-server);
@@ -319,7 +319,7 @@ prlsdkUUIDFormat(const unsigned char *uuid, char *uuidstr)
 static PRL_HANDLE
 prlsdkSdkDomainLookupByUUID(vzConnPtr privconn, const unsigned char *uuid)
 {
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 PRL_HANDLE sdkdom = PRL_INVALID_HANDLE;
 
 prlsdkUUIDFormat(uuid, uuidstr);
@@ -368,7 +368,7 @@ prlsdkGetDomainIds(PRL_HANDLE sdkdom,
char **name,
unsigned char *uuid)
 {
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 PRL_UINT32 len;
 PRL_RESULT pret;
 
@@ -1725,7 +1725,7 @@ prlsdkEventsHandler(PRL_HANDLE prlEvent, PRL_VOID_PTR 
opaque)
 vzConnPtr privconn = opaque;
 PRL_RESULT pret = PRL_ERR_FAILURE;
 PRL_HANDLE_TYPE handleType;
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 unsigned char uuid[VIR_UUID_BUFLEN];
 PRL_UINT32 bufsize = ARRAY_CARDINALITY(uuidstr);
 PRL_EVENT_TYPE prlEventType;
@@ -3483,7 +3483,7 @@ prlsdkDoApplyConfig(virConnectPtr conn,
 {
 PRL_RESULT pret;
 size_t i;
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 bool needBoot = true;
 char *mask = NULL;
 
@@ -4073,7 +4073,7 @@ int prlsdkMigrate(virDomainObjPtr dom, const char* 
uri_str,
 vzDomObjPtr privdom = dom-privateData;
 virURIPtr uri = NULL;
 PRL_HANDLE job = PRL_INVALID_HANDLE;
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 PRL_UINT32 vzflags = PRLSDK_MIGRATION_FLAGS;
 
 uri = virURIParse(uri_str);
diff --git a/src/vz/vz_utils.h b/src/vz/vz_utils.h
index a779b03..98a8f77 100644
--- a/src/vz/vz_utils.h
+++ b/src/vz/vz_utils.h
@@ -55,6 +55,8 @@
 # define PARALLELS_REQUIRED_BRIDGED_NETWORK  Bridged
 # define PARALLELS_BRIDGED_NETWORK_TYPE  bridged
 
+# define VZ_UUID_STRING_BUFLEN (VIR_UUID_STRING_BUFLEN + 2)
+
 struct _vzConn {
 virMutex lock;
 
-- 
1.7.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 2/6] vz: pass cookies in xml form

2015-07-13 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

This way we can easily keep backward compatibility
in the future.

Use 2 distinct cookie formats:
1 - between the 'prepare' and 'perform' phases
2 - between the 'perform' and 'finish' phases
I see no reason to use a unified format like in qemu yet.
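
For illustration, the payload carried by the two cookies, as produced by the
formatters in the diff below, looks like this (the UUID values are placeholders):

    <vz-migration1>
    <session_uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</session_uuid>
    </vz-migration1>

    <vz-migration2>
    <domain_uuid>0e6f2a14-7a42-4f6a-9c3b-2b1f6d0e4a55</domain_uuid>
    </vz-migration2>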

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 src/vz/vz_driver.c |  111 ++--
 src/vz/vz_sdk.c|3 +
 2 files changed, 102 insertions(+), 12 deletions(-)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index e003646..d5cbdc6 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1412,6 +1412,101 @@ vzCreateMigrateUri(void)
 return out;
 }
 
+static char*
+vzFormatCookie1(const unsigned char *session_uuid)
+{
+char uuidstr[VIR_UUID_STRING_BUFLEN];
+virBuffer buf = VIR_BUFFER_INITIALIZER;
+
+virBufferAddLit(&buf, "<vz-migration1>\n");
+virUUIDFormat(session_uuid, uuidstr);
+virBufferAsprintf(&buf, "<session_uuid>%s</session_uuid>\n", uuidstr);
+virBufferAddLit(&buf, "</vz-migration1>\n");
+
+if (virBufferCheckError(&buf) < 0)
+return NULL;
+
+return virBufferContentAndReset(&buf);
+}
+
+static char*
+vzFormatCookie2(const unsigned char *domain_uuid)
+{
+char uuidstr[VIR_UUID_STRING_BUFLEN];
+virBuffer buf = VIR_BUFFER_INITIALIZER;
+
+virBufferAddLit(&buf, "<vz-migration2>\n");
+virUUIDFormat(domain_uuid, uuidstr);
+virBufferAsprintf(&buf, "<domain_uuid>%s</domain_uuid>\n", uuidstr);
+virBufferAddLit(&buf, "</vz-migration2>\n");
+
+if (virBufferCheckError(&buf) < 0)
+return NULL;
+
+return virBufferContentAndReset(&buf);
+}
+
+static int
+vzParseCookie1(const char *xml, unsigned char *session_uuid)
+{
+xmlDocPtr doc = NULL;
+xmlXPathContextPtr ctx = NULL;
+char *tmp = NULL;
+int ret = -1;
+
+if (!(doc = virXMLParseStringCtxt(xml, _("(_migration_cookie)"), &ctx)))
+goto cleanup;
+
+if (!(tmp = virXPathString("string(./session_uuid[1])", ctx))) {
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   "%s", _("missing session_uuid element in migration data"));
+goto cleanup;
+}
+if (virUUIDParse(tmp, session_uuid) < 0) {
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   "%s", _("malformed session_uuid element in migration data"));
+goto cleanup;
+}
+ret = 0;
+
+ cleanup:
+xmlXPathFreeContext(ctx);
+xmlFreeDoc(doc);
+VIR_FREE(tmp);
+
+return ret;
+}
+
+static int
+vzParseCookie2(const char *xml, unsigned char *domain_uuid)
+{
+xmlDocPtr doc = NULL;
+xmlXPathContextPtr ctx = NULL;
+char *tmp = NULL;
+int ret = -1;
+if (!(doc = virXMLParseStringCtxt(xml, _("(_migration_cookie)"), &ctx)))
+goto cleanup;
+
+if (!(tmp = virXPathString("string(./domain_uuid[1])", ctx))) {
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   "%s", _("missing domain_uuid element in migration data"));
+goto cleanup;
+}
+if (virUUIDParse(tmp, domain_uuid) < 0) {
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   "%s", _("malformed domain_uuid element in migration data"));
+goto cleanup;
+}
+ret = 0;
+
+ cleanup:
+xmlXPathFreeContext(ctx);
+xmlFreeDoc(doc);
+VIR_FREE(tmp);
+
+return ret;
+}
+
 static int
 vzDomainMigratePrepare3Params(virConnectPtr dconn,
   virTypedParameterPtr params ATTRIBUTE_UNUSED,
@@ -1425,13 +1520,11 @@ vzDomainMigratePrepare3Params(virConnectPtr dconn,
 {
 vzConnPtr privconn = dconn-privateData;
 int ret = -1;
-char uuidstr[VIR_UUID_STRING_BUFLEN];
 
 *cookieout = NULL;
 *uri_out = NULL;
 
-virUUIDFormat(privconn-session_uuid, uuidstr);
-if (VIR_STRDUP(*cookieout, uuidstr)  0)
+if (!(*cookieout = vzFormatCookie1(privconn-session_uuid)))
 goto cleanup;
 *cookieoutlen = strlen(*cookieout) + 1;
 
@@ -1465,7 +1558,6 @@ vzDomainMigratePerform3Params(virDomainPtr domain,
 virDomainObjPtr dom = NULL;
 const char *uri = NULL;
 unsigned char session_uuid[VIR_UUID_BUFLEN];
-char uuidstr[VIR_UUID_STRING_BUFLEN];
 
 *cookieout = NULL;
 
@@ -1483,14 +1575,13 @@ vzDomainMigratePerform3Params(virDomainPtr domain,
 goto cleanup;
 }
 
-if (virUUIDParse(cookiein, session_uuid)  0)
+if (vzParseCookie1(cookiein, session_uuid)  0)
 goto cleanup;
 
 if (prlsdkMigrate(dom, uri, session_uuid)  0)
 goto cleanup;
 
-virUUIDFormat(domain-uuid, uuidstr);
-if (VIR_STRDUP(*cookieout, uuidstr)  0)
+if (!(*cookieout = vzFormatCookie2(dom-def-uuid)))
 goto cleanup;
 *cookieoutlen = strlen(*cookieout) + 1;
 
@@ -1539,12 +1630,8 @@ vzDomainMigrateFinish3Params(virConnectPtr dconn,
 if (cancelled)
 return NULL;
 
-if (virUUIDParse(cookiein, domain_uuid)  0) {
-virReportError(VIR_ERR_INTERNAL_ERROR,
-   _(Could not parse 

[libvirt] [PATCH v2 0/3] driver level connection close event

2015-06-25 Thread nshirokovskiy
Notify of the connection close event from the parallels driver, (possibly) wrapped in
the remote driver.

Changes from v1:
1. fix comment style issues
2. remove spurious whitespaces
3. move the rpc related part from the vz patch to the second (rpc) patch
4. remove unnecessary locks for the immutable closeCallback in the first patch.

Discussion.

In patches 1 and 2 we are forced into some decisions because we don't have
weak reference mechanics.

1 patch.
---
virConnectCloseCallback is introduced because we cannot reference the
connection object itself when setting a network layer callback, because of how
connection close works.

The connection close procedure is as follows:
1. the client closes the connection
2. at this point nobody else references the connection and it is disposed
3. the connection dispose unreferences the network connection
4. the network connection is disposed

Thus if we reference the connection in the network close callback we never get to step 2.
virConnectCloseCallback breaks this cycle but at the cost that clients MUST
unregister explicitly before closing the connection. This is not good as this
unregistration is not really needed. The client is not saying that it does not
want to receive events anymore but is rather forced to obey some
implementation-driven rules.

2 patch.
---
We impose requirements on driver implementations, which is fragile. Moreover we
again need to make explicit unregistrations. The implementation of domain events
illustrates this point. remoteDispatchConnectDomainEventRegister does not
reference the NetClient and unregisters before the NetClient is disposed, but
drivers do not meet the formulated requirements. The object event system releases
its lock before delivering an event for re-entrance purposes.

In short we have 2 undesired consequences here.
1. Mandatory unregistration.
2. Imposing multi-threading requirements.

The introduction of weak pointers could free us from these artifacts. The following
weak reference workflow illustrates this (a toy sketch follows after the list).

1. Take a weak reference on the object of interest before passing it to the other
   party. This doesn't break the disposing mechanics as a weak reference does not
prevent the object from being disposed. The object is disposed but its memory is not
freed yet while weak references remain.

2. When the callback is called we can safely check whether the pointer is dangling,
   as we took a weak reference before.

3. Releasing the weak reference triggers the memory freeing if there are no more
   weak references.
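
A self-contained toy illustration of the idea (plain C, not libvirt code, no
locking or thread-safety; all names are made up):

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        void *obj;      /* NULL once the object has been disposed */
        int weakRefs;   /* number of outstanding weak handles */
    } WeakRef;

    static WeakRef *weakRefNew(void *obj)      /* step 1: weak, not strong */
    {
        WeakRef *w = calloc(1, sizeof(*w));
        w->obj = obj;
        w->weakRefs = 1;
        return w;
    }

    static void weakRefClear(WeakRef *w)       /* called on object disposal */
    {
        w->obj = NULL;
    }

    static void weakRefUnref(WeakRef *w)       /* step 3: may free the handle */
    {
        if (--w->weakRefs == 0)
            free(w);
    }

    static void onCloseEvent(WeakRef *w)       /* step 2: check before use */
    {
        if (w->obj)
            printf("deliver close event for %p\n", w->obj);
        else
            printf("object already disposed, event dropped\n");
    }

    int main(void)
    {
        int dummyConn = 0;
        WeakRef *w = weakRefNew(&dummyConn);
        weakRefClear(w);     /* the connection goes away first... */
        onCloseEvent(w);     /* ...but the late callback is still safe */
        weakRefUnref(w);     /* last weak unref frees the handle */
        return 0;
    }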

 daemon/libvirtd.h|1 +
 daemon/remote.c  |   86 +++
 src/datatypes.c  |  115 +++--
 src/datatypes.h  |   21 ++--
 src/driver-hypervisor.h  |   12 
 src/libvirt-host.c   |   77 +---
 src/remote/remote_driver.c   |  106 +-
 src/remote/remote_protocol.x |   24 -
 src/remote_protocol-structs  |6 ++
 src/vz/vz_driver.c   |   26 +
 src/vz/vz_sdk.c  |   29 +++
 src/vz/vz_utils.h|3 +

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v2 2/3] daemon: relay connection close event related functions

2015-06-25 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

Add connection close subscription/unsubscription and event rpc.
Now the remote driver fires the connection close event in 2 cases:

1. the connection to the daemon is closed, as previously
2. as a relay of the connection close event from the wrapped driver

As commented in remoteDispatchConnectCloseCallbackRegister we impose
some multi-thread requirements on driver implementations. This is the same
approach as in, for example, remoteDispatchConnectDomainEventRegister.
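
For completeness, a minimal client-side sketch (not part of this patch) of how an
application consumes the relayed event through the public API; the vz URI and the
use of the default event loop are assumptions:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    static void
    closeCb(virConnectPtr conn, int reason, void *opaque)
    {
        (void)conn; (void)opaque;
        fprintf(stderr, "connection closed, reason=%d\n", reason);
    }

    int main(void)
    {
        virConnectPtr conn;

        /* an event loop implementation is normally needed for event delivery */
        if (virEventRegisterDefaultImpl() < 0)
            return 1;

        if (!(conn = virConnectOpen("vz:///system")))
            return 1;

        if (virConnectRegisterCloseCallback(conn, closeCb, NULL, NULL) < 0)
            fprintf(stderr, "failed to register close callback\n");

        /* ... run virEventRunDefaultImpl() in a loop and do real work ... */

        virConnectUnregisterCloseCallback(conn, closeCb);
        virConnectClose(conn);
        return 0;
    }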

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 daemon/libvirtd.h|1 +
 daemon/remote.c  |   86 ++
 src/remote/remote_driver.c   |   53 +-
 src/remote/remote_protocol.x |   24 +++-
 src/remote_protocol-structs  |6 +++
 5 files changed, 167 insertions(+), 3 deletions(-)

diff --git a/daemon/libvirtd.h b/daemon/libvirtd.h
index 8c1a904..5b2d2ca 100644
--- a/daemon/libvirtd.h
+++ b/daemon/libvirtd.h
@@ -60,6 +60,7 @@ struct daemonClientPrivate {
 size_t nnetworkEventCallbacks;
 daemonClientEventCallbackPtr *qemuEventCallbacks;
 size_t nqemuEventCallbacks;
+bool closeRegistered;
 
 # if WITH_SASL
 virNetSASLSessionPtr sasl;
diff --git a/daemon/remote.c b/daemon/remote.c
index e9e2dca..366ecd5 100644
--- a/daemon/remote.c
+++ b/daemon/remote.c
@@ -1189,6 +1189,20 @@ remoteRelayDomainQemuMonitorEvent(virConnectPtr conn,
 VIR_FREE(details_p);
 }
 
+static
+void remoteRelayConnectionClosedEvent(virConnectPtr conn ATTRIBUTE_UNUSED, int 
reason, void *opaque)
+{
+virNetServerClientPtr client = opaque;
+
+VIR_DEBUG("Relaying connection closed event, reason %d", reason);
+
+remote_connect_event_connection_closed_msg msg = { reason };
+remoteDispatchObjectEventSend(client, remoteProgram,
+  REMOTE_PROC_CONNECT_EVENT_CONNECTION_CLOSED,
+  (xdrproc_t)xdr_remote_connect_event_connection_closed_msg,
+  &msg);
+}
+
 /*
  * You must hold lock for at least the client
  * We don't free stuff here, merely disconnect the client's
@@ -1251,6 +1265,12 @@ void remoteClientFreeFunc(void *data)
 }
 VIR_FREE(priv-qemuEventCallbacks);
 
+if (priv-closeRegistered) {
+if (virConnectUnregisterCloseCallback(priv-conn,
+  
remoteRelayConnectionClosedEvent)  0)
+VIR_WARN(unexpected close callback event deregister failure);
+}
+
 virConnectClose(priv-conn);
 
 virIdentitySetCurrent(NULL);
@@ -3489,6 +3509,72 @@ remoteDispatchNodeDeviceGetParent(virNetServerPtr server 
ATTRIBUTE_UNUSED,
 return rv;
 }
 
+static int
+remoteDispatchConnectCloseCallbackRegister(virNetServerPtr server 
ATTRIBUTE_UNUSED,
+   virNetServerClientPtr client,
+   virNetMessagePtr msg 
ATTRIBUTE_UNUSED,
+   virNetMessageErrorPtr rerr)
+{
+int rv = -1;
+struct daemonClientPrivate *priv =
+virNetServerClientGetPrivateData(client);
+
+virMutexLock(priv-lock);
+
+if (!priv-conn) {
+virReportError(VIR_ERR_INTERNAL_ERROR, %s, _(connection not open));
+goto cleanup;
+}
+
+/* NetClient is passed to driver but not referenced.
+   This imposes next requirements on drivers implementation.
+   Driver must serialize unregistering and event delivering operations.
+   Thus as we unregister callback before unreferencing NetClient
+   remoteRelayConnectionClosedEvent is safe to use NetClient. */
+if (virConnectRegisterCloseCallback(priv-conn, 
remoteRelayConnectionClosedEvent, client, NULL)  0)
+goto cleanup;
+
+priv-closeRegistered = true;
+
+rv = 0;
+
+ cleanup:
+virMutexUnlock(priv-lock);
+if (rv  0)
+virNetMessageSaveError(rerr);
+return rv;
+}
+
+static int
+remoteDispatchConnectCloseCallbackUnregister(virNetServerPtr server 
ATTRIBUTE_UNUSED,
+ virNetServerClientPtr client,
+ virNetMessagePtr msg 
ATTRIBUTE_UNUSED,
+ virNetMessageErrorPtr rerr)
+{
+int rv = -1;
+struct daemonClientPrivate *priv =
+virNetServerClientGetPrivateData(client);
+
+virMutexLock(priv-lock);
+
+if (!priv-conn) {
+virReportError(VIR_ERR_INTERNAL_ERROR, %s, _(connection not open));
+goto cleanup;
+}
+
+if (virConnectUnregisterCloseCallback(priv-conn, 
remoteRelayConnectionClosedEvent)  0)
+goto cleanup;
+
+priv-closeRegistered = false;
+
+rv = 0;
+
+ cleanup:
+virMutexUnlock(priv-lock);
+if (rv  0)
+virNetMessageSaveError(rerr);
+return rv;
+}
 
 /***
  * Register / 

[libvirt] [PATCH v2 3/3] vz: implement connection close notification

2015-06-25 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

Reuse virConnectCloseCallback to implement the connection close event functions.
This way we automatically meet the multi-thread requirements on
unregistering/notification.
---
 src/vz/vz_driver.c |   26 ++
 src/vz/vz_sdk.c|   29 +
 src/vz/vz_utils.h  |3 +++
 3 files changed, 58 insertions(+), 0 deletions(-)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index d9ddd4f..e3d0fdc 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -268,6 +268,9 @@ vzOpenDefault(virConnectPtr conn)
 if (prlsdkSubscribeToPCSEvents(privconn))
 goto error;
 
+if (!(privconn-closeCallback = virGetConnectCloseCallback()))
+goto error;
+
 conn-privateData = privconn;
 
 if (prlsdkLoadDomains(privconn))
@@ -276,6 +279,8 @@ vzOpenDefault(virConnectPtr conn)
 return VIR_DRV_OPEN_SUCCESS;
 
  error:
+virObjectUnref(privconn-closeCallback);
+privconn-closeCallback = NULL;
 virObjectUnref(privconn-domains);
 virObjectUnref(privconn-caps);
 virStoragePoolObjListFree(privconn-pools);
@@ -350,6 +355,8 @@ vzConnectClose(virConnectPtr conn)
 virObjectUnref(privconn-caps);
 virObjectUnref(privconn-xmlopt);
 virObjectUnref(privconn-domains);
+virObjectUnref(privconn-closeCallback);
+privconn-closeCallback = NULL;
 virObjectEventStateFree(privconn-domainEventState);
 prlsdkDisconnect(privconn);
 conn-privateData = NULL;
@@ -1337,6 +1344,23 @@ vzDomainBlockStatsFlags(virDomainPtr domain,
 return ret;
 }
 
+static int
+vzConnectRegisterCloseCallback(virConnectPtr conn,
+   virConnectCloseFunc cb,
+   void *opaque,
+   virFreeCallback freecb)
+{
+vzConnPtr privconn = conn-privateData;
+return virConnectCloseCallbackRegister(privconn-closeCallback, conn, cb, 
opaque, freecb);
+}
+
+static int
+vzConnectUnregisterCloseCallback(virConnectPtr conn, virConnectCloseFunc cb 
ATTRIBUTE_UNUSED)
+{
+vzConnPtr privconn = conn-privateData;
+virConnectCloseCallbackUnregister(privconn-closeCallback);
+return 0;
+}
 
 static virHypervisorDriver vzDriver = {
 .name = vz,
@@ -1389,6 +1413,8 @@ static virHypervisorDriver vzDriver = {
 .domainGetMaxMemory = vzDomainGetMaxMemory, /* 1.2.15 */
 .domainBlockStats = vzDomainBlockStats, /* 1.3.0 */
 .domainBlockStatsFlags = vzDomainBlockStatsFlags, /* 1.3.0 */
+.connectRegisterCloseCallback = vzConnectRegisterCloseCallback, /* 1.3.0 */
+.connectUnregisterCloseCallback = vzConnectUnregisterCloseCallback, /* 
1.3.0 */
 };
 
 static virConnectDriver vzConnectDriver = {
diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index 388ea19..5f38709 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -1745,6 +1745,32 @@ prlsdkHandleVmEvent(vzConnPtr privconn, PRL_HANDLE 
prlEvent)
 return;
 }
 
+static
+void prlsdkHandleDispatcherConnectionClosed(vzConnPtr privconn)
+{
+virConnectCloseCallbackCall(privconn-closeCallback, 
VIR_CONNECT_CLOSE_REASON_EOF);
+}
+
+static void
+prlsdkHandleDispatcherEvent(vzConnPtr privconn, PRL_HANDLE prlEvent)
+{
+PRL_RESULT pret = PRL_ERR_FAILURE;
+PRL_EVENT_TYPE prlEventType;
+
+pret = PrlEvent_GetType(prlEvent, prlEventType);
+prlsdkCheckRetGoto(pret, error);
+
+switch (prlEventType) {
+case PET_DSP_EVT_DISP_CONNECTION_CLOSED:
+prlsdkHandleDispatcherConnectionClosed(privconn);
+break;
+default:
+VIR_DEBUG(Skipping dispatcher event of type %d, prlEventType);
+}
+ error:
+return;
+}
+
 static PRL_RESULT
 prlsdkEventsHandler(PRL_HANDLE prlEvent, PRL_VOID_PTR opaque)
 {
@@ -1772,6 +1798,9 @@ prlsdkEventsHandler(PRL_HANDLE prlEvent, PRL_VOID_PTR 
opaque)
 // above function takes own of event
 prlEvent = PRL_INVALID_HANDLE;
 break;
+case PIE_DISPATCHER:
+prlsdkHandleDispatcherEvent(privconn, prlEvent);
+break;
 default:
 VIR_DEBUG(Skipping event of issuer type %d, prlIssuerType);
 }
diff --git a/src/vz/vz_utils.h b/src/vz/vz_utils.h
index 9b46bf9..b0dc3d8 100644
--- a/src/vz/vz_utils.h
+++ b/src/vz/vz_utils.h
@@ -32,6 +32,7 @@
 # include conf/network_conf.h
 # include virthread.h
 # include virjson.h
+# include datatypes.h
 
 # define vzParseError() \
 virReportErrorHelper(VIR_FROM_TEST, VIR_ERR_OPERATION_FAILED, __FILE__,
\
@@ -69,6 +70,8 @@ struct _vzConn {
 virObjectEventStatePtr domainEventState;
 virStorageDriverStatePtr storageState;
 const char *drivername;
+/* Immutable pointer, self-locking APIs */
+virConnectCloseCallbackPtr closeCallback;
 };
 
 typedef struct _vzConn vzConn;
-- 
1.7.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v2 1/3] remote: move connection close callback to driver level

2015-06-25 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

1. Introduce connect(Un)RegisterCloseCallback driver functions.

2. virConnect(Un)RegisterCloseCallback now works through the driver.

3. virConnectCloseCallback is factored out of virConnect but mostly stays the
same. Notice however that the virConnect object is not referenced in
virConnectCloseCallback anymore. It is safe. Explanation.

The previous version of the callback object kept a reference to the connection. This leads
to the undocumented rule that all clients must explicitly unregister the close callback
before closing the connection or the connection will never be disposed. As callback
unregistering and close event delivery are serialized through the callback object
lock, and unregistering zeroes the connection object, we will never get a dangling
pointer on delivery.

4. The callback object doesn't check the callback on unregistering. The reason is that
this helps us write registering/unregistering with atomic behaviour for
the remote driver, as can be seen in the next patch. Moreover it is not really
meaningful to check the callback on unregistering.

5. The virNetClientSetCloseCallback call is removed from doRemoteClose as it is
excessive for the same reasons as in point 3. Unregistering MUST be called
and this prevents firing the event on a close initiated by the client.

I'm not sure where the callback object should live so it stays in datatypes.c

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 src/datatypes.c|  115 +---
 src/datatypes.h|   21 ++--
 src/driver-hypervisor.h|   12 +
 src/libvirt-host.c |   77 ++
 src/remote/remote_driver.c |   57 --
 5 files changed, 171 insertions(+), 111 deletions(-)

diff --git a/src/datatypes.c b/src/datatypes.c
index 12bcfc1..87a42cb 100644
--- a/src/datatypes.c
+++ b/src/datatypes.c
@@ -34,7 +34,7 @@
 VIR_LOG_INIT(datatypes);
 
 virClassPtr virConnectClass;
-virClassPtr virConnectCloseCallbackDataClass;
+virClassPtr virConnectCloseCallbackClass;
 virClassPtr virDomainClass;
 virClassPtr virDomainSnapshotClass;
 virClassPtr virInterfaceClass;
@@ -47,7 +47,7 @@ virClassPtr virStorageVolClass;
 virClassPtr virStoragePoolClass;
 
 static void virConnectDispose(void *obj);
-static void virConnectCloseCallbackDataDispose(void *obj);
+static void virConnectCloseCallbackDispose(void *obj);
 static void virDomainDispose(void *obj);
 static void virDomainSnapshotDispose(void *obj);
 static void virInterfaceDispose(void *obj);
@@ -78,7 +78,7 @@ virDataTypesOnceInit(void)
 DECLARE_CLASS_COMMON(basename, virClassForObjectLockable())
 
 DECLARE_CLASS_LOCKABLE(virConnect);
-DECLARE_CLASS_LOCKABLE(virConnectCloseCallbackData);
+DECLARE_CLASS_LOCKABLE(virConnectCloseCallback);
 DECLARE_CLASS(virDomain);
 DECLARE_CLASS(virDomainSnapshot);
 DECLARE_CLASS(virInterface);
@@ -119,14 +119,7 @@ virGetConnect(void)
 if (!(ret = virObjectLockableNew(virConnectClass)))
 return NULL;
 
-if (!(ret-closeCallback = 
virObjectLockableNew(virConnectCloseCallbackDataClass)))
-goto error;
-
 return ret;
-
- error:
-virObjectUnref(ret);
-return NULL;
 }
 
 /**
@@ -147,36 +140,102 @@ virConnectDispose(void *obj)
 virResetError(conn-err);
 
 virURIFree(conn-uri);
+}
 
-if (conn-closeCallback) {
-virObjectLock(conn-closeCallback);
-conn-closeCallback-callback = NULL;
-virObjectUnlock(conn-closeCallback);
+virConnectCloseCallbackPtr
+virGetConnectCloseCallback(void)
+{
+virConnectCloseCallbackPtr ret;
 
-virObjectUnref(conn-closeCallback);
-}
+if (virDataTypesInitialize()  0)
+return NULL;
+
+if (!(ret = virObjectLockableNew(virConnectCloseCallbackClass)))
+return NULL;
+
+return ret;
 }
 
+static void
+virConnectCloseCallbackClean(virConnectCloseCallbackPtr obj)
+{
+if (obj-freeCallback)
+obj-freeCallback(obj-opaque);
+
+obj-callback = NULL;
+obj-freeCallback = NULL;
+obj-opaque = NULL;
+obj-conn = NULL;
+}
 
-/**
- * virConnectCloseCallbackDataDispose:
- * @obj: the close callback data to release
- *
- * Release resources bound to the connection close callback.
- */
 static void
-virConnectCloseCallbackDataDispose(void *obj)
+virConnectCloseCallbackDispose(void *obj ATTRIBUTE_UNUSED)
+{
+/* nothing really to do here */
+}
+
+int
+virConnectCloseCallbackRegister(virConnectCloseCallbackPtr obj,
+virConnectPtr conn,
+virConnectCloseFunc cb,
+void *opaque,
+virFreeCallback freecb)
 {
-virConnectCloseCallbackDataPtr cb = obj;
+int ret = -1;
 
-virObjectLock(cb);
+virObjectLock(obj);
 
-if (cb-freeCallback)
-cb-freeCallback(cb-opaque);
+if (obj-callback) {
+/* Temporarily remove reporting to fix syntax-check.
+   Proper 

[libvirt] [PATCH] vz: fix SDK event dispatching

2015-06-25 Thread nshirokovskiy
From: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com

The current version of SDK event dispatching is incorrect. For most VM events (add,
delete etc.) the issuer type is PIE_DISPATCHER. Actually, analyzing the issuer type
doesn't have any benefits, so this patch gets rid of it. All dispatching is done
only on the event type.

Signed-off-by: Nikolay Shirokovskiy nshirokovs...@virtuozzo.com
---
 src/vz/vz_sdk.c |   58 +++---
 1 files changed, 16 insertions(+), 42 deletions(-)

diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index 98f7a57..2ca74c4 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -1697,21 +1697,33 @@ prlsdkHandlePerfEvent(vzConnPtr privconn,
 return PRL_ERR_SUCCESS;
 }
 
-static void
-prlsdkHandleVmEvent(vzConnPtr privconn, PRL_HANDLE prlEvent)
+static PRL_RESULT
+prlsdkEventsHandler(PRL_HANDLE prlEvent, PRL_VOID_PTR opaque)
 {
+vzConnPtr privconn = opaque;
 PRL_RESULT pret = PRL_ERR_FAILURE;
+PRL_HANDLE_TYPE handleType;
 char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
 unsigned char uuid[VIR_UUID_BUFLEN];
 PRL_UINT32 bufsize = ARRAY_CARDINALITY(uuidstr);
 PRL_EVENT_TYPE prlEventType;
 
-pret = PrlEvent_GetType(prlEvent, prlEventType);
+pret = PrlHandle_GetType(prlEvent, handleType);
 prlsdkCheckRetGoto(pret, cleanup);
 
+/* Currently, there is no need to handle anything but events */
+if (handleType != PHT_EVENT)
+goto cleanup;
+
+if (privconn == NULL)
+goto cleanup;
+
 pret = PrlEvent_GetIssuerId(prlEvent, uuidstr, bufsize);
 prlsdkCheckRetGoto(pret, cleanup);
 
+pret = PrlEvent_GetType(prlEvent, prlEventType);
+prlsdkCheckRetGoto(pret, cleanup);
+
 if (prlsdkUUIDParse(uuidstr, uuid)  0)
 goto cleanup;
 
@@ -1736,44 +1748,7 @@ prlsdkHandleVmEvent(vzConnPtr privconn, PRL_HANDLE 
prlEvent)
 prlEvent = PRL_INVALID_HANDLE;
 break;
 default:
-virReportError(VIR_ERR_INTERNAL_ERROR,
-   _(Can't handle event of type %d), prlEventType);
-}
-
- cleanup:
-PrlHandle_Free(prlEvent);
-return;
-}
-
-static PRL_RESULT
-prlsdkEventsHandler(PRL_HANDLE prlEvent, PRL_VOID_PTR opaque)
-{
-vzConnPtr privconn = opaque;
-PRL_RESULT pret = PRL_ERR_FAILURE;
-PRL_HANDLE_TYPE handleType;
-PRL_EVENT_ISSUER_TYPE prlIssuerType = PIE_UNKNOWN;
-
-pret = PrlHandle_GetType(prlEvent, handleType);
-prlsdkCheckRetGoto(pret, cleanup);
-
-/* Currently, there is no need to handle anything but events */
-if (handleType != PHT_EVENT)
-goto cleanup;
-
-if (privconn == NULL)
-goto cleanup;
-
-PrlEvent_GetIssuerType(prlEvent, prlIssuerType);
-prlsdkCheckRetGoto(pret, cleanup);
-
-switch (prlIssuerType) {
-case PIE_VIRTUAL_MACHINE:
-prlsdkHandleVmEvent(privconn, prlEvent);
-// above function takes own of event
-prlEvent = PRL_INVALID_HANDLE;
-break;
-default:
-VIR_DEBUG(Skipping event of issuer type %d, prlIssuerType);
+VIR_DEBUG(Skipping event of type %d, prlEventType);
 }
 
  cleanup:
@@ -1781,7 +1756,6 @@ prlsdkEventsHandler(PRL_HANDLE prlEvent, PRL_VOID_PTR 
opaque)
 return PRL_ERR_SUCCESS;
 }
 
-
 int prlsdkSubscribeToPCSEvents(vzConnPtr privconn)
 {
 PRL_RESULT pret = PRL_ERR_UNINITIALIZED;
-- 
1.7.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list