On Fri, Apr 06, 2018 at 11:54:03AM +0200, Thomas Lamprecht wrote:
> Move the locking inside worker, so that the process doing the actual
> work (create or restore) holds the lock, and can call functions which
> do locking without deadlocking.
> 
> This mirrors the behaviour we use for containers, and allows adding
> an 'autostart' parameter which starts the VM after successful
> creation. vm_start needs the lock, and as not the worker but its
> parent held it, it couldn't know that it was actually safe to
> continue...
> 
> Signed-off-by: Thomas Lamprecht <t.lampre...@proxmox.com>
> ---
> 
> I discussed this with Fabian a few months ago and had something in
> mind suggesting it shouldn't be this easy, but I cannot remember what
> exactly that reason was, so RFC. :)

there is one issue - if somebody holds the flock and you only realize it
after you have forked, you did the fork for nothing (and instead of a
rather fast "timeout" error message, you have to check the task log).
this is not nice from a usability perspective, although it should not
cause problems from a technical/lockdep one ;)

the clean solution is

flock {
  set_config_lock
}

fork {
  do stuff
  flock {
    (re)read config
    check_config_lock
    modify/write config
  }
  do some more stuff
  flock {
    (re)read config
    check_config_lock
    remove_config_lock
    final modify/write config
  }
}

of course, this is much more involved and it is harder to get all the
corner cases right ;) I would like to move all the create/restore/clone
API paths to this flow in the long run, but I am not opposed to switching
the fork/lock order in places where we need the flock sooner/now.
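to make the intended flow above concrete, here is a hedged, generic
sketch of the pattern (not PVE's actual Perl API - all file names and
helper names here are made up for illustration): a short flock section
sets a long-term "lock" marker inside the config, the slow work then
runs without holding the flock, and each config update re-takes the
flock briefly, re-reads the config and verifies the marker before
writing:

```python
import fcntl
import json
import os
import tempfile

# Illustrative stand-ins for a guest config and its flock file.
CONFIG = os.path.join(tempfile.mkdtemp(), "100.conf")
LOCKFILE = CONFIG + ".lock"

def with_flock(fn):
    """Run fn() while holding an exclusive flock on LOCKFILE."""
    with open(LOCKFILE, "a+") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)
        try:
            return fn()
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)

def read_config():
    if not os.path.exists(CONFIG):
        return {}
    with open(CONFIG) as fh:
        return json.load(fh)

def write_config(cfg):
    with open(CONFIG, "w") as fh:
        json.dump(cfg, fh)

# 1) short flock section: set the long-term lock marker in the config,
#    failing fast if somebody else already holds it
def set_config_lock():
    cfg = read_config()
    if cfg.get("lock"):
        raise RuntimeError("config is locked: %s" % cfg["lock"])
    cfg["lock"] = "create"
    write_config(cfg)

with_flock(set_config_lock)

# 2) worker: do the slow work *without* the flock, then take it briefly,
#    re-read the config, check the marker is still ours, and write
def checked_update(update):
    cfg = read_config()
    if cfg.get("lock") != "create":
        raise RuntimeError("lock marker lost, aborting")
    update(cfg)
    write_config(cfg)

with_flock(lambda: checked_update(lambda cfg: cfg.update(memory=2048)))

# 3) final flock section: last modify/write plus removal of the marker
def finalize(cfg):
    cfg["autostart"] = True
    del cfg["lock"]

with_flock(lambda: checked_update(finalize))

print(read_config())  # marker removed, both updates persisted
```

the point of the marker is exactly the one above: the expensive work
happens outside the flock, so a competing caller fails fast in step 1
instead of only discovering the conflict in the task log after a fork.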

> 
>  PVE/API2/Qemu.pm | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 0f27d29..8695e94 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -560,7 +560,7 @@ __PACKAGE__->register_method({
>           # ensure no old replication state are exists
>           PVE::ReplicationState::delete_guest_states($vmid);
>  
> -         return $rpcenv->fork_worker('qmrestore', $vmid, $authuser, 
> $realcmd);
> +         return PVE::QemuConfig->lock_config_full($vmid, 1, $realcmd);
>       };
>  
>       my $createfn = sub {
> @@ -607,10 +607,13 @@ __PACKAGE__->register_method({
>               PVE::AccessControl::add_vm_to_pool($vmid, $pool) if $pool;
>           };
>  
> -         return $rpcenv->fork_worker('qmcreate', $vmid, $authuser, $realcmd);
> +         return PVE::QemuConfig->lock_config_full($vmid, 1, $realcmd);
>       };
>  
> -     return PVE::QemuConfig->lock_config_full($vmid, 1, $archive ? 
> $restorefn : $createfn);
> +     my $worker_name = $archive ? 'qmrestore' : 'qmcreate';
> +     my $code = $archive ? $restorefn : $createfn;
> +
> +     return $rpcenv->fork_worker($worker_name, $vmid, $authuser, $code);
>      }});
>  
>  __PACKAGE__->register_method({
> -- 
> 2.14.2
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
