Re: [pve-devel] RFC : pve-manager : screenshot of template-cloning feature

2012-12-20 Thread Dietmar Maurer
 
 Don't know. But maybe there is some open-source icon library for this?

google is your friend
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] get config from an external VM

2012-12-20 Thread Stefan Priebe - Profihost AG

Hi Dietmar,

just some ideas / suggestions.

First, I'm really sorry that I got your owner = 0 wrong. I thought of a 
Perl boolean 0/1 and thought you meant $owner != $vmid. That's why I 
was still working with vmids instead of 0.



 Nothing? My suggestion is that we do not delete shared disks 
automatically.


If we don't care about all corner cases with shared images, wouldn't it 
be easier to leave the disk assigned to a VM instead of using 0? 
Then it is at least possible to delete the volume through PVE.


Another idea would be to use vmid 0 and then have a list of shared 
volumes under

/etc/pve/sharedvol.cfg:
vm-0-disk-1: 123,125,126
vm-0-disk-2: 123,199,101

which can be read and modified by all nodes. So if the last reference to 
the volume gets deleted, the volume itself gets deleted too.
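
A rough sketch of what reading and updating such a reference-count file could
look like (file name, format and helper names are taken from the proposal
above; nothing here is existing PVE code):

sub read_sharedvol_cfg {
    # parse lines like "vm-0-disk-1: 123,125,126" into a hash of array refs
    my ($cfg) = @_;
    my %refs;
    open(my $fh, '<', $cfg) or die "unable to open $cfg: $!\n";
    while (my $line = <$fh>) {
        next if $line =~ /^\s*(#|$)/;
        $refs{$1} = [ split /\s*,\s*/, $2 ] if $line =~ /^(\S+):\s*(\S.*)$/;
    }
    close($fh);
    return \%refs;
}

sub drop_reference {
    # remove one VM reference; returns true if it was the last one,
    # i.e. the caller may now delete the volume itself
    my ($refs, $volname, $vmid) = @_;
    my @left = grep { $_ ne $vmid } @{$refs->{$volname} // []};
    $refs->{$volname} = \@left;
    return scalar(@left) == 0;
}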


Greets,
Stefan
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] balloon : reset polling if balloon driver doesn't return memory stats

2012-12-20 Thread Alexandre DERUMIER
I uploaded a fixed version - please test. 

Works perfectly! (win2003 && win2008R2)

Thanks !



- Original Message - 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre Derumier aderum...@odiso.com, pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 09:23:35 
Subject: RE: [pve-devel] [PATCH] balloon : reset polling if balloon driver 
doesn't return memory stats 

Just found the bug - I forgot to re-arm the timer. 

I uploaded a fixed version - please test. 

 -Original Message- 
 From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel- 
 boun...@pve.proxmox.com] On Behalf Of Alexandre Derumier 
 Sent: Wednesday, 19 December 2012 16:53 
 To: pve-devel@pve.proxmox.com 
 Subject: [pve-devel] [PATCH] balloon : reset polling if balloon driver 
 doesn't 
 return memory stats 
 
 fix windows stats (tested on win2003 && win2008R2) 
 
 Signed-off-by: Alexandre Derumier aderum...@odiso.com 
 --- 
 PVE/QemuServer.pm | 13 + 
 1 file changed, 13 insertions(+) 
 
 diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 
 81a9351..7569d55 100644 
 --- a/PVE/QemuServer.pm 
 +++ b/PVE/QemuServer.pm 
 @@ -2057,6 +2057,19 @@ sub vmstatus { 
 $d->{freemem} = $info->{free_mem}; 
 } 
 
 + if (defined($info->{last_update}) && !defined($info->{free_mem})) { 
 +     $qmpclient->queue_cmd($vmid, undef, 'qom-set', 
 +         path => "machine/peripheral/balloon0", 
 +         property => "stats-polling-interval", 
 +         value => 0); 
 + 
 +     $qmpclient->queue_cmd($vmid, undef, 'qom-set', 
 +         path => "machine/peripheral/balloon0", 
 +         property => "stats-polling-interval", 
 +         value => 2); 
 + } 
 + 
 + 
 }; 
 
 my $blockstatscb = sub { 
 -- 
 1.7.10.4 
 
 ___ 
 pve-devel mailing list 
 pve-devel@pve.proxmox.com 
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] migration problems since qemu 1.3

2012-12-20 Thread Stefan Priebe - Profihost AG

Hello list,

I've got massive migration problems since switching to qemu 1.3. Mostly the 
migration just hangs, never finishes, and suddenly the VM is just dead / 
not running anymore.


Has anybody seen this too?

Greets,
Stefan
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] migration problems since qemu 1.3

2012-12-20 Thread Alexandre DERUMIER
Do you mean migration between qemu-kvm 1.2 -> qemu 1.3?

(because it's not supported)

or migration between qemu 1.3 -> qemu 1.3?


Also, I don't know if it's related, but in the changelog:
http://wiki.qemu.org/ChangeLog/1.3

Live Migration, Save/Restore
The stop and cont commands have new semantics on the destination machine 
during migration. Previously, the outcome depended on whether the commands were 
issued before or after the source connected to the destination QEMU: in 
particular, cont would fail if issued before connection, and undo the 
effect of the -S command-line option if issued after. Starting from this 
version, the effect of stop and cont will always take place at the end of 
migration (overriding the presence or absence of the -S option) and cont will 
never fail. This change should be transparent, since the old behavior was 
usually subject to a race condition.


- Original Message - 

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
To: pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 09:46:01 
Subject: [pve-devel] migration problems since qemu 1.3 

Hello list, 

I've got massive migration problems since switching to qemu 1.3. Mostly the 
migration just hangs, never finishes, and suddenly the VM is just dead / 
not running anymore. 

Has anybody seen this too? 

Greets, 
Stefan 
___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] migration problems since qemu 1.3

2012-12-20 Thread Stefan Priebe - Profihost AG

Hi Alexandre,
On 20.12.2012 09:50, Alexandre DERUMIER wrote:

Do you mean migration between qemu-kvm 1.2 -> qemu 1.3?

No.


or migration between qemu 1.3 -> qemu 1.3?
Yes. It works fine with NEWLY started VMs, but if the VMs have been running 
for more than 1-3 days it stops working and the VMs just crash during 
migration.



Also, I don't know if it's related, but in the changelog:
http://wiki.qemu.org/ChangeLog/1.3

Live Migration, Save/Restore
The stop and cont commands have new semantics on the destination machine during migration. Previously, the outcome depended 
on whether the commands were issued before or after the source connected to the destination QEMU: in particular, cont would fail if 
issued before connection, and undo the effect of the -S command-line option if issued after. Starting from this version, the effect of 
stop and cont will always take place at the end of migration (overriding the presence or absence of the -S option) and 
cont will never fail. This change should be transparent, since the old behavior was usually subject to a race condition.
I don't see how this could result in a crash of the whole VM and a 
migration not working at all.


Greets,
Stefan
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] get config from an external VM

2012-12-20 Thread Dietmar Maurer
 First, I'm really sorry that I got your owner = 0 wrong. I thought of a Perl
 boolean 0/1 and thought you meant $owner != $vmid. That's why I was still
 working with vmids instead of 0.
 
  
   Nothing? My suggestion is that we do not delete shared disks
 automatically.
 
 If we don't care about all corner cases with shared images, wouldn't it be
 easier to leave the disk assigned to a VM instead of using 0?

No, because if you delete the wrong VM you run into problems.

 Then it is at least possible to delete the volume through PVE.

You can delete the volume on the storage view?
 
 Another idea would be to use vmid 0 and then have a list of shared volumes
 under
 /etc/pve/sharedvol.cfg:
 vm-0-disk-1: 123,125,126
 vm-0-disk-2: 123,199,101
 
 which can be read and modified by all nodes. 

Each storage has a content view - why don't we manage shared volumes there?

 So if the last reference to the
 volume gets deleted, the volume itself gets deleted too.

I consider that very dangerous. I would prefer to use the content view to 
alloc/free shared volumes.
Won't that work for you?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] live migration doesn't start with balloon enabled

2012-12-20 Thread Alexandre DERUMIER
Hi dietmar,

with balloon, migration doesn't start,
because of the qmp command sent at start:



    if (!defined($conf->{balloon}) || $conf->{balloon}) {
        vm_mon_cmd($vmid, "balloon", value => $conf->{balloon}*1024*1024)
            if $conf->{balloon};

        vm_mon_cmd($vmid, 'qom-set',
                   path => "machine/peripheral/balloon0",
                   property => "stats-polling-interval",
                   value => 2);
    }



sub vm_qmp_command {

    die "VM $vmid not running\n" if !check_running($vmid, $nocheck);



(we should pass $migratedfrom to check_running)



Do you know how to handle that? (passing migratedfrom through the different subs?)
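
For reference, a minimal sketch of the "nocheck" variant that the later patch
in this thread ends up using; the body below is an assumption about how such a
wrapper could look, not a quote of the real QemuServer.pm code:

sub vm_mon_cmd_nocheck {
    my ($vmid, $execute, %params) = @_;
    # pass $nocheck = 1 so vm_qmp_command() skips the "config file exists"
    # test, which cannot succeed while the VM is started with --migratedfrom
    # and has no local config file yet
    my $cmd = { execute => $execute, arguments => \%params };
    return vm_qmp_command($vmid, $cmd, 1);
}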


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] live migration doesn't start with balloon enabled

2012-12-20 Thread Alexandre DERUMIER
or maybe in

sub check_running {
    die "unable to find configuration file for VM $vmid - no such machine\n"
        if !$nocheck && ! -f $filename;


Do we really need to check if $filename exists?



- Original Message - 

From: Alexandre DERUMIER aderum...@odiso.com 
To: pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 11:38:18 
Subject: [pve-devel] live migration doesn't start with balloon enabled 

Hi dietmar, 

with balloon, migration doesn't start, 
because of the qmp command sent at start: 



if (!defined($conf->{balloon}) || $conf->{balloon}) { 
    vm_mon_cmd($vmid, "balloon", value => $conf->{balloon}*1024*1024) 
        if $conf->{balloon}; 

    vm_mon_cmd($vmid, 'qom-set', 
               path => "machine/peripheral/balloon0", 
               property => "stats-polling-interval", 
               value => 2); 
} 



sub vm_qmp_command { 

die "VM $vmid not running\n" if !check_running($vmid, $nocheck); 



(we should pass $migratedfrom to check_running) 



Do you know how to handle that? (passing migratedfrom through the different subs?) 


___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] get config from an external VM

2012-12-20 Thread Stefan Priebe - Profihost AG

Hi Dietmar,

thanks a lot for your suggestions.

On 20.12.2012 10:42, Dietmar Maurer wrote:

If we don't care about all corner cases with shared images, wouldn't it be
easier to leave the disk assigned to a VM instead of using 0?


No, because if you delete the wrong VM you run into problems.

OK


Then it is at least possible to delete the volume through PVE.


You can delete the volume on the storage view?
Oh, I didn't know that. Right now, at least for RBD storage, the remove 
button is disabled. Is this defined by the storage plugin?



which can be read and modified by all nodes.

Each storage has a content view - why don't we manage shared volumes there?

Good idea. I like this.

Would you agree with this?

Content view of a specific storage:
1.) put shared volumes (id 0) under a separate tree (shared images)
2.) enable remove/delete button in content view for shared volumes
3.) add a create shared disk button beside remove
4.) add an assign volume to VM button beside remove - or should this be 
under VM => Hardware => add => shared disk?


Greets,
Stefan
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] migration problems since qemu 1.3

2012-12-20 Thread Alexandre DERUMIER
With the latest git, I think it's related to the balloon driver being enabled by 
default and the qmp command being sent (see my previous mail).


can you try to replace (in QemuServer.pm)

if (!defined($conf->{balloon}) || $conf->{balloon}) {
    vm_mon_cmd($vmid, "balloon", value => $conf->{balloon}*1024*1024)
        if $conf->{balloon};

    vm_mon_cmd($vmid, 'qom-set',
               path => "machine/peripheral/balloon0",
               property => "stats-polling-interval",
               value => 2);
}

by

if (!defined($conf->{balloon}) || $conf->{balloon}) {
    vm_mon_cmd_nocheck($vmid, "balloon", value => 
    $conf->{balloon}*1024*1024)
        if $conf->{balloon};

    vm_mon_cmd_nocheck($vmid, 'qom-set',
               path => "machine/peripheral/balloon0",
               property => "stats-polling-interval",
               value => 2);
}


(vm_mon_cmd_nocheck)

- Original Message - 

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 11:48:06 
Subject: Re: [pve-devel] migration problems since qemu 1.3 

Hi, 

On 20.12.2012 10:04, Alexandre DERUMIER wrote: 
 Yes. It works fine with NEWLY started VMs, but if the VMs have been running 
 for more than 1-3 days it stops working and the VMs just crash during 
 migration. 
 Maybe a VM running for 1-3 days has more memory in use, so it takes more time to 
 live migrate. 

I see totally different outputs - the vm crashes and the status output 
stops. 

with git from yesterday i'm just getting this: 
-- 
Dec 20 11:34:21 starting migration of VM 100 to node 'cloud1-1203' 
(10.255.0.22) 
Dec 20 11:34:21 copying disk images 
Dec 20 11:34:21 starting VM 100 on remote node 'cloud1-1203' 
Dec 20 11:34:23 ERROR: online migrate failure - command '/usr/bin/ssh -o 
'BatchMode=yes' root@10.255.0.22 qm start 100 --stateuri tcp --skiplock 
--migratedfrom cloud1-1202' failed: exit code 255 
Dec 20 11:34:23 aborting phase 2 - cleanup resources 
Dec 20 11:34:24 ERROR: migration finished with problems (duration 00:00:03) 
TASK ERROR: migration problems 
-- 


 Does it crash at start of the migration ? or in the middle of the migration ? 

At the beginning mostly i see no more output after: 
migration listens on port 6 


 what is your vm conf ? (memory size, storage ?) 
2GB mem, RBD / Ceph Storage 

Stefan 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] use vm_mon_cmd_nocheck for balloon qmp command at vm_start

2012-12-20 Thread Alexandre Derumier
fix live migration, as we don't have the vm config file

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 81a9351..9c64757 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3001,9 +3001,9 @@ sub vm_start {
     # fixme: how do we handle that on migration?
 
     if (!defined($conf->{balloon}) || $conf->{balloon}) {
-        vm_mon_cmd($vmid, "balloon", value => $conf->{balloon}*1024*1024)
+        vm_mon_cmd_nocheck($vmid, "balloon", value => $conf->{balloon}*1024*1024)
             if $conf->{balloon};
-        vm_mon_cmd($vmid, 'qom-set',
+        vm_mon_cmd_nocheck($vmid, 'qom-set',
                    path => "machine/peripheral/balloon0",
                    property => "stats-polling-interval",
                    value => 2);
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] migration problems since qemu 1.3

2012-12-20 Thread Stefan Priebe - Profihost AG

Hi,

at least migration works at all ;-) I'll wait until tomorrow and test 
again. I've restarted all VMs with latest pve-qemu-kvm.


Thanks!

On 20.12.2012 11:57, Alexandre DERUMIER wrote:

With the latest git, I think it's related to the balloon driver being enabled by 
default and the qmp command being sent (see my previous mail).


can you try to replace (in QemuServer.pm)

 if (!defined($conf->{balloon}) || $conf->{balloon}) {
     vm_mon_cmd($vmid, "balloon", value => $conf->{balloon}*1024*1024)
         if $conf->{balloon};

     vm_mon_cmd($vmid, 'qom-set',
                path => "machine/peripheral/balloon0",
                property => "stats-polling-interval",
                value => 2);
 }

by

 if (!defined($conf->{balloon}) || $conf->{balloon}) {
     vm_mon_cmd_nocheck($vmid, "balloon", value => 
     $conf->{balloon}*1024*1024)
         if $conf->{balloon};

     vm_mon_cmd_nocheck($vmid, 'qom-set',
                path => "machine/peripheral/balloon0",
                property => "stats-polling-interval",
                value => 2);
 }


(vm_mon_cmd_nocheck)

- Original Message -

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Thursday, 20 December 2012 11:48:06
Subject: Re: [pve-devel] migration problems since qemu 1.3

Hi,

On 20.12.2012 10:04, Alexandre DERUMIER wrote:

Yes. It works fine with NEWLY started VMs, but if the VMs have been running
for more than 1-3 days it stops working and the VMs just crash during
migration.

Maybe a VM running for 1-3 days has more memory in use, so it takes more time to 
live migrate.


I see totally different outputs - the vm crashes and the status output
stops.

with git from yesterday i'm just getting this:
--
Dec 20 11:34:21 starting migration of VM 100 to node 'cloud1-1203'
(10.255.0.22)
Dec 20 11:34:21 copying disk images
Dec 20 11:34:21 starting VM 100 on remote node 'cloud1-1203'
Dec 20 11:34:23 ERROR: online migrate failure - command '/usr/bin/ssh -o
'BatchMode=yes' root@10.255.0.22 qm start 100 --stateuri tcp --skiplock
--migratedfrom cloud1-1202' failed: exit code 255
Dec 20 11:34:23 aborting phase 2 - cleanup resources
Dec 20 11:34:24 ERROR: migration finished with problems (duration 00:00:03)
TASK ERROR: migration problems
--



Does it crash at start of the migration ? or in the middle of the migration ?


At the beginning mostly i see no more output after:
migration listens on port 6



what is your vm conf ? (memory size, storage ?)

2GB mem, RBD / Ceph Storage

Stefan


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] get config from an external VM

2012-12-20 Thread Dietmar Maurer
  You can delete the volume on the storage view?
 Oh, I didn't know that. Right now, at least for RBD storage, the remove button
 is disabled. Is this defined by the storage plugin?

It is disabled because the volumes are owned by a VM, so the user should 
add/remove
on the VM config page.

You need to enable that for shared volumes.

  which can be read and modified by all nodes.
  Each storage has a content view - why don't we manage shared volumes
 there?
 Good idea. I like this.
 
 Would you agree with this?
 
 Content view of a specific storage:
 1.) put shared volumes (id 0) under a separate tree (shared images)

OK

 2.) enable remove/delete button in content view for shared volumes

OK

 3.) add a create shared disk button beside remove

yes, unless we have a better idea ;-)

 4.) add an assign volume to VM button beside remove - or should this be
 under VM => Hardware => add => shared disk?

Yes, I think VM => Hardware => add => shared disk is better.


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] use vm_mon_cmd_nocheck for balloon qmp command at vm_start

2012-12-20 Thread Dietmar Maurer
thanks.

I applied a slightly modified fix.

 -Original Message-
 From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-
 boun...@pve.proxmox.com] On Behalf Of Alexandre Derumier
 Sent: Thursday, 20 December 2012 12:11
 To: pve-devel@pve.proxmox.com
 Subject: [pve-devel] [PATCH] use vm_mon_cmd_nocheck for balloon qmp
 command at vm_start
 
 fix live migration, as we don't have the vm config file
 
 Signed-off-by: Alexandre Derumier aderum...@odiso.com
 ---
  PVE/QemuServer.pm |4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)
 
 diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index
 81a9351..9c64757 100644
 --- a/PVE/QemuServer.pm
 +++ b/PVE/QemuServer.pm
 @@ -3001,9 +3001,9 @@ sub vm_start {
   # fixme: how do we handle that on migration?
 
      if (!defined($conf->{balloon}) || $conf->{balloon}) {
 -        vm_mon_cmd($vmid, "balloon", value => $conf->{balloon}*1024*1024)
 +        vm_mon_cmd_nocheck($vmid, "balloon", value => $conf->{balloon}*1024*1024)
              if $conf->{balloon};
 -        vm_mon_cmd($vmid, 'qom-set',
 +        vm_mon_cmd_nocheck($vmid, 'qom-set',
                     path => "machine/peripheral/balloon0",
                     property => "stats-polling-interval",
                     value => 2);
 --
 1.7.10.4
 
 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] auto ballooning with mom algorithm implementation

2012-12-20 Thread Alexandre DERUMIER
Ok, thanks, I'll try that this afternoon!



- Original Message - 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre Derumier aderum...@odiso.com, pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 14:02:38 
Subject: RE: [pve-devel] [PATCH] auto ballooning with mom algorithm implementation 

Hi Alexandre, 

I have finally uploaded an auto-ballooning implementation for pvestatd. 

The algorithm uses the new 'shares' property to distribute RAM accordingly. 

Feel free to test :-) 

- Dietmar 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] qemu-server : allow manual ballooning if shares is not defined

2012-12-20 Thread Alexandre Derumier
Allow manual ballooning (qm set --balloon XXX) if shares is not defined or is 0.

If balloon = 0, we set the balloon target to max_memory.
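
Condensed, the target selection described here looks roughly like the sketch
below (the real change is the patch in the following mail; this is only a
restatement of its logic, with the surrounding API plumbing omitted):

if ($opt eq 'balloon' && !$conf->{shares}) {
    # balloon = 0 means "no fixed target", so fall back to the configured
    # maximum memory of the VM
    my $balloontarget = $param->{$opt} || $conf->{memory};
    PVE::QemuServer::vm_mon_cmd_nocheck($vmid, "balloon",
                                        value => $balloontarget*1024*1024);
}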

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] allow manual ballooning if shares is not enabled

2012-12-20 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/API2/Qemu.pm |8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 09ab1e7..9bdbc0a 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -897,9 +897,15 @@ __PACKAGE__->register_method({
 
         $vmconfig_update_net($rpcenv, $authuser, $conf, $storecfg, $vmid,
                              $opt, $param->{$opt});
-
     } else {
 
+        if ($opt eq 'balloon' && ((!$conf->{shares}) || ($conf->{shares} && $conf->{shares} == 0))) {
+
+            my $balloontarget = $param->{$opt};
+            $balloontarget = $conf->{memory} if $param->{$opt} == 0;
+            PVE::QemuServer::vm_mon_cmd_nocheck($vmid, "balloon", value => $balloontarget*1024*1024);
+        }
+
         $conf->{$opt} = $param->{$opt};
         PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
     }
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] migration problems since qemu 1.3

2012-12-20 Thread Alexandre DERUMIER
i had it again. 
Have you applied the fix from today about ballooning?
https://git.proxmox.com/?p=qemu-server.git;a=commit;h=95381ce06cea266d40911a7129da6067a1640cbf

I even cannot connect through the console to this VM anymore. 

Mmm, it seems that something breaks qmp on the source vm... 
Is the source vm running? (is ssh working?)




- Original Message - 

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 15:27:53 
Subject: Re: [pve-devel] migration problems since qemu 1.3 

Hi, 

i had it again. 

Migration hangs at: 
Dec 20 15:23:03 starting migration of VM 107 to node 'cloud1-1202' 
(10.255.0.20) 
Dec 20 15:23:03 copying disk images 
Dec 20 15:23:03 starting VM 107 on remote node 'cloud1-1202' 
Dec 20 15:23:06 starting migration tunnel 
Dec 20 15:23:06 starting online/live migration on port 6 

I even cannot connect through the console to this VM anymore. 

Stefan 

On 20.12.2012 12:31, Stefan Priebe - Profihost AG wrote: 
 Hi, 
 
 at least migration works at all ;-) I'll wait until tomorrow and test 
 again. I've restarted all VMs with latest pve-qemu-kvm. 
 
 Thanks! 
 
 On 20.12.2012 11:57, Alexandre DERUMIER wrote: 
 With the latest git, I think it's related to the balloon driver being enabled by 
 default and the qmp command being sent (see my previous mail). 
 
 
 can you try to replace (in QemuServer.pm) 
 
 if (!defined($conf->{balloon}) || $conf->{balloon}) { 
     vm_mon_cmd($vmid, "balloon", value => 
     $conf->{balloon}*1024*1024) 
         if $conf->{balloon}; 
 
     vm_mon_cmd($vmid, 'qom-set', 
         path => "machine/peripheral/balloon0", 
         property => "stats-polling-interval", 
         value => 2); 
 } 
 
 by 
 
 if (!defined($conf->{balloon}) || $conf->{balloon}) { 
     vm_mon_cmd_nocheck($vmid, "balloon", value => 
     $conf->{balloon}*1024*1024) 
         if $conf->{balloon}; 
 
     vm_mon_cmd_nocheck($vmid, 'qom-set', 
         path => "machine/peripheral/balloon0", 
         property => "stats-polling-interval", 
         value => 2); 
 } 
 
 
 (vm_mon_cmd_nocheck) 
 
 - Original Message - 
 
 From: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
 To: Alexandre DERUMIER aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com 
 Sent: Thursday, 20 December 2012 11:48:06 
 Subject: Re: [pve-devel] migration problems since qemu 1.3 
 
 Hi, 
 
 On 20.12.2012 10:04, Alexandre DERUMIER wrote: 
 Yes. It works fine with NEWLY started VMs, but if the VMs have been running 
 for more than 1-3 days it stops working and the VMs just crash during 
 migration. 
 Maybe a VM running for 1-3 days has more memory in use, so it takes more 
 time to live migrate. 
 
 I see totally different outputs - the vm crashes and the status output 
 stops. 
 
 with git from yesterday i'm just getting this: 
 -- 
 Dec 20 11:34:21 starting migration of VM 100 to node 'cloud1-1203' 
 (10.255.0.22) 
 Dec 20 11:34:21 copying disk images 
 Dec 20 11:34:21 starting VM 100 on remote node 'cloud1-1203' 
 Dec 20 11:34:23 ERROR: online migrate failure - command '/usr/bin/ssh -o 
 'BatchMode=yes' root@10.255.0.22 qm start 100 --stateuri tcp --skiplock 
 --migratedfrom cloud1-1202' failed: exit code 255 
 Dec 20 11:34:23 aborting phase 2 - cleanup resources 
 Dec 20 11:34:24 ERROR: migration finished with problems (duration 
 00:00:03) 
 TASK ERROR: migration problems 
 -- 
 
 
 Does it crash at start of the migration ? or in the middle of the 
 migration ? 
 
 At the beginning mostly i see no more output after: 
 migration listens on port 6 
 
 
 what is your vm conf ? (memory size, storage ?) 
 2GB mem, RBD / Ceph Storage 
 
 Stefan 
 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] migration problems since qemu 1.3

2012-12-20 Thread Stefan Priebe - Profihost AG

Hi,
On 20.12.2012 15:49, Alexandre DERUMIER wrote:

i had it again.

Have you applied the fix from today about ballooning?
https://git.proxmox.com/?p=qemu-server.git;a=commit;h=95381ce06cea266d40911a7129da6067a1640cbf


Yes.


I even cannot connect through the console to this VM anymore.


Mmm, it seems that something breaks qmp on the source vm...
Is the source vm running? (is ssh working?)
It is marked as running and the kvm process is still there. But no service 
is running anymore - so I cannot even connect via ssh anymore.


Stefan


- Original Message -

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Thursday, 20 December 2012 15:27:53
Subject: Re: [pve-devel] migration problems since qemu 1.3

Hi,

i had it again.

Migration hangs at:
Dec 20 15:23:03 starting migration of VM 107 to node 'cloud1-1202'
(10.255.0.20)
Dec 20 15:23:03 copying disk images
Dec 20 15:23:03 starting VM 107 on remote node 'cloud1-1202'
Dec 20 15:23:06 starting migration tunnel
Dec 20 15:23:06 starting online/live migration on port 6

I even cannot connect through the console to this VM anymore.

Stefan

On 20.12.2012 12:31, Stefan Priebe - Profihost AG wrote:

Hi,

at least migration works at all ;-) I'll wait until tomorrow and test
again. I've restarted all VMs with latest pve-qemu-kvm.

Thanks!

On 20.12.2012 11:57, Alexandre DERUMIER wrote:

With the latest git, I think it's related to the balloon driver being enabled by
default and the qmp command being sent (see my previous mail).


can you try to replace (in QemuServer.pm)

if (!defined($conf->{balloon}) || $conf->{balloon}) {
    vm_mon_cmd($vmid, "balloon", value =>
    $conf->{balloon}*1024*1024)
        if $conf->{balloon};

    vm_mon_cmd($vmid, 'qom-set',
        path => "machine/peripheral/balloon0",
        property => "stats-polling-interval",
        value => 2);
}

by

if (!defined($conf->{balloon}) || $conf->{balloon}) {
    vm_mon_cmd_nocheck($vmid, "balloon", value =>
    $conf->{balloon}*1024*1024)
        if $conf->{balloon};

    vm_mon_cmd_nocheck($vmid, 'qom-set',
        path => "machine/peripheral/balloon0",
        property => "stats-polling-interval",
        value => 2);
}


(vm_mon_cmd_nocheck)

- Original Message -

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Thursday, 20 December 2012 11:48:06
Subject: Re: [pve-devel] migration problems since qemu 1.3

Hi,

On 20.12.2012 10:04, Alexandre DERUMIER wrote:

Yes. It works fine with NEWLY started VMs, but if the VMs have been running
for more than 1-3 days it stops working and the VMs just crash during
migration.

Maybe a VM running for 1-3 days has more memory in use, so it takes more
time to live migrate.


I see totally different outputs - the vm crashes and the status output
stops.

with git from yesterday i'm just getting this:
--
Dec 20 11:34:21 starting migration of VM 100 to node 'cloud1-1203'
(10.255.0.22)
Dec 20 11:34:21 copying disk images
Dec 20 11:34:21 starting VM 100 on remote node 'cloud1-1203'
Dec 20 11:34:23 ERROR: online migrate failure - command '/usr/bin/ssh -o
'BatchMode=yes' root@10.255.0.22 qm start 100 --stateuri tcp --skiplock
--migratedfrom cloud1-1202' failed: exit code 255
Dec 20 11:34:23 aborting phase 2 - cleanup resources
Dec 20 11:34:24 ERROR: migration finished with problems (duration
00:00:03)
TASK ERROR: migration problems
--



Does it crash at start of the migration ? or in the middle of the
migration ?


At the beginning mostly i see no more output after:
migration listens on port 6



what is your vm conf ? (memory size, storage ?)

2GB mem, RBD / Ceph Storage

Stefan


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] migration problems since qemu 1.3

2012-12-20 Thread Alexandre DERUMIER
Just an idea (not sure it's the problem), can you try to comment out

$qmpclient->queue_cmd($vmid, $ballooncb, 'query-balloon');

in QemuServer.pm, line 2081,

and restart pvedaemon & pvestatd?



- Original Message - 

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 15:50:38 
Subject: Re: [pve-devel] migration problems since qemu 1.3 

Hi, 
On 20.12.2012 15:49, Alexandre DERUMIER wrote: 
 i had it again. 
 Have you applied the fix from today about ballooning? 
 https://git.proxmox.com/?p=qemu-server.git;a=commit;h=95381ce06cea266d40911a7129da6067a1640cbf
  

Yes. 

 I even canot connect anymore through console to this VM. 
 
 Mmm, it seems that something breaks qmp on the source vm... 
 Is the source vm running? (is ssh working?) 
It is marked as running and the kvm process is still there. But no service 
is running anymore - so I cannot even connect via ssh anymore. 

Stefan 

 - Original Message - 
 
 From: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
 To: Alexandre DERUMIER aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com 
 Sent: Thursday, 20 December 2012 15:27:53 
 Subject: Re: [pve-devel] migration problems since qemu 1.3 
 
 Hi, 
 
 i had it again. 
 
 Migration hangs at: 
 Dec 20 15:23:03 starting migration of VM 107 to node 'cloud1-1202' 
 (10.255.0.20) 
 Dec 20 15:23:03 copying disk images 
 Dec 20 15:23:03 starting VM 107 on remote node 'cloud1-1202' 
 Dec 20 15:23:06 starting migration tunnel 
 Dec 20 15:23:06 starting online/live migration on port 6 
 
 I even cannot connect through the console to this VM anymore. 
 
 Stefan 
 
 On 20.12.2012 12:31, Stefan Priebe - Profihost AG wrote: 
 Hi, 
 
 at least migration works at all ;-) I'll wait until tomorrow and test 
 again. I've restarted all VMs with latest pve-qemu-kvm. 
 
 Thanks! 
 
 On 20.12.2012 11:57, Alexandre DERUMIER wrote: 
 With the latest git, I think it's related to the balloon driver being enabled by 
 default and the qmp command being sent (see my previous mail). 
 
 
 can you try to replace (in QemuServer.pm) 
 
 if (!defined($conf->{balloon}) || $conf->{balloon}) { 
     vm_mon_cmd($vmid, "balloon", value => 
     $conf->{balloon}*1024*1024) 
         if $conf->{balloon}; 
 
     vm_mon_cmd($vmid, 'qom-set', 
         path => "machine/peripheral/balloon0", 
         property => "stats-polling-interval", 
         value => 2); 
 } 
 
 by 
 
 if (!defined($conf->{balloon}) || $conf->{balloon}) { 
     vm_mon_cmd_nocheck($vmid, "balloon", value => 
     $conf->{balloon}*1024*1024) 
         if $conf->{balloon}; 
 
     vm_mon_cmd_nocheck($vmid, 'qom-set', 
         path => "machine/peripheral/balloon0", 
         property => "stats-polling-interval", 
         value => 2); 
 } 
 
 
 (vm_mon_cmd_nocheck) 
 
 - Original Message - 
 
 From: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
 To: Alexandre DERUMIER aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com 
 Sent: Thursday, 20 December 2012 11:48:06 
 Subject: Re: [pve-devel] migration problems since qemu 1.3 
 
 Hi, 
 
 On 20.12.2012 10:04, Alexandre DERUMIER wrote: 
 Yes. It works fine with NEWLY started VMs, but if the VMs have been running 
 for more than 1-3 days it stops working and the VMs just crash during 
 migration. 
 Maybe a VM running for 1-3 days has more memory in use, so it takes more 
 time to live migrate. 
 
 I see totally different outputs - the vm crashes and the status output 
 stops. 
 
 with git from yesterday i'm just getting this: 
 -- 
 Dec 20 11:34:21 starting migration of VM 100 to node 'cloud1-1203' 
 (10.255.0.22) 
 Dec 20 11:34:21 copying disk images 
 Dec 20 11:34:21 starting VM 100 on remote node 'cloud1-1203' 
 Dec 20 11:34:23 ERROR: online migrate failure - command '/usr/bin/ssh -o 
 'BatchMode=yes' root@10.255.0.22 qm start 100 --stateuri tcp --skiplock 
 --migratedfrom cloud1-1202' failed: exit code 255 
 Dec 20 11:34:23 aborting phase 2 - cleanup resources 
 Dec 20 11:34:24 ERROR: migration finished with problems (duration 
 00:00:03) 
 TASK ERROR: migration problems 
 -- 
 
 
 Does it crash at start of the migration ? or in the middle of the 
 migration ? 
 
 At the beginning mostly i see no more output after: 
 migration listens on port 6 
 
 
 what is your vm conf ? (memory size, storage ?) 
 2GB mem, RBD / Ceph Storage 
 
 Stefan 
 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] migration problems since qemu 1.3

2012-12-20 Thread Stefan Priebe - Profihost AG

Hi,
On 20.12.2012 15:57, Alexandre DERUMIER wrote:

Just an idea (not sure it's the problem), can you try to comment out

$qmpclient->queue_cmd($vmid, $ballooncb, 'query-balloon');

in QemuServer.pm, line 2081,

and restart pvedaemon & pvestatd?


This doesn't change anything.

Right now the kvm process is running on old and new machine.

An strace on the pid on the new machine shows a loop of:


[pid 28351] ... futex resumed )   = -1 ETIMEDOUT (Connection timed 
out)

[pid 28351] futex(0x7ff8b8025388, FUTEX_WAKE_PRIVATE, 1) = 0
[pid 28351] futex(0x7ff8b8026024, 
FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 11801, {1356016143, 
843092000},  unfinished ...
[pid 28285] mremap(0x7ff77bfe4000, 160378880, 160411648, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160411648, 160448512, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160448512, 160481280, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160481280, 160514048, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160514048, 160546816, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160546816, 160583680, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160583680, 160616448, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160616448, 160649216, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160649216, 160681984, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160681984, 160718848, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160718848, 160751616, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160751616, 160784384, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160784384, 160817152, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160817152, 160854016, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160854016, 160886784, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160886784, 160919552, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160919552, 160952320, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160952320, 160989184, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 160989184, 161021952, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 161021952, 161054720, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 161054720, 161087488, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 161087488, 161124352, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 161124352, 161157120, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 161157120, 161189888, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 161189888, 161222656, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 161222656, 161259520, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 161259520, 161292288, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 161292288, 161325056, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28351] ... futex resumed )   = -1 ETIMEDOUT (Connection timed 
out)

[pid 28351] futex(0x7ff8b8025388, FUTEX_WAKE_PRIVATE, 1) = 0
[pid 28351] futex(0x7ff8b8026024, 
FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 11803, {1356016144, 
843283000},  unfinished ...
[pid 28285] mremap(0x7ff77bfe4000, 161325056, 161357824, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 161357824, 161394688, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 161394688, 161427456, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 161427456, 161460224, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28345] ... restart_syscall resumed ) = -1 ETIMEDOUT (Connection 
timed out)
[pid 28345] futex(0x7ff8caa2e274, FUTEX_CMP_REQUEUE_PRIVATE, 1, 
2147483647, 0x7ff8caa2e1b0, 872) = 1

[pid 28347] ... futex resumed )   = 0
[pid 28345] futex(0x7ff8caa241a8, FUTEX_WAKE_PRIVATE, 1 unfinished ...
[pid 28347] futex(0x7ff8caa2e1b0, FUTEX_WAKE_PRIVATE, 1 unfinished ...
[pid 28345] ... futex resumed )   = 0
[pid 28347] ... futex resumed )   = 0
[pid 28345] futex(0x7ff8caa2420c, 
FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 799, {1356016153, 
954319000},  unfinished ...
[pid 28347] sendmsg(19, {msg_name(0)=NULL, msg_iov(1)=[{\t, 1}], 
msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 1
[pid 28347] futex(0x7ff8caa2e274, FUTEX_WAIT_PRIVATE, 873, NULL 
unfinished ...
[pid 28285] mremap(0x7ff77bfe4000, 161460224, 161492992, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 161492992, 161529856, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 161529856, 161562624, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000
[pid 28285] mremap(0x7ff77bfe4000, 161562624, 

Re: [pve-devel] migration problems since qemu 1.3

2012-12-20 Thread Alexandre DERUMIER
Hi Stefan, any news ?

I'm trying to reproduce your problem, but it works fine for me, no crash...

- Original Message - 

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 16:09:42 
Subject: Re: [pve-devel] migration problems since qemu 1.3 

Hi, 
On 20.12.2012 15:57, Alexandre DERUMIER wrote: 
 Just an idea (not sure it's the problem), can you try to comment out 
 
 $qmpclient->queue_cmd($vmid, $ballooncb, 'query-balloon'); 
 
 in QemuServer.pm, line 2081, 
 
 and restart pvedaemon & pvestatd? 

This doesn't change anything. 

Right now the kvm process is running on old and new machine. 

An strace on the pid on the new machine shows a loop of: 

 
[pid 28351] ... futex resumed ) = -1 ETIMEDOUT (Connection timed 
out) 
[pid 28351] futex(0x7ff8b8025388, FUTEX_WAKE_PRIVATE, 1) = 0 
[pid 28351] futex(0x7ff8b8026024, 
FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 11801, {1356016143, 
843092000},  unfinished ... 
[pid 28285] mremap(0x7ff77bfe4000, 160378880, 160411648, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160411648, 160448512, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160448512, 160481280, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160481280, 160514048, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160514048, 160546816, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160546816, 160583680, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160583680, 160616448, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160616448, 160649216, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160649216, 160681984, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160681984, 160718848, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160718848, 160751616, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160751616, 160784384, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160784384, 160817152, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160817152, 160854016, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160854016, 160886784, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160886784, 160919552, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160919552, 160952320, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160952320, 160989184, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160989184, 161021952, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161021952, 161054720, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161054720, 161087488, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161087488, 161124352, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161124352, 161157120, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161157120, 161189888, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161189888, 161222656, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161222656, 161259520, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161259520, 161292288, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161292288, 161325056, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28351] ... futex resumed ) = -1 ETIMEDOUT (Connection timed 
out) 
[pid 28351] futex(0x7ff8b8025388, FUTEX_WAKE_PRIVATE, 1) = 0 
[pid 28351] futex(0x7ff8b8026024, 
FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 11803, {1356016144, 
843283000},  unfinished ... 
[pid 28285] mremap(0x7ff77bfe4000, 161325056, 161357824, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161357824, 161394688, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161394688, 161427456, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161427456, 161460224, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28345] ... restart_syscall resumed ) = -1 ETIMEDOUT (Connection 
timed out) 
[pid 28345] futex(0x7ff8caa2e274, FUTEX_CMP_REQUEUE_PRIVATE, 1, 
2147483647, 0x7ff8caa2e1b0, 872) = 1 
[pid 28347] ... futex resumed ) = 0 
[pid 28345] futex(0x7ff8caa241a8, FUTEX_WAKE_PRIVATE, 1 unfinished ... 
[pid 28347] futex(0x7ff8caa2e1b0, FUTEX_WAKE_PRIVATE, 1 unfinished ... 
[pid 28345] ... futex resumed ) = 0 
[pid 28347] ... futex resumed ) = 0 
[pid 28345] futex(0x7ff8caa2420c, 
FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 799, {1356016153, 
954319000},  unfinished ... 
[pid 28347] sendmsg(19, {msg_name(0)=NULL, msg_iov(1)=[{\t, 1}], 
msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 1 
[pid 28347]