Re: [pve-devel] Balloon Device is the problem! Re: migration problems since qemu 1.3

2012-12-26 Thread Stefan Priebe

Hi,

the difference is in sub vm_start in QemuServer.pm. After the kvm
process is started, PVE sends some human monitor commands.


If I comment out this line:
#    eval { vm_mon_cmd_nocheck($vmid, "migrate_set_downtime", value => $migrate_downtime); };


Everything works fine again.

I also get this in my logs, but I checked: $migrate_downtime is 1, so it IS
a NUMBER:


Dec 26 13:09:27 cloud1-1202 qm[8726]: VM 105 qmp command failed - VM 105 
qmp command 'migrate_set_downtime' failed - Invalid parameter type for 
'value', expected: number
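
As an aside: this error usually means the value reached qemu as a JSON string
rather than a JSON number. A minimal sketch (variable names invented) of the
likely failure mode, assuming Perl's JSON / JSON::XS encoder, which picks the
JSON type from how the scalar was last used:

use strict;
use warnings;
use JSON;    # JSON::XS behaves the same way

my $migrate_downtime = 1;
my $from_config = "$migrate_downtime";    # any string use marks the scalar as a string

print encode_json({ value => $from_config }), "\n";           # {"value":"1"} -> the QMP error above
print encode_json({ value => $migrate_downtime + 0 }), "\n";  # {"value":1}   -> accepted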


Stefan
Am 26.12.2012 07:45, schrieb Alexandre DERUMIER:

> I can even start it with -daemonize from the shell. Migration works fine. It
> just doesn't work when started from PVE.

This is crazy ... I don't see any difference between starting it from the shell and
from PVE.

And if you remove the balloon device, is migration 100% working when started from
PVE?

Just to be sure, can you try "info balloon" from the human monitor console?
(I would like to see whether the balloon driver is working correctly.)


----- Original Message -----

From: Stefan Priebe s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Tuesday, 25 December 2012 10:05:10
Subject: Re: [pve-devel] Balloon Device is the problem! Re: migration problems
since qemu 1.3

I can even start it with -daemonize from the shell. Migration works fine. It
just doesn't work when started from PVE.

Stefan

Am 24.12.2012 15:48, schrieb Alexandre DERUMIER:

Does it work if you keep the virtio-balloon device enabled,

but comment out in QemuServer.pm

line 3005:
vm_mon_cmd_nocheck($vmid, 'qom-set',
    path => "machine/peripheral/balloon0",
    property => "stats-polling-interval",
    value => 2);


and

line 2081:
$qmpclient->queue_cmd($vmid, $ballooncb, 'query-balloon');

?
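
(As an aside: on the wire that qom-set call becomes a single QMP command; a sketch
of it, with path, property and value taken from the Perl call above:)

{ "execute": "qom-set",
  "arguments": { "path": "machine/peripheral/balloon0",
                 "property": "stats-polling-interval",
                 "value": 2 } }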

----- Original Message -----

From: Alexandre DERUMIER aderum...@odiso.com
To: Stefan Priebe s.pri...@profihost.ag
Cc: pve-devel@pve.proxmox.com
Sent: Monday, 24 December 2012 15:38:13
Subject: Re: [pve-devel] Balloon Device is the problem! Re: migration problems
since qemu 1.3

Maybe it's related to the qmp queries to the balloon driver (for stats) during
migration?



----- Original Message -----

From: Stefan Priebe s.pri...@profihost.ag
To: Dietmar Maurer diet...@proxmox.com
Cc: Alexandre DERUMIER aderum...@odiso.com, pve-devel@pve.proxmox.com
Sent: Monday, 24 December 2012 15:32:52
Subject: Balloon Device is the problem! Re: [pve-devel] migration problems since
qemu 1.3

Hello,

It works fine again when I remove the balloon pci device.

If I remove this line, everything is fine again!
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3

Greets
Stefan
Am 24.12.2012 15:05, schrieb Stefan Priebe:


Am 24.12.2012 14:08, schrieb Dietmar Maurer:

>> virtio0: cephkvmpool1:vm-105-disk-1,iops_rd=215,iops_wr=155,mbps_rd=130,mbps_wr=90,size=20G
>
> Can you please also test without ceph?

The same. I also tried a Debian netboot CD (6.0.5), but then 32-bit
doesn't work either. I had no disks attached at all.

I filled the tmpfs ramdisk under /dev with
dd if=/dev/urandom of=/dev/myfile bs=1M count=900

Greets,
Stefan



Re: [pve-devel] Balloon Device is the problem! Re: migration problems since qemu 1.3

2012-12-26 Thread Alexandre DERUMIER
I see that Dietmar recently changed

vm_mon_cmd($vmid, "migrate_set_downtime", value => $migrate_downtime);

to

vm_mon_cmd_nocheck($vmid, "migrate_set_downtime", value => $migrate_downtime);

https://git.proxmox.com/?p=qemu-server.git;a=blobdiff;f=PVE/QemuServer.pm;h=165eaf6be6e5fe4b1c88454d28b113bc2b1f20af;hp=81a935176aca16e013fd6987f2ddbc72260092cf;hb=95381ce06cea266d40911a7129da6067a1640cbf;hpb=4bdb05142cfcef09495a45ffb256955f7b947caa


So maybe before, migrate_set_downtime was not applied (because vm_mon_cmd
checks whether the VM config file exists on the target).

Do you have a migrate_downtime parameter in your VM config?
It shouldn't be sent otherwise:

my $migrate_downtime = $defaults->{migrate_downtime};
$migrate_downtime = $conf->{migrate_downtime}
    if defined($conf->{migrate_downtime});
if (defined($migrate_downtime)) {
    eval { vm_mon_cmd_nocheck($vmid, "migrate_set_downtime",
        value => $migrate_downtime); };
}
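
(For readers outside the codebase, a minimal sketch of the distinction this relies
on; the sub name matches QemuServer.pm, but the body and the config path are
illustrative, not the actual implementation:)

# Illustrative only: vm_mon_cmd refuses to talk to a VM this node does not
# own, while vm_mon_cmd_nocheck skips that test - which is what makes it
# usable on a migration target whose config file has not moved over yet.
sub vm_mon_cmd {
    my ($vmid, $execute, %params) = @_;
    die "VM $vmid config file not found on this node\n"    # hypothetical check
        if !-f "/etc/pve/qemu-server/$vmid.conf";
    return vm_mon_cmd_nocheck($vmid, $execute, %params);
}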

...



Re: [pve-devel] Balloon Device is the problem! Re: migration problems since qemu 1.3

2012-12-26 Thread Alexandre DERUMIER
The migrate_downtime = 1 comes from QemuServer.pm:


migrate_downtime => {
    optional => 1,
    type => 'integer',
    description => "Set maximum tolerated downtime (in seconds) for migrations.",
    minimum => 0,
    default => 1,    # <== DEFAULT VALUE
},


I don't know if we really need a default value, because it always sets
migrate_downtime to 1.


Now, I don't know what really happens for you, because the recent change can set
migrate_downtime on the target VM (vm_mon_cmd_nocheck).
But I don't think it does anything, because migrate_set_downtime should be done
on the source VM.

Can you try to replace vm_mon_cmd_nocheck with vm_mon_cmd? (Then it should work
only at vm_start, but not when a live migration starts the VM on the target.)


Also, migrate_downtime should be set on the source VM before the migration begins
(QemuMigrate.pm). I don't know why we are setting it at VM start.




Re: [pve-devel] Balloon Device is the problem! Re: migration problems since qemu 1.3

2012-12-26 Thread Stefan Priebe

Hi,

Am 26.12.2012 17:40, schrieb Alexandre DERUMIER:

> I don't know if we really need a default value, because it always sets
> migrate_downtime to 1.

It also isn't accepted: you get the answer back that 1 isn't a number.
I don't know what format a number needs?

> Now, I don't know what really happens for you, because the recent change can set
> migrate_downtime on the target VM (vm_mon_cmd_nocheck).
> But I don't think it does anything, because migrate_set_downtime should be done
> on the source VM.

You get the error message that 1 isn't a number. If I get this message,
the migration fails afterwards.

> Can you try to replace vm_mon_cmd_nocheck with vm_mon_cmd? (Then it should work
> only at vm_start, but not when a live migration starts the VM on the target.)

Done - it works, see my other post.

Stefan


Re: [pve-devel] Balloon Device is the problem! Re: migration problems since qemu 1.3

2012-12-26 Thread Alexandre DERUMIER
> It also isn't accepted: you get the answer back that 1 isn't a number.
> I don't know what format a number needs?

The default migrate_downtime is 30 ms (if we don't send the qmp
command).
I think we set 1 second by default because of never-ending migrations (30 ms was
too short in the past with workloads that dirty memory quickly).
I see that the latest migration code from qemu git (1.4) seems to improve the
downtime a lot (from 500 ms down to 30 ms) with such workloads.
I don't know if qemu 1.3 works fine without setting the downtime to 1 second.

I think we need to cast the value to int for the json:

vm_mon_cmd($vmid, "migrate_set_downtime", value => $migrate_downtime);

->

vm_mon_cmd($vmid, "migrate_set_downtime", value => int($migrate_downtime));


I remember the same problem with qemu_block_set_io_throttle():

vm_mon_cmd($vmid, "block_set_io_throttle", device => $deviceid,
    bps => int($bps), bps_rd => int($bps_rd), bps_wr => int($bps_wr),
    iops => int($iops), iops_rd => int($iops_rd), iops_wr => int($iops_wr));

So maybe it sends garbage if the value is not cast?


Also, the value should not be an int but a float; the qmp doc says that we can use
values like 0.5 or 0.30.
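
(A minimal sketch, not from the thread, of why a plain numeric cast fits the float
case better than int(); encode_json is from the stock JSON module:)

use strict;
use warnings;
use JSON;

my $migrate_downtime = "0.5";    # values read from a config file arrive as strings

print encode_json({ value => int($migrate_downtime) }), "\n";  # {"value":0}   - int() truncates the float
print encode_json({ value => 0 + $migrate_downtime }), "\n";   # {"value":0.5} - numeric cast keeps it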



Also, query-migrate returns two cool new values about the downtime; I think we
should display them in the migration log:

- downtime: only present when migration has finished correctly;
  total amount in ms for downtime that happened (json-int)
- expected-downtime: only present while migration is active;
  total amount in ms for downtime that was calculated on
  the last bitmap round (json-int)
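
(A minimal sketch of how the migration loop could surface them, assuming
vm_mon_cmd returns the decoded QMP reply as in the code above; the field names
come from the qemu documentation just quoted, the log format is invented:)

my $stat = vm_mon_cmd($vmid, "query-migrate");
if ($stat->{status} eq 'active') {
    print "expected downtime: $stat->{'expected-downtime'} ms\n";
} elsif ($stat->{status} eq 'completed') {
    print "downtime: $stat->{downtime} ms\n";
}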



Re: [pve-devel] Balloon Device is the problem! Re: migration problems since qemu 1.3

2012-12-26 Thread Dietmar Maurer
> I remember the same problem with qemu_block_set_io_throttle():
>
> vm_mon_cmd($vmid, "block_set_io_throttle", device => $deviceid,
>     bps => int($bps), bps_rd => int($bps_rd), bps_wr => int($bps_wr),
>     iops => int($iops), iops_rd => int($iops_rd), iops_wr => int($iops_wr));
>
> So maybe it sends garbage if the value is not cast?

No, it sends a string value instead (value => "1").

> Also, the value should not be an int but a float; the qmp doc says that we can
> use values like 0.5 or 0.30.

Honestly, I am glad if migration works at all ;-) What is the use case of setting
it to 0.5 or 0.3?

Note: the current time estimation is wrong anyway, and it will always be a rough
estimate.
> Also, query-migrate returns two cool new values about the downtime; I think we
> should display them in the migration log:
>
> - downtime: only present when migration has finished correctly;
>   total amount in ms for downtime that happened (json-int)
> - expected-downtime: only present while migration is active;
>   total amount in ms for downtime that was calculated on
>   the last bitmap round (json-int)

Yes, that sounds interesting.



Re: [pve-devel] Balloon Device is the problem! Re: migration problems since qemu 1.3

2012-12-25 Thread Stefan Priebe

Hi,

OK, now it's getting crazy. When I execute the kvm command manually from the
shell without -daemonize, it works too?!?


Stefan



Re: [pve-devel] Balloon Device is the problem! Re: migration problems since qemu 1.3

2012-12-25 Thread Stefan Priebe
I can even start it with -daemonize from the shell. Migration works fine. It
just doesn't work when started from PVE.


Stefan



Re: [pve-devel] Balloon Device is the problem! Re: migration problems since qemu 1.3

2012-12-25 Thread Alexandre DERUMIER
> I can even start it with -daemonize from the shell. Migration works fine. It
> just doesn't work when started from PVE.

This is crazy ... I don't see any difference between starting it from the shell and
from PVE.

And if you remove the balloon device, is migration 100% working when started from
PVE?

Just to be sure, can you try "info balloon" from the human monitor console?
(I would like to see whether the balloon driver is working correctly.)
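
(For reference, a sketch of what a working balloon driver answers there; the
value, in MiB, is invented:)

(qemu) info balloon
balloon: actual=4096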

