Re: [pve-devel] pve-manager : expose balloon to gui

2012-12-18 Thread Alexandre DERUMIER
What guest OS (Windows version) and which virtio driver binaries do you use 
exactly?
I have tested it with win2003 R2 SP2 X64, with virtio build 48 
(http://people.redhat.com/vrozenfe/build-48/)

balloon driver install:

devcon install BALLOON.inf "PCI\VEN_1AF4&DEV_1002&SUBSYS_00051AF4&REV_00"

then install the service: (as administrator)
blnsrv.exe --install

and check that the balloon service is running.


that's all
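
For reference, that last check can also be scripted from inside the guest; a minimal Perl sketch, assuming the service registered by blnsrv.exe is named "BalloonService" (the name may differ between driver builds):

# Sketch only: query the Windows service control manager for the balloon service.
use strict;
use warnings;
use Win32::Service;

my %status;
if (Win32::Service::GetStatus('', 'BalloonService', \%status)) {
    # CurrentState 4 == SERVICE_RUNNING
    print $status{CurrentState} == 4
        ? "balloon service is running\n"
        : "balloon service installed but not running\n";
} else {
    print "balloon service not found\n";
}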




----- Original Message ----- 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Tuesday, 18 December 2012 14:47:47 
Subject: RE: [pve-devel] pve-manager : expose balloon to gui 

What guest OS (Windows version) and which virtio driver binaries do you use 
exactly? 

I just tested with win2008r2, and always get 

 "timer hasn't been enabled or guest doesn't support 'stat-free-memory'" 

I also ran 'blnsrv -i' inside the guest, but still can't get usable values. Any 
ideas? 


 -Original Message- 
 From: Alexandre DERUMIER [mailto:aderum...@odiso.com] 
 Sent: Montag, 10. Dezember 2012 00:03 
 To: Dietmar Maurer 
 Cc: pve-devel@pve.proxmox.com 
 Subject: Re: [pve-devel] pve-manager : expose balloon to gui 
 
 Are you sure? Because the source code seems to get all the balloon stats 
 (VIRTIO_BALLOON_S_SWAP_IN, VIRTIO_BALLOON_S_SWAP_OUT, VIRTIO_BALLOON_S_MEMFREE, ...): 
 https://github.com/YanVugenfirer/kvm-guest-drivers-windows/blob/master/Balloon/app/memstat.cpp 
 
 I'll retest it. 
 
 
 I have just tested it, using the latest balloon driver; it's working fine for me. (I 
 checked the values in perfmon, they are the same.) 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-polling-interval 
 {u'return': 10} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-last-update 
 {u'return': 1355091093} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory 
 {u'return': 1263800320} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-total-memory 
 {u'return': 2096644096} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-swap-in 
 {u'return': 0} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-swap-out 
 {u'return': 0} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-major-faults 
 {u'return': 4300} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-minor-faults 
 {u'return': 19507} 
 
 
 (But it can be easily extended to get some other counters; it's just 
 using a WMI query to read the perf counters.) 
 
 The buffer cache size can be retrieved with the "System Cache Resident Bytes" WMI 
 counter. 
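
 For illustration, a minimal Perl sketch of such a WMI perf-counter query, run inside the guest; the Win32::OLE / Win32_PerfRawData_PerfOS_Memory combination here is just an illustration, not code taken from the balloon driver (which does the equivalent in C++):

 # Sketch only: read the "System Cache Resident Bytes" counter via WMI.
 use strict;
 use warnings;
 use Win32::OLE qw(in);

 # connect to the local WMI service (root\cimv2 namespace)
 my $wmi = Win32::OLE->GetObject('winmgmts:\\\\.\\root\\cimv2')
     or die "cannot connect to WMI\n";

 my $rows = $wmi->ExecQuery(
     'SELECT SystemCacheResidentBytes FROM Win32_PerfRawData_PerfOS_Memory');

 foreach my $row (in $rows) {
     printf "System Cache Resident Bytes: %s\n", $row->{SystemCacheResidentBytes};
 }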
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager : expose balloon to gui

2012-12-18 Thread Alexandre DERUMIER
I'll do some tests with win2008.

(I was also sending stats-polling-interval every second from pvestatd, because it 
didn't work at start, but I think you have corrected that.)


Does manual ballooning work? Is it only a stats problem?


----- Original Message ----- 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Tuesday, 18 December 2012 15:22:55 
Subject: RE: [pve-devel] pve-manager : expose balloon to gui 

 What guest OS (Windows version) and which virtio driver binaries do you 
 use exactly? 
 I have tested it with win2003 R2 SP2 X64, with virtio build 48 
 (http://people.redhat.com/vrozenfe/build-48/) 
 
 balloon driver install: 
 
 devcon install BALLOON.inf 
 "PCI\VEN_1AF4&DEV_1002&SUBSYS_00051AF4&REV_00" 
 
 then install the service (as administrator): blnsrv.exe --install 
 
 and check that the balloon service is running. 

I tested with w2008r2 and win8 - but I can't get that to work. 

balloon driver is loaded, and blnsrv is running - but no stats :-/ 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager : expose balloon to gui

2012-12-18 Thread Alexandre DERUMIER
I have just redone the tests on my win2003 with the latest pve-qemu-kvm; it doesn't work 
anymore: 

(QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-polling-interval
{u'return': 0}
(QEMU) qom-set path=/machine/peripheral/balloon0 
property=stats-polling-interval value=10
{u'return': {}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-polling-interval
{u'return': 10}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or 
guest doesn't support 'stat-free-memory'"}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-last-update
{u'return': 1355842494}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-total-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or 
guest doesn't support 'stat-total-memory'"}}
(QEMU) 


...

It was working fine with qemu 1.3 + the first 2 balloon stats patches from the qemu 
mailing list.

Maybe something has changed?



----- Original Message ----- 

From: Alexandre DERUMIER aderum...@odiso.com 
To: Dietmar Maurer diet...@proxmox.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Tuesday, 18 December 2012 15:31:50 
Subject: Re: [pve-devel] pve-manager : expose balloon to gui 

I'll do some tests with win2008. 

(I was also sending stats-polling-interval every second from pvestatd, because it 
didn't work at start, but I think you have corrected that.) 


Does manual ballooning work? Is it only a stats problem? 


- Mail original - 

De: Dietmar Maurer diet...@proxmox.com 
À: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Envoyé: Mardi 18 Décembre 2012 15:22:55 
Objet: RE: [pve-devel] pve-manager : expose balloon to gui 

 What guest OS (Windows version) and which virtio driver binaries do you 
 use exactly? 
 I have tested it with win2003 R2 SP2 X64, with virtio build 48 
 (http://people.redhat.com/vrozenfe/build-48/) 
 
 balloon driver install: 
 
 devcon install BALLOON.inf 
 "PCI\VEN_1AF4&DEV_1002&SUBSYS_00051AF4&REV_00" 
 
 then install the service (as administrator): blnsrv.exe --install 
 
 and check that the balloon service is running. 

I tested with w2008r2 and win8 - but I can't get that to work. 

balloon driver is loaded, and blnsrv is running - but no stats :-/ 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager : expose balloon to gui

2012-12-18 Thread Alexandre DERUMIER
Oh, I found my balloon device twice in Windows (don't know if it's because 
of an upgrade of the balloon code...)

I deleted both, then restarted; Windows detected it and reinstalled it cleanly,

and now it's working:

(QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-polling-interval
{u'return': 0}
(QEMU) qom-set path=/machine/peripheral/balloon0 
property=stats-polling-interval value=10
{u'return': {}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-polling-interval
{u'return': 10}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'return': 2147483647}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-total-memory
{u'return': 2147483647}



I'll do a test with win2008R2 tomorrow



----- Original Message ----- 

From: Alexandre DERUMIER aderum...@odiso.com 
To: Dietmar Maurer diet...@proxmox.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Tuesday, 18 December 2012 15:57:54 
Subject: Re: [pve-devel] pve-manager : expose balloon to gui 

I have just redone the tests on my win2003 with the latest pve-qemu-kvm; it doesn't work 
anymore: 

(QEMU) qom-get path=/machine/peripheral/balloon0 
property=stats-polling-interval 
{u'return': 0} 
(QEMU) qom-set path=/machine/peripheral/balloon0 
property=stats-polling-interval value=10 
{u'return': {}} 
(QEMU) qom-get path=/machine/peripheral/balloon0 
property=stats-polling-interval 
{u'return': 10} 
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory 
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or 
guest doesn't support 'stat-free-memory'"}} 
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-last-update 
{u'return': 1355842494} 
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-total-memory 
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or 
guest doesn't support 'stat-total-memory'"}} 
(QEMU) 


... 

It was working fine with qemu 1.3 + the first 2 balloon stats patches from the qemu 
mailing list. 

Maybe something has changed? 



- Mail original - 

De: Alexandre DERUMIER aderum...@odiso.com 
À: Dietmar Maurer diet...@proxmox.com 
Cc: pve-devel@pve.proxmox.com 
Envoyé: Mardi 18 Décembre 2012 15:31:50 
Objet: Re: [pve-devel] pve-manager : expose balloon to gui 

I'll do some tests with win2008. 

(I was also sending stats-polling-interval each second with pvestatd because I 
didn't work at start, but I think you have corrected that) 


Does manual balloning work ? Is it only a stats problem ? 


- Mail original - 

De: Dietmar Maurer diet...@proxmox.com 
À: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Envoyé: Mardi 18 Décembre 2012 15:22:55 
Objet: RE: [pve-devel] pve-manager : expose balloon to gui 

 What guest OS (windows version) and what binaries virtio drivers do you 
 use exactly? 
 I have tested it with win2003 R2 SP2 X64, with virtio build48 
 (http://people.redhat.com/vrozenfe/build-48/) 
 
 ballon driver install: 
 
 devcon install BALLOON.inf 
 "PCI\VEN_1AF4&DEV_1002&SUBSYS_00051AF4&REV_00" 
 
 then install the service: (as administrator) blnsrv.exe --install 
 
 and check that the balloon service is running. 

I tested with w2008r2 and win8 - but I can't get that to work. 

balloon driver is loaded, and blnsrv is running - but no stats :-/ 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager : expose balloon to gui

2012-12-18 Thread Alexandre DERUMIER
And you still run w2003?
yes. (and it works with the latest pve-qemu-kvm git, no need to revert)


 Can you try with w2008r2 or win8?

I'll try win2008R2 today (I don't have a win8 image yet)
 





Alexandre Derumier 

Systems and Networks Engineer 


Phone: 03 20 68 88 85 

Fax: 03 20 68 90 88 


45 Bvd du Général Leclerc 59100 Roubaix 
12 rue Marivaux 75002 Paris 


----- Original Message ----- 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Tuesday, 18 December 2012 18:01:10 
Subject: RE: [pve-devel] pve-manager : expose balloon to gui 

 Oh, I found my balloon device twice in Windows (don't know if it's because 
 of an upgrade of the balloon code...) 
 
 I deleted both, then restarted; Windows detected it and reinstalled it cleanly, 
 
 and now it's working 

And you still run w2003? Can you try with w2008r2 or win8? 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] balloon stats on win2008r2 working

2012-12-19 Thread Alexandre DERUMIER
- I have just installed a win2008R2 (X64)

install the balloon driver:

- go to hardware device management, find the PCI device without a driver
 - update driver -> choose disk

  then, using the 0.48 virtio drivers, choose the whl directory



then install the service (I put the files in c:\program files\balloon)

#cd c:\program files\balloon
#blnsrv --install

#reboot

then the stats work

(QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-polling-interval
{u'return': 0}
(QEMU) qom-set path=/machine/peripheral/balloon0 
property=stats-polling-interval value=10
{u'return': 10}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-polling-interval
{u'return': 10}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-last-update
{u'return': 1355914831}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-last-update
{u'return': 1355914841}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'return': 1765306368}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-total-memory
{u'return': 2147074048}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-swap-in
{u'return': 2440}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-swap-out
{u'return': 0}
(QEMU) 


I haven't tested it yet with the polling interval set at start;
I'll try with your latest qemu-server today.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] balloon stats on win2008r2 working

2012-12-19 Thread Alexandre DERUMIER
It seems that the Windows balloon driver doesn't pick up stats-polling-interval at start, 
and sometimes also not while running...

VM started with polling interval 2:

(QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-polling-interval
{u'return': 2}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or guest doesn't support 'stat-free-memory'"}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or guest doesn't support 'stat-free-memory'"}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or guest doesn't support 'stat-free-memory'"}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or guest doesn't support 'stat-free-memory'"}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or guest doesn't support 'stat-free-memory'"}}


(QEMU) qom-set path=/machine/peripheral/balloon0 property=stats-polling-interval value=2
{u'return': {}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or guest doesn't support 'stat-free-memory'"}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or guest doesn't support 'stat-free-memory'"}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or guest doesn't support 'stat-free-memory'"}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or guest doesn't support 'stat-free-memory'"}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or guest doesn't support 'stat-free-memory'"}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or guest doesn't support 'stat-free-memory'"}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or guest doesn't support 'stat-free-memory'"}}
(QEMU) qom-set path=/machine/peripheral/balloon0 property=stats-polling-interval value=2
{u'return': {}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'return': 1770397696}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'return': 1770397696}


...

----- Original Message ----- 

From: Alexandre DERUMIER aderum...@odiso.com 
To: pve-devel@pve.proxmox.com 
Sent: Wednesday, 19 December 2012 12:05:06 
Subject: [pve-devel] balloon stats on win2008r2 working 

- I have just installed a win2008R2 (X64) 

install the balloon driver: 

- go to hardware device management, find the PCI device without a driver 
- update driver -> choose disk 

then, using the 0.48 virtio drivers, choose the whl directory 



then install service (I put files in c:\program files\balloon) 

#cd c:\program files\balloon 
#blnsrv --install 

#reboot 

then stats works 

(QEMU) qom-get path=/machine/peripheral/balloon0 
property=stats-polling-interval 
{u'return': 0} 
(QEMU) qom-set path=/machine/peripheral/balloon0 
property=stats-polling-interval value=10 
{u'return': 10} 
(QEMU) qom-get path=/machine/peripheral/balloon0 
property=stats-polling-interval 
{u'return': 10} 
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-last-update 
{u'return': 1355914831} 
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-last-update 
{u'return': 1355914841} 
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory 
{u'return': 1765306368} 
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-total-memory 
{u'return': 2147074048} 
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-swap-in 
{u'return': 2440} 
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-swap-out 
{u'return': 0} 
(QEMU) 


I haven't tested it yet with the polling interval set at start; 
I'll try with your latest qemu-server today 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] balloon stats on win2008r2 working

2012-12-19 Thread Alexandre DERUMIER
Works for me,
but I need to send a stats-polling-interval different from the one set at start
(maybe it's not resent to the balloon driver if the value is equal).

I don't know why the balloon device works differently on Windows, but it seems to get 
the stats-polling-interval value only once Windows is fully started.


working (started with interval=2)

(QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-polling-interval
{u'return': 2}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or 
guest doesn't support 'stat-free-memory'"}}
(QEMU) qom-set path=/machine/peripheral/balloon0 
property=stats-polling-interval value=3
{u'return': {}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'return': 1765240832}

non working (started with interval=2)

(QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-polling-interval
{u'return': 2}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or 
guest doesn't support 'stat-free-memory'"}}
(QEMU) qom-set path=/machine/peripheral/balloon0 
property=stats-polling-interval value=2
{u'return': {}}
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or 
guest doesn't support 'stat-free-memory'"}}


----- Original Message ----- 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre DERUMIER aderum...@odiso.com, pve-devel@pve.proxmox.com 
Sent: Wednesday, 19 December 2012 12:45:31 
Subject: RE: [pve-devel] balloon stats on win2008r2 working 

that does not work here :-/ 

 -Original Message- 
 From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel- 
 boun...@pve.proxmox.com] On Behalf Of Alexandre DERUMIER 
 Sent: Mittwoch, 19. Dezember 2012 12:05 
 To: pve-devel@pve.proxmox.com 
 Subject: [pve-devel] balloon stats on win2008r2 working 
 
 - I have just installed a win2008R2 (X64) 
 
 install the balloon driver: 
 
 - go to hardware device management, find the PCI device without a driver 
 - update driver -> choose disk 
 
 then, using the 0.48 virtio drivers, choose the whl directory 
 
 
 
 then install service (I put files in c:\program files\balloon) 
 
 #cd c:\program files\balloon 
 #blnsrv --install 
 
 #reboot 
 
 then stats works 
 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stats- 
 polling-interval 
 {u'return': 0} 
 (QEMU) qom-set path=/machine/peripheral/balloon0 property=stats- 
 polling-interval value=10 
 {u'return': 10} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stats- 
 polling-interval 
 {u'return': 10} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-last- 
 update 
 {u'return': 1355914831} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-last- 
 update 
 {u'return': 1355914841} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free- 
 memory 
 {u'return': 1765306368} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-total- 
 memory 
 {u'return': 2147074048} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-swap-in 
 {u'return': 2440} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-swap- 
 out 
 {u'return': 0} 
 (QEMU) 
 
 
 I haven't tested it yet with the polling interval set at start; I'll try with 
 your latest 
 qemu-server today 
 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] balloon stats on win2008r2 working

2012-12-19 Thread Alexandre DERUMIER
I have found a simple way to get it working.


- Set the polling interval at start (2) (current implementation).
  During boot, balloon info returns only actual & max_mem.

- Once Windows has booted, with the balloon service on,
  balloon info will give us the last_update value, but not free_mem.

  (The last_update value is only returned when the balloon service is working.)

- Then we simply need to launch
  qom-set path=/machine/peripheral/balloon0 property=stats-polling-interval value=3   (!= the initial value)

  and now balloon info returns free_mem.


Note also that the last_update value does not increase before we resend the qom-set, so we can 
also check that: if it's not increasing, resend the qom-set.


What do you think?
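
For illustration, a minimal Perl sketch of that alternative check (re-arming the timer when stats-last-update stops advancing between polls). vm_mon_cmd and the qom-set arguments are taken from this thread; the %last_seen cache and the function name are hypothetical, this is not code from any posted patch:

# Sketch only: re-arm the balloon stats timer when last_update stops advancing.
use strict;
use warnings;
use PVE::QemuServer;

my %last_seen;    # hypothetical per-VM cache of the last seen timestamp

sub check_balloon_stats_sketch {
    my ($vmid, $info) = @_;    # $info as returned by query-balloon

    my $lu = $info->{last_update};
    return if !defined($lu);   # balloon service not running yet

    if (defined($last_seen{$vmid}) && $last_seen{$vmid} == $lu) {
        # timestamp did not advance since the last poll - toggle the
        # polling interval so the Windows guest driver picks it up again
        PVE::QemuServer::vm_mon_cmd($vmid, 'qom-set',
            path => "machine/peripheral/balloon0",
            property => "stats-polling-interval",
            value => 0);
        PVE::QemuServer::vm_mon_cmd($vmid, 'qom-set',
            path => "machine/peripheral/balloon0",
            property => "stats-polling-interval",
            value => 2);
    }
    $last_seen{$vmid} = $lu;
}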

----- Original Message ----- 

From: Alexandre DERUMIER aderum...@odiso.com 
To: Dietmar Maurer diet...@proxmox.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Wednesday, 19 December 2012 13:09:35 
Subject: Re: [pve-devel] balloon stats on win2008r2 working 

Works for me, 
but I need to send a stats-polling-interval != from the start 
stats-polling-interval 
(maybe it's not resend to the balloon driver if the value is equal) 

Don't know why balloon device works differently on windows, but it seem to get 
the stats-polling-interval value, only when windows is fully started. 


working (started with interval=2) 
 
(QEMU) qom-get path=/machine/peripheral/balloon0 
property=stats-polling-interval 
{u'return': 2} 
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory 
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or 
guest doesn't support 'stat-free-memory'"}} 
(QEMU) qom-set path=/machine/peripheral/balloon0 
property=stats-polling-interval value=3 
{u'return': {}} 
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory 
{u'return': 1765240832} 

non working (started with interval=2) 
 
(QEMU) qom-get path=/machine/peripheral/balloon0 
property=stats-polling-interval 
{u'return': 2} 
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory 
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or 
guest doesn't support 'stat-free-memory'"}} 
(QEMU) qom-set path=/machine/peripheral/balloon0 
property=stats-polling-interval value=2 
{u'return': {}} 
(QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free-memory 
{u'error': {u'class': u'GenericError', u'desc': u"timer hasn't been enabled or 
guest doesn't support 'stat-free-memory'"}} 


- Mail original - 

De: Dietmar Maurer diet...@proxmox.com 
À: Alexandre DERUMIER aderum...@odiso.com, pve-devel@pve.proxmox.com 
Envoyé: Mercredi 19 Décembre 2012 12:45:31 
Objet: RE: [pve-devel] balloon stats on win2008r2 working 

that does not work here :-/ 

 -Original Message- 
 From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel- 
 boun...@pve.proxmox.com] On Behalf Of Alexandre DERUMIER 
 Sent: Mittwoch, 19. Dezember 2012 12:05 
 To: pve-devel@pve.proxmox.com 
 Subject: [pve-devel] balloon stats on win2008r2 working 
 
 - I have just installed a win2008R2 (X64) 
 
 install ballon driver 
 
 -go to hardware devices management, find pci device without driver. 
 - update driver-choose disk 
 
 then using 0.48 virtio driver, choose the whl directory 
 
 
 
 then install service (I put files in c:\program files\balloon) 
 
 #cd c:\program files\balloon 
 #blnsrv --install 
 
 #reboot 
 
 then stats works 
 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stats- 
 polling-interval 
 {u'return': 0} 
 (QEMU) qom-set path=/machine/peripheral/balloon0 property=stats- 
 polling-interval value=10 
 {u'return': 10} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stats- 
 polling-interval 
 {u'return': 10} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-last- 
 update 
 {u'return': 1355914831} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stats-last- 
 update 
 {u'return': 1355914841} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-free- 
 memory 
 {u'return': 1765306368} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-total- 
 memory 
 {u'return': 2147074048} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-swap-in 
 {u'return': 2440} 
 (QEMU) qom-get path=/machine/peripheral/balloon0 property=stat-swap- 
 out 
 {u'return': 0} 
 (QEMU) 
 
 
 I didn't have tested it with setting polling interval at start, I'll try with 
 your last 
 qemu-serve today 
 
 ___ 
 pve-devel mailing list 
 pve-devel@pve.proxmox.com 
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 

[pve-devel] [PATCH] balloon : reset pooling if balloon driver doesn't return memory stats

2012-12-19 Thread Alexandre Derumier
fix windows stats (tested on win2003 & win2008R2)

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |   13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 81a9351..7569d55 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2057,6 +2057,19 @@ sub vmstatus {
 	    $d->{freemem} = $info->{free_mem};
 	}
 
+	if (defined($info->{last_update}) && !defined($info->{free_mem})) {
+	    $qmpclient->queue_cmd($vmid, undef, 'qom-set',
+			path => "machine/peripheral/balloon0",
+			property => "stats-polling-interval",
+			value => 0);
+
+	    $qmpclient->queue_cmd($vmid, undef, 'qom-set',
+			path => "machine/peripheral/balloon0",
+			property => "stats-polling-interval",
+			value => 2);
+	}
+
+
     };
 
 my $blockstatscb = sub {
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] RFC : pve-manager : screenshot of template-cloning feature

2012-12-19 Thread Alexandre DERUMIER
Thanks for the feedback.

I'll try to get some designer resources at work for the icons ;)

I don't know, but maybe there is some open-source icon library for this?

----- Original Message ----- 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre DERUMIER aderum...@odiso.com, pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 07:37:41 
Subject: RE: [pve-devel] RFC : pve-manager : screenshot of template-cloning 
feature 

 - VMs with templates have a new icon in the tree (the icons are just crappy samples 
 ;) 

Yes, we need better icons. 

 (maybe this can be improved with a template view? But this requires a new 
 type for templates to regroup them, and I'm a bit lost in the ExtJS code for the 
 moment) 

That would involve a lot of duplicated code, so I would prefer using the 
snapshot panel. 

 - we manage templates & clones in the snapshot panel. (we can create 
 templates on a snapshot or on NOW (current)). 
 - template icons are displayed when the snapshot or NOW entries are templates. 

I think that is quite OK. 

 then the clone panel: 
 
 we can choose between linked or copy clone mode, 
 
 if copy clone mode, we can choose a mandatory storage destination. 
 (I just put 1 target storage for all disks to not overload the GUI, but it can be 
 extended to choose 1 different target per disk) 

looks good. 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] balloon : reset pooling if balloon driver doesn't return memory stats

2012-12-20 Thread Alexandre DERUMIER
I uploaded a fixed version - please test. 

Works perfectly! (win2003 & win2008R2)

Thanks!



----- Original Message ----- 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre Derumier aderum...@odiso.com, pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 09:23:35 
Subject: RE: [pve-devel] [PATCH] balloon : reset pooling if balloon driver 
doesn't return memory stats 

Just found the bug - I forgot to re-arm the timer. 

I uploaded a fixed version - please test. 

 -Original Message- 
 From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel- 
 boun...@pve.proxmox.com] On Behalf Of Alexandre Derumier 
 Sent: Mittwoch, 19. Dezember 2012 16:53 
 To: pve-devel@pve.proxmox.com 
 Subject: [pve-devel] [PATCH] balloon : reset pooling if balloon driver 
 doesn't 
 return memory stats 
 
 fix windows stats (tested on win2003 & win2008R2) 
 
 Signed-off-by: Alexandre Derumier aderum...@odiso.com 
 --- 
 PVE/QemuServer.pm | 13 + 
 1 file changed, 13 insertions(+) 
 
 diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 
 81a9351..7569d55 100644 
 --- a/PVE/QemuServer.pm 
 +++ b/PVE/QemuServer.pm 
 @@ -2057,6 +2057,19 @@ sub vmstatus { 
 $d-{freemem} = $info-{free_mem}; 
 } 
 
 + if (defined($info-{last_update})  !defined($info-{free_mem})){ 
 + $qmpclient-queue_cmd($vmid, undef, 'qom-set', 
 + path = machine/peripheral/balloon0, 
 + property = stats-polling-interval, 
 + value = 0); 
 + 
 + $qmpclient-queue_cmd($vmid, undef, 'qom-set', 
 + path = machine/peripheral/balloon0, 
 + property = stats-polling-interval, 
 + value = 2); 
 + } 
 + 
 + 
 }; 
 
 my $blockstatscb = sub { 
 -- 
 1.7.10.4 
 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] migration problems since qemu 1.3

2012-12-20 Thread Alexandre DERUMIER
Do you mean migration from qemu-kvm 1.2 -> qemu 1.3?

(because that's not supported)

or migration from qemu 1.3 -> qemu 1.3?


Also, I don't know if it's related, but in the changelog:
http://wiki.qemu.org/ChangeLog/1.3

Live Migration, Save/Restore
The "stop" and "cont" commands have new semantics on the destination machine 
during migration. Previously, the outcome depended on whether the commands were 
issued before or after the source connected to the destination QEMU: in 
particular, "cont" would fail if issued before connection, and undo the 
effect of the -S command-line option if issued after. Starting from this 
version, the effect of "stop" and "cont" will always take place at the end of 
migration (overriding the presence or absence of the -S option) and "cont" will 
never fail. This change should be transparent, since the old behavior was 
usually subject to a race condition.


----- Original Message ----- 

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
To: pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 09:46:01 
Subject: [pve-devel] migration problems since qemu 1.3 

Hello list, 

I have massive migration problems since switching to qemu 1.3. Mostly the 
migration just hangs, never finishes, and suddenly the VM is just dead / 
not running anymore. 

Has anybody seen this too? 

Greets, 
Stefan 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] live migration doesn't start with balloon enabled

2012-12-20 Thread Alexandre DERUMIER
Hi dietmar,

with balloon, migration doesn't start,
because of the QMP commands sent at start:



    if (!defined($conf->{balloon}) || $conf->{balloon}) {
	vm_mon_cmd($vmid, "balloon", value => $conf->{balloon}*1024*1024)
	    if $conf->{balloon};

	vm_mon_cmd($vmid, 'qom-set',
		   path => "machine/peripheral/balloon0",
		   property => "stats-polling-interval",
		   value => 2);
    }



sub vm_qmp_command {

    die "VM $vmid not running\n" if !check_running($vmid, $nocheck);



(we should pass $migratedfrom down to check_running)



Do you know how to handle that? (passing migratedfrom through the different subs?)
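
A minimal sketch of one way to handle it, building on the $nocheck flag that check_running() already has; the wrapper name and the nocheck propagation below are hypothetical illustrations only (the patch later in this thread solves it with the existing vm_mon_cmd_nocheck helper instead):

# Sketch only: let callers that run during an incoming migration skip the
# local config-file check, since the config still lives on the source node.
sub vm_mon_cmd_sketch {
    my ($vmid, $execute, %params) = @_;

    my $nocheck = delete $params{nocheck};   # hypothetical extra flag

    # with $nocheck set, check_running() skips the "config file must exist"
    # test and only checks that the VM process is actually running
    die "VM $vmid not running\n"
        if !PVE::QemuServer::check_running($vmid, $nocheck);

    # ... build the QMP command from $execute/%params and send it as before ...
}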


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] live migration doesn't start with balloon enabled

2012-12-20 Thread Alexandre DERUMIER
or maybe in

sub check_running {
    die "unable to find configuration file for VM $vmid - no such machine\n"
	if !$nocheck && ! -f $filename;


do we really need to check that $filename exists?



----- Original Message ----- 

From: Alexandre DERUMIER aderum...@odiso.com 
To: pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 11:38:18 
Subject: [pve-devel] live migration doesn't start with balloon enabled 

Hi dietmar, 

with balloon, migration doesn't start, 
because of qmp command send at start: 



if (!defined($conf->{balloon}) || $conf->{balloon}) { 
vm_mon_cmd($vmid, "balloon", value => $conf->{balloon}*1024*1024) 
    if $conf->{balloon}; 

vm_mon_cmd($vmid, 'qom-set', 
    path => "machine/peripheral/balloon0", 
    property => "stats-polling-interval", 
    value => 2); 
} 



sub vm_qmp_command { 

die "VM $vmid not running\n" if !check_running($vmid, $nocheck); 



(we should pass $migratefrom to check_running) 



do you know how to handle that ? (passing migratedfrom in differents subs ?) 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] migration problems since qemu 1.3

2012-12-20 Thread Alexandre DERUMIER
With the latest git, I think it's related to the balloon driver being enabled by default and 
the QMP command sent at start (see my previous mail).


can you try to replace (in QemuServer.pm)

    if (!defined($conf->{balloon}) || $conf->{balloon}) {
	vm_mon_cmd($vmid, "balloon", value => $conf->{balloon}*1024*1024)
	    if $conf->{balloon};

	vm_mon_cmd($vmid, 'qom-set',
		   path => "machine/peripheral/balloon0",
		   property => "stats-polling-interval",
		   value => 2);
    }

by

    if (!defined($conf->{balloon}) || $conf->{balloon}) {
	vm_mon_cmd_nocheck($vmid, "balloon", value => $conf->{balloon}*1024*1024)
	    if $conf->{balloon};

	vm_mon_cmd_nocheck($vmid, 'qom-set',
		   path => "machine/peripheral/balloon0",
		   property => "stats-polling-interval",
		   value => 2);
    }


(vm_mon_cmd_nocheck)

----- Original Message ----- 

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 11:48:06 
Subject: Re: [pve-devel] migration problems since qemu 1.3 

Hi, 

Am 20.12.2012 10:04, schrieb Alexandre DERUMIER: 
 Yes. It works fine with NEWLY started VMs, but if the VMs have been running 
 for more than 1-3 days it stops working and the VMs just crash during 
 migration. 
 Maybe a VM running for 1-3 days has more memory in use, so it takes more time to 
 live migrate. 

I see totally different outputs - the vm crashes and the status output 
stops. 

with git from yesterday i'm just getting this: 
-- 
Dec 20 11:34:21 starting migration of VM 100 to node 'cloud1-1203' 
(10.255.0.22) 
Dec 20 11:34:21 copying disk images 
Dec 20 11:34:21 starting VM 100 on remote node 'cloud1-1203' 
Dec 20 11:34:23 ERROR: online migrate failure - command '/usr/bin/ssh -o 
'BatchMode=yes' root@10.255.0.22 qm start 100 --stateuri tcp --skiplock 
--migratedfrom cloud1-1202' failed: exit code 255 
Dec 20 11:34:23 aborting phase 2 - cleanup resources 
Dec 20 11:34:24 ERROR: migration finished with problems (duration 00:00:03) 
TASK ERROR: migration problems 
-- 


 Does it crash at start of the migration ? or in the middle of the migration ? 

At the beginning mostly i see no more output after: 
migration listens on port 6 


 what is your vm conf ? (memory size, storage ?) 
2GB mem, RBD / Ceph Storage 

Stefan 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] use vm_mon_cmd_nocheck for balloon qmp command at vm_start

2012-12-20 Thread Alexandre Derumier
fix live migration, as we don't have the vm config file

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 81a9351..9c64757 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3001,9 +3001,9 @@ sub vm_start {
 	# fixme: how do we handle that on migration?
 
 	if (!defined($conf->{balloon}) || $conf->{balloon}) {
-	    vm_mon_cmd($vmid, "balloon", value => $conf->{balloon}*1024*1024) 
+	    vm_mon_cmd_nocheck($vmid, "balloon", value => $conf->{balloon}*1024*1024) 
 		if $conf->{balloon};
-	    vm_mon_cmd($vmid, 'qom-set', 
+	    vm_mon_cmd_nocheck($vmid, 'qom-set', 
 		       path => "machine/peripheral/balloon0", 
 		       property => "stats-polling-interval", 
 		       value => 2);
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] auto balloning with mom algorithm implementation

2012-12-20 Thread Alexandre DERUMIER
OK, thanks, I'll try that this afternoon!



----- Original Message ----- 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre Derumier aderum...@odiso.com, pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 14:02:38 
Subject: RE: [pve-devel] [PATCH] auto balloning with mom algorithm implementation 

Hi Alexandre, 

I have finally uploaded an auto-ballooning implementation for pvestatd. 

The algorithm uses the new 'shares' property to distribute RAM accordingly. 

Feel free to test :-) 

- Dietmar 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] qemu-server : allow manual ballooning if shares is not defined

2012-12-20 Thread Alexandre Derumier
Allow manual ballooning (qm set --balloon XXX) if shares is not defined or = 0.

If balloon = 0, we set the balloon target to max_memory.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] allow manual ballooning if shares is not enabled

2012-12-20 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/API2/Qemu.pm |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 09ab1e7..9bdbc0a 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -897,9 +897,15 @@ __PACKAGE__->register_method({
 
 		$vmconfig_update_net($rpcenv, $authuser, $conf, $storecfg, $vmid,
 				     $opt, $param->{$opt});
-
 	    } else {
 
+		if ($opt eq 'balloon' && ((!$conf->{shares}) || ($conf->{shares} && $conf->{shares} == 0))) {
+
+		    my $balloontarget = $param->{$opt};
+		    $balloontarget = $conf->{memory} if $param->{$opt} == 0;
+		    PVE::QemuServer::vm_mon_cmd_nocheck($vmid, "balloon", value => $balloontarget*1024*1024);
+		}
+
 		$conf->{$opt} = $param->{$opt};
 		PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
 	    }
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] migration problems since qemu 1.3

2012-12-20 Thread Alexandre DERUMIER
i had it again. 
Have you applied the fix from today about ballooning? 
https://git.proxmox.com/?p=qemu-server.git;a=commit;h=95381ce06cea266d40911a7129da6067a1640cbf

I even cannot connect anymore through the console to this VM. 

Hmm, it seems that something breaks QMP on the source VM... 
Is the source VM running? (Is ssh working?)




----- Original Message ----- 

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 15:27:53 
Subject: Re: [pve-devel] migration problems since qemu 1.3 

Hi, 

i had it again. 

Migration hangs at: 
Dec 20 15:23:03 starting migration of VM 107 to node 'cloud1-1202' 
(10.255.0.20) 
Dec 20 15:23:03 copying disk images 
Dec 20 15:23:03 starting VM 107 on remote node 'cloud1-1202' 
Dec 20 15:23:06 starting migration tunnel 
Dec 20 15:23:06 starting online/live migration on port 6 

I even cannot connect anymore through the console to this VM. 

Stefan 

Am 20.12.2012 12:31, schrieb Stefan Priebe - Profihost AG: 
 Hi, 
 
 at least migration works at all ;-) I'll wait until tomorrow and test 
 again. I've restarted all VMs with latest pve-qemu-kvm. 
 
 Thanks! 
 
 Am 20.12.2012 11:57, schrieb Alexandre DERUMIER: 
 with last git, I think it's related to balloon driver enabled by 
 default, and qmp command send (see my previous mail). 
 
 
 can you try to replace (in QemuServer.pm) 
 
 if (!defined($conf-{balloon}) || $conf-{balloon}) { 
 vm_mon_cmd($vmid, balloon, value = 
 $conf-{balloon}*1024*1024) 
 if $conf-{balloon}; 
 
 vm_mon_cmd($vmid, 'qom-set', 
 path = machine/peripheral/balloon0, 
 property = stats-polling-interval, 
 value = 2); 
 } 
 
 by 
 
 if (!defined($conf-{balloon}) || $conf-{balloon}) { 
 vm_mon_cmd_nocheck($vmid, balloon, value = 
 $conf-{balloon}*1024*1024) 
 if $conf-{balloon}; 
 
 vm_mon_cmd_nocheck($vmid, 'qom-set', 
 path = machine/peripheral/balloon0, 
 property = stats-polling-interval, 
 value = 2); 
 } 
 
 
 (vm_mon_cmd_nocheck) 
 
 - Mail original - 
 
 De: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
 À: Alexandre DERUMIER aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com 
 Envoyé: Jeudi 20 Décembre 2012 11:48:06 
 Objet: Re: [pve-devel] migration problems since qemu 1.3 
 
 Hi, 
 
 Am 20.12.2012 10:04, schrieb Alexandre DERUMIER: 
 Yes. It works fine with NEWLY started VMs but if the VMs are running 
 more than 1-3 days. It stops working and the VMs just crahs during 
 migration. 
 Maybe vm running since 1-3 days,have more memory used, so I take more 
 time to live migrate. 
 
 I see totally different outputs - the vm crashes and the status output 
 stops. 
 
 with git from yesterday i'm just getting this: 
 -- 
 Dec 20 11:34:21 starting migration of VM 100 to node 'cloud1-1203' 
 (10.255.0.22) 
 Dec 20 11:34:21 copying disk images 
 Dec 20 11:34:21 starting VM 100 on remote node 'cloud1-1203' 
 Dec 20 11:34:23 ERROR: online migrate failure - command '/usr/bin/ssh -o 
 'BatchMode=yes' root@10.255.0.22 qm start 100 --stateuri tcp --skiplock 
 --migratedfrom cloud1-1202' failed: exit code 255 
 Dec 20 11:34:23 aborting phase 2 - cleanup resources 
 Dec 20 11:34:24 ERROR: migration finished with problems (duration 
 00:00:03) 
 TASK ERROR: migration problems 
 -- 
 
 
 Does it crash at start of the migration ? or in the middle of the 
 migration ? 
 
 At the beginning mostly i see no more output after: 
 migration listens on port 6 
 
 
 what is your vm conf ? (memory size, storage ?) 
 2GB mem, RBD / Ceph Storage 
 
 Stefan 
 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] migration problems since qemu 1.3

2012-12-20 Thread Alexandre DERUMIER
Just an idea (not sure it's the problem): can you try to comment out

$qmpclient->queue_cmd($vmid, $ballooncb, 'query-balloon');

in QemuServer.pm, line 2081,

and restart pvedaemon & pvestatd?



----- Original Message ----- 

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 15:50:38 
Subject: Re: [pve-devel] migration problems since qemu 1.3 

Hi, 
Am 20.12.2012 15:49, schrieb Alexandre DERUMIER: 
 i had it again. 
 Do you have applied the fix from today about balloning ? 
 https://git.proxmox.com/?p=qemu-server.git;a=commit;h=95381ce06cea266d40911a7129da6067a1640cbf
  

Yes. 

 I even canot connect anymore through console to this VM. 
 
 mmm, seem that something break qmp on source vm... 
 Is the source vm running ? (is ssh working?) 
It is marked as running and the kvm process is still there. But no service 
is running anymore - so I cannot even connect via ssh anymore. 

Stefan 

 - Mail original - 
 
 De: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
 À: Alexandre DERUMIER aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com 
 Envoyé: Jeudi 20 Décembre 2012 15:27:53 
 Objet: Re: [pve-devel] migration problems since qemu 1.3 
 
 Hi, 
 
 i had it again. 
 
 Migration hangs at: 
 Dec 20 15:23:03 starting migration of VM 107 to node 'cloud1-1202' 
 (10.255.0.20) 
 Dec 20 15:23:03 copying disk images 
 Dec 20 15:23:03 starting VM 107 on remote node 'cloud1-1202' 
 Dec 20 15:23:06 starting migration tunnel 
 Dec 20 15:23:06 starting online/live migration on port 6 
 
 I even canot connect anymore through console to this VM. 
 
 Stefan 
 
 Am 20.12.2012 12:31, schrieb Stefan Priebe - Profihost AG: 
 Hi, 
 
 at least migration works at all ;-) I'll wait until tomorrow and test 
 again. I've restarted all VMs with latest pve-qemu-kvm. 
 
 Thanks! 
 
 Am 20.12.2012 11:57, schrieb Alexandre DERUMIER: 
 with last git, I think it's related to balloon driver enabled by 
 default, and qmp command send (see my previous mail). 
 
 
 can you try to replace (in QemuServer.pm) 
 
 if (!defined($conf-{balloon}) || $conf-{balloon}) { 
 vm_mon_cmd($vmid, balloon, value = 
 $conf-{balloon}*1024*1024) 
 if $conf-{balloon}; 
 
 vm_mon_cmd($vmid, 'qom-set', 
 path = machine/peripheral/balloon0, 
 property = stats-polling-interval, 
 value = 2); 
 } 
 
 by 
 
 if (!defined($conf-{balloon}) || $conf-{balloon}) { 
 vm_mon_cmd_nocheck($vmid, balloon, value = 
 $conf-{balloon}*1024*1024) 
 if $conf-{balloon}; 
 
 vm_mon_cmd_nocheck($vmid, 'qom-set', 
 path = machine/peripheral/balloon0, 
 property = stats-polling-interval, 
 value = 2); 
 } 
 
 
 (vm_mon_cmd_nocheck) 
 
 - Mail original - 
 
 De: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
 À: Alexandre DERUMIER aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com 
 Envoyé: Jeudi 20 Décembre 2012 11:48:06 
 Objet: Re: [pve-devel] migration problems since qemu 1.3 
 
 Hi, 
 
 Am 20.12.2012 10:04, schrieb Alexandre DERUMIER: 
 Yes. It works fine with NEWLY started VMs but if the VMs are running 
 more than 1-3 days. It stops working and the VMs just crahs during 
 migration. 
 Maybe vm running since 1-3 days,have more memory used, so I take more 
 time to live migrate. 
 
 I see totally different outputs - the vm crashes and the status output 
 stops. 
 
 with git from yesterday i'm just getting this: 
 -- 
 Dec 20 11:34:21 starting migration of VM 100 to node 'cloud1-1203' 
 (10.255.0.22) 
 Dec 20 11:34:21 copying disk images 
 Dec 20 11:34:21 starting VM 100 on remote node 'cloud1-1203' 
 Dec 20 11:34:23 ERROR: online migrate failure - command '/usr/bin/ssh -o 
 'BatchMode=yes' root@10.255.0.22 qm start 100 --stateuri tcp --skiplock 
 --migratedfrom cloud1-1202' failed: exit code 255 
 Dec 20 11:34:23 aborting phase 2 - cleanup resources 
 Dec 20 11:34:24 ERROR: migration finished with problems (duration 
 00:00:03) 
 TASK ERROR: migration problems 
 -- 
 
 
 Does it crash at start of the migration ? or in the middle of the 
 migration ? 
 
 At the beginning mostly i see no more output after: 
 migration listens on port 6 
 
 
 what is your vm conf ? (memory size, storage ?) 
 2GB mem, RBD / Ceph Storage 
 
 Stefan 
 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] migration problems since qemu 1.3

2012-12-20 Thread Alexandre DERUMIER
Hi Stefan, any news?

I'm trying to reproduce your problem, but it works fine for me, no crash...

----- Original Message ----- 

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 16:09:42 
Subject: Re: [pve-devel] migration problems since qemu 1.3 

Hi, 
Am 20.12.2012 15:57, schrieb Alexandre DERUMIER: 
 Just an idea (not sure it's the problem),can you try to commment 
 
 $qmpclient-queue_cmd($vmid, $ballooncb, 'query-balloon'); 
 
 in QemuServer.pm, line 2081. 
 
 and restart pvedaemon  pvestatd ? 

This doesn't change anything. 

Right now the kvm process is running on old and new machine. 

An strace on the pid on the new machine shows a loop of: 

 
[pid 28351] ... futex resumed ) = -1 ETIMEDOUT (Connection timed 
out) 
[pid 28351] futex(0x7ff8b8025388, FUTEX_WAKE_PRIVATE, 1) = 0 
[pid 28351] futex(0x7ff8b8026024, 
FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 11801, {1356016143, 
843092000},  unfinished ... 
[pid 28285] mremap(0x7ff77bfe4000, 160378880, 160411648, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160411648, 160448512, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160448512, 160481280, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160481280, 160514048, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160514048, 160546816, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160546816, 160583680, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160583680, 160616448, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160616448, 160649216, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160649216, 160681984, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160681984, 160718848, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160718848, 160751616, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160751616, 160784384, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160784384, 160817152, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160817152, 160854016, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160854016, 160886784, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160886784, 160919552, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160919552, 160952320, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160952320, 160989184, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 160989184, 161021952, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161021952, 161054720, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161054720, 161087488, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161087488, 161124352, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161124352, 161157120, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161157120, 161189888, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161189888, 161222656, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161222656, 161259520, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161259520, 161292288, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161292288, 161325056, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28351] ... futex resumed ) = -1 ETIMEDOUT (Connection timed 
out) 
[pid 28351] futex(0x7ff8b8025388, FUTEX_WAKE_PRIVATE, 1) = 0 
[pid 28351] futex(0x7ff8b8026024, 
FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 11803, {1356016144, 
843283000},  unfinished ... 
[pid 28285] mremap(0x7ff77bfe4000, 161325056, 161357824, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161357824, 161394688, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161394688, 161427456, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28285] mremap(0x7ff77bfe4000, 161427456, 161460224, MREMAP_MAYMOVE) 
= 0x7ff77bfe4000 
[pid 28345] ... restart_syscall resumed ) = -1 ETIMEDOUT (Connection 
timed out) 
[pid 28345] futex(0x7ff8caa2e274, FUTEX_CMP_REQUEUE_PRIVATE, 1, 
2147483647, 0x7ff8caa2e1b0, 872) = 1 
[pid 28347] ... futex resumed ) = 0 
[pid 28345] futex(0x7ff8caa241a8, FUTEX_WAKE_PRIVATE, 1 unfinished ... 
[pid 28347] futex(0x7ff8caa2e1b0, FUTEX_WAKE_PRIVATE, 1 unfinished ... 
[pid 28345] ... futex resumed ) = 0 
[pid 28347] ... futex resumed ) = 0 
[pid 28345] futex(0x7ff8caa2420c, 
FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 799, {1356016153, 
954319000},  unfinished ... 
[pid 28347] sendmsg(19, {msg_name(0)=NULL, msg_iov(1)=[{\t, 1}], 
msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 1 
[pid 28347] futex

Re: [pve-devel] [PATCH] auto balloning with mom algorithm implementation

2012-12-21 Thread Alexandre DERUMIER
I did some tests yesterday with around 10 VMs, overcommitting the host 
memory a lot (mixing Windows/Linux VMs),

and it seems to work fine.

I'll try to do more extensive tests next week.


----- Original Message ----- 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre Derumier aderum...@odiso.com, pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 14:02:38 
Subject: RE: [pve-devel] [PATCH] auto balloning with mom algorithm implementation 
Hi Alexandre, 

I have finally uploaded an auto-ballooning implementation for pvestatd. 

The algorithm uses the new 'shares' property to distribute RAM accordingly. 

Feel free to test :-) 

- Dietmar 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] auto balloning with mom algorithm implementation

2012-12-21 Thread Alexandre DERUMIER
The current algorithm has some bug - I will try to improve more. 

Not sure about that, but I have seen some VMs ballooning more than others (with the 
same shares value).


----- Original Message ----- 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Friday, 21 December 2012 09:21:13 
Subject: RE: [pve-devel] [PATCH] auto balloning with mom algorithm implementation 

 I have done some tests yesterday, with around 10vm, overcommiting the 
 host memory a lot, (mixing windows/linux vm) 
 
 and it's seem to works fine. 
 
 I'll try to do more extensive test next week. 

The current algorithm has some bug - I will try to improve more. 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-storage: new netapp pnfs cluster-mode plugin

2012-12-21 Thread Alexandre DERUMIER
Hi Dietmar,

Can you give me feedback about this plugin? I would really like to push it 
upstream.

Regards,

Alexandre


----- Original Message ----- 

From: Alexandre Derumier aderum...@odiso.com 
To: pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 09:09:41 
Subject: [pve-devel] pve-storage: new netapp pnfs cluster-mode plugin 

Hi Dietmar, 

This is a new storage plugin, which I need for one of my big customers. 

It is a plugin for NetApp clusters, using parallel NFS (pNFS; works perfectly 
with the latest pve-kernel). 

The plugin has been under testing for 2 months and is stable. 

I would like to have it upstream, because it is painful to maintain in my own 
branch. 


It requires the libxml-simple-perl Debian package, but pve-storage doesn't have any 
dependencies declared, so I don't know where to specify it. 


Details of the implementation are in the commit. 

Regards, 

Alexandre 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] migration problems since qemu 1.3

2012-12-21 Thread Alexandre DERUMIER
Hi Stefan,

I'll try to reproduce it, maybe qemu-devel can help too ?

I'll be offline until 26/12 (christmas).

Mery Xmas to all.

Alexandre

----- Original Message ----- 

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Friday, 21 December 2012 14:51:54 
Subject: Re: [pve-devel] migration problems since qemu 1.3 

Hi, 

Some more news: the kvm process is responsive again after cancelling the 
migration and waiting around 1-2 minutes. 

During these two minutes, the kvm process on the source host is 
running at 100% CPU. 

Greets, 
Stefan 
Am 21.12.2012 14:46, schrieb Stefan Priebe - Profihost AG: 
 
 This time it hangs at the first query-migrate: 
 -- 
 Dec 21 14:44:43 starting migration of VM 100 to node 'cloud1-1203' 
 (10.255.0.22) 
 Dec 21 14:44:43 copying disk images 
 Dec 21 14:44:43 starting VM 100 on remote node 'cloud1-1203' 
 Dec 21 14:44:46 starting migration tunnel 
 Dec 21 14:44:46 starting online/live migration on port 6 
 Dec 21 14:44:46 migrate-set-capabilities, capabilities = [HASH(0x3933ed0)] 
 Dec 21 14:44:46 migrate-set-cache-size, value = 429496729 
 Dec 21 14:44:46 start migrate tcp:localhost:6 
 Dec 21 14:44:48 query-migrate 
 --- 
 
 I can reproduce this by assign min. 4GB of memory to a machine and then 
 fill the buffers and cache by: 
 
 find / -type f -print | xargs cat > /dev/null 
 
 And then start a migrate. 
 
 Stefan 
 Am 21.12.2012 11:43, schrieb Stefan Priebe - Profihost AG: 
 Hi Alexandre, 
 
 i've added some debugging / logging code. 
 
 The output stops / hangs at query migrate. See here: 
 
 Dec 21 11:41:59 starting migration of VM 100 to node 'cloud1-1203' 
 (10.255.0.22) 
 Dec 21 11:41:59 copying disk images 
 Dec 21 11:41:59 starting VM 100 on remote node 'cloud1-1203' 
 Dec 21 11:42:02 starting migration tunnel 
 Dec 21 11:42:03 starting online/live migration on port 6 
 Dec 21 11:42:03 migrate-set-capabilities, capabilities = 
 [HASH(0x39a9fb0)] 
 Dec 21 11:42:03 migrate-set-cache-size, value = 429496729 
 Dec 21 11:42:03 start migrate tcp:localhost:6 
 Dec 21 11:42:05 query-migrate 
 Dec 21 11:42:05 migration status: active (transferred 468063329, 
 remaining 3764068352), total 4303814656) 
 Dec 21 11:42:07 query-migrate 
 
 I can't even ping the VM anymore. 
 
 Stefan 
 
 Am 21.12.2012 08:58, schrieb Alexandre DERUMIER: 
 Hi Stefan, any news ? 
 
 I'm trying to reproduce your problem, but it's works fine for me, no 
 crash... 
 
 - Mail original - 
 
 De: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
 À: Alexandre DERUMIER aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com 
 Envoyé: Jeudi 20 Décembre 2012 16:09:42 
 Objet: Re: [pve-devel] migration problems since qemu 1.3 
 
 Hi, 
 Am 20.12.2012 15:57, schrieb Alexandre DERUMIER: 
 Just an idea (not sure it's the problem),can you try to commment 
 
 $qmpclient-queue_cmd($vmid, $ballooncb, 'query-balloon'); 
 
 in QemuServer.pm, line 2081. 
 
 and restart pvedaemon  pvestatd ? 
 
 This doesn't change anything. 
 
 Right now the kvm process is running on old and new machine. 
 
 An strace on the pid on the new machine shows a loop of: 
 
  
 [pid 28351] ... futex resumed ) = -1 ETIMEDOUT (Connection timed 
 out) 
 [pid 28351] futex(0x7ff8b8025388, FUTEX_WAKE_PRIVATE, 1) = 0 
 [pid 28351] futex(0x7ff8b8026024, 
 FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 11801, {1356016143, 
 843092000},  unfinished ... 
 [pid 28285] mremap(0x7ff77bfe4000, 160378880, 160411648, MREMAP_MAYMOVE) 
 = 0x7ff77bfe4000 
 [pid 28285] mremap(0x7ff77bfe4000, 160411648, 160448512, MREMAP_MAYMOVE) 
 = 0x7ff77bfe4000 
 [pid 28285] mremap(0x7ff77bfe4000, 160448512, 160481280, MREMAP_MAYMOVE) 
 = 0x7ff77bfe4000 
 [pid 28285] mremap(0x7ff77bfe4000, 160481280, 160514048, MREMAP_MAYMOVE) 
 = 0x7ff77bfe4000 
 [pid 28285] mremap(0x7ff77bfe4000, 160514048, 160546816, MREMAP_MAYMOVE) 
 = 0x7ff77bfe4000 
 [pid 28285] mremap(0x7ff77bfe4000, 160546816, 160583680, MREMAP_MAYMOVE) 
 = 0x7ff77bfe4000 
 [pid 28285] mremap(0x7ff77bfe4000, 160583680, 160616448, MREMAP_MAYMOVE) 
 = 0x7ff77bfe4000 
 [pid 28285] mremap(0x7ff77bfe4000, 160616448, 160649216, MREMAP_MAYMOVE) 
 = 0x7ff77bfe4000 
 [pid 28285] mremap(0x7ff77bfe4000, 160649216, 160681984, MREMAP_MAYMOVE) 
 = 0x7ff77bfe4000 
 [pid 28285] mremap(0x7ff77bfe4000, 160681984, 160718848, MREMAP_MAYMOVE) 
 = 0x7ff77bfe4000 
 [pid 28285] mremap(0x7ff77bfe4000, 160718848, 160751616, MREMAP_MAYMOVE) 
 = 0x7ff77bfe4000 
 [pid 28285] mremap(0x7ff77bfe4000, 160751616, 160784384, MREMAP_MAYMOVE) 
 = 0x7ff77bfe4000 
 [pid 28285] mremap(0x7ff77bfe4000, 160784384, 160817152, MREMAP_MAYMOVE) 
 = 0x7ff77bfe4000 
 [pid 28285] mremap(0x7ff77bfe4000, 160817152, 160854016, MREMAP_MAYMOVE) 
 = 0x7ff77bfe4000 
 [pid 28285] mremap(0x7ff77bfe4000, 160854016, 160886784, MREMAP_MAYMOVE) 
 = 0x7ff77bfe4000 
 [pid

Re: [pve-devel] migration problems since qemu 1.3

2012-12-23 Thread Alexandre DERUMIER
maybe can you try to recompile pve-qemu-kvm without this patch :

include fix-off-by-1-error-in-RAM-migration-code.patch
https://git.proxmox.com/?p=pve-qemu-kvm.git;a=commit;h=e01e677960fbc6787f8358543047307fca67facb

this come from this git
http://git.qemu.org/?p=qemu.git;a=commit;h=7ec81e56edc2b2007ce0ae3982aa5c18af9546ab





- Mail original - 

De: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
À: Dietmar Maurer diet...@proxmox.com 
Cc: Alexandre DERUMIER aderum...@odiso.com, pve-devel@pve.proxmox.com 
Envoyé: Samedi 22 Décembre 2012 20:19:37 
Objet: Re: [pve-devel] migration problems since qemu 1.3 

Hi, 

 Please can we track the bug here: 
 https://bugzilla.proxmox.com/show_bug.cgi?id=298 

didn't know about the bug report. Great to see that i'm not the only one. 

 @Stefan: Does it work when the VM does not use and network device? 

No that changes nothing. I've removed all network devices and VM 
migration is still not working ;-( 

Mery XMAS. 

Greets, 
Stefan 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] migration problems since qemu 1.3

2012-12-23 Thread Alexandre DERUMIER
How much mem was in use? Did you try my suggestion with 
find / -type f -print | xargs cat > /dev/null 

I filled the memory buffers with a fio read benchmark (tested with 4GB and 16GB guests).

Storage tested: NFS and iSCSI (I don't have my RBD cluster available for now).










- Mail original - 

De: Stefan Priebe s.pri...@profihost.ag 
À: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com, Dietmar Maurer diet...@proxmox.com 
Envoyé: Dimanche 23 Décembre 2012 14:21:07 
Objet: Re: [pve-devel] migration problems since qemu 1.3 

Hi, 
Am 23.12.2012 14:18, schrieb Alexandre DERUMIER: 
 I have redone tests on my side, with linux and windows guests, vms with 4 - 
 16GB ram 
 I really can't reproduce your problem. 
 
 migration speed is around 400MB/S. 
How much mem was in use? Did you try my suggestion with 
find / -type f -print | xargs cat > /dev/null 

BEFORE you migrate? 

 maybe can you try the last qemu git version ? I see some big migration patchs 
 commited this last days 
Already tried that. No Change. 

Stefan 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Baloon Device is the problem! Re: migration problems since qemu 1.3

2012-12-25 Thread Alexandre DERUMIER
I can even start it with daemonize from shell. Migration works fine. It 
just doesn't work when started from PVE. 
This is crazy  I don't see any difference from starting it from shell or 
from pve

And if you remove the balloon device, migration is 100% working when starting from 
PVE?

Just to be sure, can you try to run 'info balloon' from the human monitor console? 
(I would like to see if the balloon driver is working correctly.)
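
The same check can also be done from the host side; a rough sketch, assuming the usual QMP helper from QemuServer.pm is available:

# rough sketch: query the balloon device over QMP from the host
my $info = PVE::QemuServer::vm_mon_cmd($vmid, 'query-balloon');
print "balloon actual size: $info->{actual} bytes\n";   # a missing or odd value hints at a guest driver problem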


- Mail original - 

De: Stefan Priebe s.pri...@profihost.ag 
À: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Envoyé: Mardi 25 Décembre 2012 10:05:10 
Objet: Re: [pve-devel] Baloon Device is the problem! Re: migration problems 
since qemu 1.3 

I can even start it with daemonize from shell. Migration works fine. It 
just doesn't work when started from PVE. 

Stefan 

Am 24.12.2012 15:48, schrieb Alexandre DERUMIER: 
 does it work if you keep device virtio-balloon enabled, 
 
 but comment in qemuserver.pm 
 
 line 3005 
 vm_mon_cmd_nocheck($vmid, 'qom-set', 
 path = machine/peripheral/balloon0, 
 property = stats-polling-interval, 
 value = 2); 
 
 
 and 
 
 line2081 
 $qmpclient-queue_cmd($vmid, $ballooncb, 'query-balloon'); 
 
 ? 
 
 - Mail original - 
 
 De: Alexandre DERUMIER aderum...@odiso.com 
 À: Stefan Priebe s.pri...@profihost.ag 
 Cc: pve-devel@pve.proxmox.com 
 Envoyé: Lundi 24 Décembre 2012 15:38:13 
 Objet: Re: [pve-devel] Baloon Device is the problem! Re: migration problems 
 since qemu 1.3 
 
 maybe it's related to qmp queries to balloon driver (for stats) during 
 migration ? 
 
 
 
 - Mail original - 
 
 De: Stefan Priebe s.pri...@profihost.ag 
 À: Dietmar Maurer diet...@proxmox.com 
 Cc: Alexandre DERUMIER aderum...@odiso.com, pve-devel@pve.proxmox.com 
 Envoyé: Lundi 24 Décembre 2012 15:32:52 
 Objet: Baloon Device is the problem! Re: [pve-devel] migration problems since 
 qemu 1.3 
 
 Hello, 
 
 it works fine / again if / when i remove the baloon pci device. 
 
 If i remove this line everything is fine again! 
 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 
 
 Greets 
 Stefan 
 Am 24.12.2012 15:05, schrieb Stefan Priebe: 
 
 Am 24.12.2012 14:08, schrieb Dietmar Maurer: 
 virtio0: 
 cephkvmpool1:vm-105-disk- 
 1,iops_rd=215,iops_wr=155,mbps_rd=130,mbps_wr=90,size=20G 
 
 Please can you also test without ceph? 
 
 The same. I now also tried a debian netboot cd (6.0.5) but then 32bit 
 doesn't work too. I had no disks attached at all. 
 
 I filled the tmpfs ramdisk under /dev with 
 dd if=/dev/urandom of=/dev/myfile bs=1M count=900 
 
 Greets, 
 Stefan 
 ___ 
 pve-devel mailing list 
 pve-devel@pve.proxmox.com 
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Baloon Device is the problem! Re: migration problems since qemu 1.3

2012-12-26 Thread Alexandre DERUMIER
I see that Dietmar has recently changed

vm_mon_cmd($vmid, "migrate_set_downtime", 
value => $migrate_downtime); }; 

to

vm_mon_cmd_nocheck($vmid, "migrate_set_downtime", 
value => $migrate_downtime); }; 

https://git.proxmox.com/?p=qemu-server.git;a=blobdiff;f=PVE/QemuServer.pm;h=165eaf6be6e5fe4b1c88454d28b113bc2b1f20af;hp=81a935176aca16e013fd6987f2ddbc72260092cf;hb=95381ce06cea266d40911a7129da6067a1640cbf;hpb=4bdb05142cfcef09495a45ffb256955f7b947caa


So maybe before, the migrate_set_downtime was not applied (because vm_mon_cmd 
checks whether the VM config file exists on the target).

Do you have any migrate_downtime parameter in your VM config? 
Because it shouldn't be sent if not:

my $migrate_downtime = $defaults->{migrate_downtime};
$migrate_downtime = $conf->{migrate_downtime} if defined($conf->{migrate_downtime});
if (defined($migrate_downtime)) {
    eval { vm_mon_cmd_nocheck($vmid, "migrate_set_downtime", value => $migrate_downtime); };
}

...
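
To make the difference explicit, the two helpers differ roughly like this (a simplified sketch, not the exact QemuServer.pm code):

# simplified sketch of the difference between the two helpers
sub vm_mon_cmd {
    my ($vmid, $execute, %params) = @_;
    # refuses to talk to the VM if it is not known/running on this node
    die "VM $vmid not running\n" if !check_running($vmid);
    return vm_qmp_command($vmid, { execute => $execute, arguments => \%params });
}

sub vm_mon_cmd_nocheck {
    my ($vmid, $execute, %params) = @_;
    # same call, but skips the check; needed on the target node while the
    # config file still lives on the source during a live migration
    return vm_qmp_command($vmid, { execute => $execute, arguments => \%params });
}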

- Mail original - 

De: Stefan Priebe s.pri...@profihost.ag 
À: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Envoyé: Mercredi 26 Décembre 2012 13:18:33 
Objet: Re: [pve-devel] Baloon Device is the problem! Re: migration problems 
since qemu 1.3 

Hi, 

the difference is in sub vm_start in QemuServer.pm. After the kvm 
process is started PVE sends some human monitor commands. 

If i comment this line: 
# eval { vm_mon_cmd_nocheck($vmid, migrate_set_downtime, 
value = $migrate_downtime); }; 

Everything works fine and everything is OK again. 

I also get this in my logs but i checked $migrate_downtime is 1 so it IS 
a NUMBER: 

Dec 26 13:09:27 cloud1-1202 qm[8726]: VM 105 qmp command failed - VM 105 
qmp command 'migrate_set_downtime' failed - Invalid parameter type for 
'value', expected: number 

Stefan 
Am 26.12.2012 07:45, schrieb Alexandre DERUMIER: 
 I can even start it with daemonize from shell. Migration works fine. It 
 just doesn't work when started from PVE. 
 This is crazy  I don't see any difference from starting it from shell or 
 from pve 
 
 And if your remove the balloon device, migration is 100% working, starting 
 from pve ? 
 
 just to be sure, can you try to do info balloon from human monitor console 
 ? (I would like to see if the balloon driver is correctly working) 
 
 
 - Mail original - 
 
 De: Stefan Priebe s.pri...@profihost.ag 
 À: Alexandre DERUMIER aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com 
 Envoyé: Mardi 25 Décembre 2012 10:05:10 
 Objet: Re: [pve-devel] Baloon Device is the problem! Re: migration problems 
 since qemu 1.3 
 
 I can even start it with daemonize from shell. Migration works fine. It 
 just doesn't work when started from PVE. 
 
 Stefan 
 
 Am 24.12.2012 15:48, schrieb Alexandre DERUMIER: 
 does it work if you keep device virtio-balloon enabled, 
 
 but comment in qemuserver.pm 
 
 line 3005 
 vm_mon_cmd_nocheck($vmid, 'qom-set', 
 path = machine/peripheral/balloon0, 
 property = stats-polling-interval, 
 value = 2); 
 
 
 and 
 
 line2081 
 $qmpclient-queue_cmd($vmid, $ballooncb, 'query-balloon'); 
 
 ? 
 
 - Mail original - 
 
 De: Alexandre DERUMIER aderum...@odiso.com 
 À: Stefan Priebe s.pri...@profihost.ag 
 Cc: pve-devel@pve.proxmox.com 
 Envoyé: Lundi 24 Décembre 2012 15:38:13 
 Objet: Re: [pve-devel] Baloon Device is the problem! Re: migration problems 
 since qemu 1.3 
 
 maybe it's related to qmp queries to balloon driver (for stats) during 
 migration ? 
 
 
 
 - Mail original - 
 
 De: Stefan Priebe s.pri...@profihost.ag 
 À: Dietmar Maurer diet...@proxmox.com 
 Cc: Alexandre DERUMIER aderum...@odiso.com, pve-devel@pve.proxmox.com 
 Envoyé: Lundi 24 Décembre 2012 15:32:52 
 Objet: Baloon Device is the problem! Re: [pve-devel] migration problems 
 since qemu 1.3 
 
 Hello, 
 
 it works fine / again if / when i remove the baloon pci device. 
 
 If i remove this line everything is fine again! 
 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 
 
 Greets 
 Stefan 
 Am 24.12.2012 15:05, schrieb Stefan Priebe: 
 
 Am 24.12.2012 14:08, schrieb Dietmar Maurer: 
 virtio0: 
 cephkvmpool1:vm-105-disk- 
 1,iops_rd=215,iops_wr=155,mbps_rd=130,mbps_wr=90,size=20G 
 
 Please can you also test without ceph? 
 
 The same. I now also tried a debian netboot cd (6.0.5) but then 32bit 
 doesn't work too. I had no disks attached at all. 
 
 I filled the tmpfs ramdisk under /dev with 
 dd if=/dev/urandom of=/dev/myfile bs=1M count=900 
 
 Greets, 
 Stefan 
 ___ 
 pve-devel mailing list 
 pve-devel@pve.proxmox.com 
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Baloon Device is the problem! Re: migration problems since qemu 1.3

2012-12-26 Thread Alexandre DERUMIER
The migrate_downtime = 1 comes from the defaults in QemuServer.pm:


migrate_downtime => {
    optional => 1,
    type => 'integer',
    description => "Set maximum tolerated downtime (in seconds) for migrations.",
    minimum => 0,
    default => 1,   # <-- DEFAULT VALUE
},


I don't know if we really need a default value, because then it always sets 
migrate_downtime to 1. 


Now, I don't know what really happens for you, because the recent change can set 
migrate_downtime on the target VM (vm_mon_cmd_nocheck). 
But I don't think it does anything, because migrate_set_downtime should be 
done on the source VM.

Can you try to replace vm_mon_cmd_nocheck with vm_mon_cmd? (So it should only work 
at vm_start, but not when a live migration starts the target VM.)


Also, migrate_downtime should be set on the source VM before the migration begins 
(QemuMigrate.pm). I don't know why we are setting it at vm start.


- Mail original - 

De: Stefan Priebe s.pri...@profihost.ag 
À: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Envoyé: Mercredi 26 Décembre 2012 13:18:33 
Objet: Re: [pve-devel] Baloon Device is the problem! Re: migration problems 
since qemu 1.3 

Hi, 

the difference is in sub vm_start in QemuServer.pm. After the kvm 
process is started PVE sends some human monitor commands. 

If i comment this line: 
# eval { vm_mon_cmd_nocheck($vmid, migrate_set_downtime, 
value = $migrate_downtime); }; 

Everything works fine and everything is OK again. 

I also get this in my logs but i checked $migrate_downtime is 1 so it IS 
a NUMBER: 

Dec 26 13:09:27 cloud1-1202 qm[8726]: VM 105 qmp command failed - VM 105 
qmp command 'migrate_set_downtime' failed - Invalid parameter type for 
'value', expected: number 

Stefan 
Am 26.12.2012 07:45, schrieb Alexandre DERUMIER: 
 I can even start it with daemonize from shell. Migration works fine. It 
 just doesn't work when started from PVE. 
 This is crazy  I don't see any difference from starting it from shell or 
 from pve 
 
 And if your remove the balloon device, migration is 100% working, starting 
 from pve ? 
 
 just to be sure, can you try to do info balloon from human monitor console 
 ? (I would like to see if the balloon driver is correctly working) 
 
 
 - Mail original - 
 
 De: Stefan Priebe s.pri...@profihost.ag 
 À: Alexandre DERUMIER aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com 
 Envoyé: Mardi 25 Décembre 2012 10:05:10 
 Objet: Re: [pve-devel] Baloon Device is the problem! Re: migration problems 
 since qemu 1.3 
 
 I can even start it with daemonize from shell. Migration works fine. It 
 just doesn't work when started from PVE. 
 
 Stefan 
 
 Am 24.12.2012 15:48, schrieb Alexandre DERUMIER: 
 does it work if you keep device virtio-balloon enabled, 
 
 but comment in qemuserver.pm 
 
 line 3005 
 vm_mon_cmd_nocheck($vmid, 'qom-set', 
 path = machine/peripheral/balloon0, 
 property = stats-polling-interval, 
 value = 2); 
 
 
 and 
 
 line2081 
 $qmpclient-queue_cmd($vmid, $ballooncb, 'query-balloon'); 
 
 ? 
 
 - Mail original - 
 
 De: Alexandre DERUMIER aderum...@odiso.com 
 À: Stefan Priebe s.pri...@profihost.ag 
 Cc: pve-devel@pve.proxmox.com 
 Envoyé: Lundi 24 Décembre 2012 15:38:13 
 Objet: Re: [pve-devel] Baloon Device is the problem! Re: migration problems 
 since qemu 1.3 
 
 maybe it's related to qmp queries to balloon driver (for stats) during 
 migration ? 
 
 
 
 - Mail original - 
 
 De: Stefan Priebe s.pri...@profihost.ag 
 À: Dietmar Maurer diet...@proxmox.com 
 Cc: Alexandre DERUMIER aderum...@odiso.com, pve-devel@pve.proxmox.com 
 Envoyé: Lundi 24 Décembre 2012 15:32:52 
 Objet: Baloon Device is the problem! Re: [pve-devel] migration problems 
 since qemu 1.3 
 
 Hello, 
 
 it works fine / again if / when i remove the baloon pci device. 
 
 If i remove this line everything is fine again! 
 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 
 
 Greets 
 Stefan 
 Am 24.12.2012 15:05, schrieb Stefan Priebe: 
 
 Am 24.12.2012 14:08, schrieb Dietmar Maurer: 
 virtio0: 
 cephkvmpool1:vm-105-disk- 
 1,iops_rd=215,iops_wr=155,mbps_rd=130,mbps_wr=90,size=20G 
 
 Please can you also test without ceph? 
 
 The same. I now also tried a debian netboot cd (6.0.5) but then 32bit 
 doesn't work too. I had no disks attached at all. 
 
 I filled the tmpfs ramdisk under /dev with 
 dd if=/dev/urandom of=/dev/myfile bs=1M count=900 
 
 Greets, 
 Stefan 
 ___ 
 pve-devel mailing list 
 pve-devel@pve.proxmox.com 
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Baloon Device is the problem! Re: migration problems since qemu 1.3

2012-12-26 Thread Alexandre DERUMIER
It also isn't accepted you get the answer back that 1 isn't a number. 
Don't know what format a number needs? 

The qemu default migrate_downtime is 30ms (if we don't send the qmp 
command).
I think we set 1 sec by default because of never-ending migrations (30ms was too 
short in the past with workloads that change memory quickly).
I see that the latest migration code from qemu git (1.4) seems to improve the 
downtime a lot (from ~500ms down to ~30ms) with such workloads.
I don't know if qemu 1.3 works fine without setting the downtime to 1 sec. 

I think we need to cast the value to int for the JSON encoding:

 vm_mon_cmd($vmid, "migrate_set_downtime", value => $migrate_downtime);

->

 vm_mon_cmd($vmid, "migrate_set_downtime", value => int($migrate_downtime));


I remember the same problem with qemu_block_set_io_throttle():
 vm_mon_cmd($vmid, "block_set_io_throttle", device => $deviceid, bps => 
int($bps), bps_rd => int($bps_rd), bps_wr => int($bps_wr), iops => int($iops), 
iops_rd => int($iops_rd), iops_wr => int($iops_wr));

So maybe it sends garbage if the value is not cast?


Also, the value should not be an int but a float; the qmp doc says that we can use 0.5, 
0.30, etc. as the value.
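
The reason the cast matters is how the Perl JSON module decides between emitting a string and a number; a small standalone illustration (only the JSON module is assumed):

use JSON;

my $downtime = "1";   # values read from the config file arrive as strings
print encode_json({ value => $downtime }), "\n";        # {"value":"1"}  rejected by qemu: expected number
print encode_json({ value => int($downtime) }), "\n";   # {"value":1}    accepted, but int() truncates 0.5
print encode_json({ value => $downtime + 0 }), "\n";    # {"value":1}    accepted, and +0 keeps floats intact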



Also, query-migrate returns 2 cool new values about downtime; I think we 
should display them in the query-migrate log:

- downtime: only present when migration has finished correctly;
  total amount in ms of downtime that happened (json-int)
- expected-downtime: only present while migration is active;
  total amount in ms of downtime that was calculated on
  the last bitmap round (json-int)

- Mail original - 

De: Stefan Priebe s.pri...@profihost.ag 
À: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Envoyé: Mercredi 26 Décembre 2012 20:52:56 
Objet: Re: [pve-devel] Baloon Device is the problem! Re: migration problems 
since qemu 1.3 

Hi, 

Am 26.12.2012 17:40, schrieb Alexandre DERUMIER: 
 I don't know if we really need a default value, because it's always setting 
 migrate_downtime to 1. 
It also isn't accepted you get the answer back that 1 isn't a number. 
Don't know what format a number needs? 

 Now, I don't know what really happen to you, because recent changes can set 
 migrate_downtime to the target vm (vm_mon_cmd_nocheck) 
 But I don't think it's doing something because the migrate_downtime should be 
 done one sourcevm. 
You get the error message that 1 isn't a number. If i get this message 
migration fails after. 

 Can you try to replace vm_mon_cmd_nocheck by vm_mon_cmd ? (So it should works 
 only at vm_start but not when live migrate occur on target vm) 
Done - works see my other post. 

Stefan 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH V2] - fix setting migration parameters

2012-12-26 Thread Alexandre DERUMIER
Some comments:

 - default => 1, 
 + default => 0, 

Not sure about "lower default migration downtime value to 0",

because a downtime of 0 is nearly impossible to reach. 

The default qemu value is 0.030, so maybe we can simply remove

    default => 0, 

and not send any qmp command by default.



+ PVE::QemuServer::vm_mon_cmd_nocheck($vmid, "migrate_set_downtime", value => 
$migrate_downtime*1); 
+ PVE::QemuServer::vm_mon_cmd_nocheck($vmid, "migrate_set_speed", value => 
$migrate_speed*1); 

try

+ PVE::QemuServer::vm_mon_cmd_nocheck($vmid, "migrate_set_downtime", value => 
int($migrate_downtime)); 
+ PVE::QemuServer::vm_mon_cmd_nocheck($vmid, "migrate_set_speed", value => 
int($migrate_speed)); 
 (cleaner)


- Mail original - 

De: Stefan Priebe s.pri...@profihost.ag 
À: pve-devel@pve.proxmox.com 
Envoyé: Mercredi 26 Décembre 2012 23:17:56 
Objet: [pve-devel] [PATCH V2] - fix setting migration parameters 

- move migration speed/downtime from QemuServer vm_start to 
QemuMigrate phase2 
- lower default migration downtime value to 0 

--- 
PVE/QemuMigrate.pm | 30 ++ 
PVE/QemuServer.pm | 17 + 
2 files changed, 27 insertions(+), 20 deletions(-) 

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm 
index 0711681..af8813c 100644 
--- a/PVE/QemuMigrate.pm 
+++ b/PVE/QemuMigrate.pm 
@@ -323,24 +323,46 @@ sub phase2 { 
$self-{tunnel} = $self-fork_tunnel($self-{nodeip}, $lport, $rport); 

$self-log('info', starting online/live migration on port $lport); 
- # start migration 

- my $start = time(); 
+ # load_defaults 
+ my $defaults = PVE::QemuServer::load_defaults(); 
+ 
+ # always set migrate speed (overwrite kvm default of 32m) 
+ # we set a very hight default of 8192m which is basically unlimited 
+ my $migrate_speed = $defaults-{migrate_speed} || 8192; 
+ $migrate_speed = $conf-{migrate_speed} || $migrate_speed; 
+ $migrate_speed = $migrate_speed * 1048576; 
+ $self-log('info', migrate_set_speed: $migrate_speed); 
+ eval { 
+ # *1 ensures that JSON module convert the value to number 
+ PVE::QemuServer::vm_mon_cmd_nocheck($vmid, migrate_set_speed, value = 
$migrate_speed*1); 
+ }; 
+ $self-log('info', migrate_set_speed error: $@) if $@; 
+ 
+ my $migrate_downtime = $defaults-{migrate_downtime}; 
+ $migrate_downtime = $conf-{migrate_downtime} if 
defined($conf-{migrate_downtime}); 
+ $self-log('info', migrate_set_downtime: $migrate_downtime); 
+ eval { 
+ # *1 ensures that JSON module convert the value to number 
+ PVE::QemuServer::vm_mon_cmd_nocheck($vmid, migrate_set_downtime, value = 
$migrate_downtime*1); 
+ }; 
+ $self-log('info', migrate_set_downtime error: $@) if $@; 

my $capabilities = {}; 
$capabilities-{capability} = xbzrle; 
$capabilities-{state} = JSON::false; 
- 
eval { 
PVE::QemuServer::vm_mon_cmd_nocheck($vmid, migrate-set-capabilities, 
capabilities = [$capabilities]); 
}; 

- #set cachesize 10% of the total memory 
+ # set cachesize 10% of the total memory 
my $cachesize = int($conf-{memory}*1048576/10); 
eval { 
PVE::QemuServer::vm_mon_cmd_nocheck($vmid, migrate-set-cache-size, value = 
$cachesize); 
}; 

+ # start migration 
+ my $start = time(); 
eval { 
PVE::QemuServer::vm_mon_cmd_nocheck($vmid, migrate, uri = 
tcp:localhost:$lport); 
}; 
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm 
index 1d4c275..bb7fd16 100644 
--- a/PVE/QemuServer.pm 
+++ b/PVE/QemuServer.pm 
@@ -385,7 +385,7 @@ EODESCR 
type = 'integer', 
description = Set maximum tolerated downtime (in seconds) for migrations., 
minimum = 0, 
- default = 1, 
+ default = 0, 
}, 
cdrom = { 
optional = 1, 
@@ -2979,21 +2979,6 @@ sub vm_start { 
warn $@ if $@; 
} 

- # always set migrate speed (overwrite kvm default of 32m) 
- # we set a very hight default of 8192m which is basically unlimited 
- my $migrate_speed = $defaults-{migrate_speed} || 8192; 
- $migrate_speed = $conf-{migrate_speed} || $migrate_speed; 
- $migrate_speed = $migrate_speed * 1048576; 
- eval { 
- vm_mon_cmd_nocheck($vmid, migrate_set_speed, value = $migrate_speed); 
- }; 
- 
- my $migrate_downtime = $defaults-{migrate_downtime}; 
- $migrate_downtime = $conf-{migrate_downtime} if 
defined($conf-{migrate_downtime}); 
- if (defined($migrate_downtime)) { 
- eval { vm_mon_cmd_nocheck($vmid, migrate_set_downtime, value = 
$migrate_downtime); }; 
- } 
- 
if($migratedfrom) { 
my $capabilities = {}; 
$capabilities-{capability} = xbzrle; 
-- 
1.7.10.4 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] - fix setting migration parameters

2012-12-26 Thread Alexandre Derumier
From: Stefan Priebe s.pri...@profihost.ag

 - move migration speed/downtime from QemuServer vm_start to
   QemuMigrate phase2
 - lower default migration downtime value to 0

changelog by aderumier

 - remove default value of 1s for migrate_downtime
 - add logs downtime and expected downtime migration stats
 - only send qmp migrate_downtime if migrate_downtime is defined
 - add errors logs on qm start of target vm.
 - cast int() for json values

tested with youtube video playing, no qmp migrate_downtime (default of 30ms), 
the downtime is around 500-600ms.
---
 PVE/QemuMigrate.pm |   43 +++
 PVE/QemuServer.pm  |   16 
 2 files changed, 35 insertions(+), 24 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 0711681..dbbeb69 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -312,7 +312,10 @@ sub phase2 {
if ($line =~ m/^migration listens on port (\d+)$/) {
$rport = $1;
}
-}, errfunc = sub {});
+}, errfunc = sub {
+   my $line = shift;
+   print $line.\n;
+});
 
 die unable to detect remote migration port\n if !$rport;
 
@@ -323,24 +326,46 @@ sub phase2 {
 $self-{tunnel} = $self-fork_tunnel($self-{nodeip}, $lport, $rport);
 
 $self-log('info', starting online/live migration on port $lport);
-# start migration
 
-my $start = time();
+# load_defaults
+my $defaults = PVE::QemuServer::load_defaults();
+
+# always set migrate speed (overwrite kvm default of 32m)
+# we set a very hight default of 8192m which is basically unlimited
+my $migrate_speed = $defaults-{migrate_speed} || 8192;
+$migrate_speed = $conf-{migrate_speed} || $migrate_speed;
+$migrate_speed = $migrate_speed * 1048576;
+$self-log('info', migrate_set_speed: $migrate_speed);
+eval {
+PVE::QemuServer::vm_mon_cmd_nocheck($vmid, migrate_set_speed, value 
= int($migrate_speed));
+};
+$self-log('info', migrate_set_speed error: $@) if $@;
+
+my $migrate_downtime = $defaults-{migrate_downtime};
+$migrate_downtime = $conf-{migrate_downtime} if 
defined($conf-{migrate_downtime});
+if (defined($migrate_downtime)) {
+   $self-log('info', migrate_set_downtime: $migrate_downtime);
+   eval {
+   PVE::QemuServer::vm_mon_cmd_nocheck($vmid, migrate_set_downtime, 
value = int($migrate_downtime));
+   };
+   $self-log('info', migrate_set_downtime error: $@) if $@;
+}
 
 my $capabilities = {};
 $capabilities-{capability} =  xbzrle;
 $capabilities-{state} = JSON::false;
-
 eval {
PVE::QemuServer::vm_mon_cmd_nocheck($vmid, migrate-set-capabilities, 
capabilities = [$capabilities]);
 };
 
-#set cachesize 10% of the total memory
+# set cachesize 10% of the total memory
 my $cachesize = int($conf-{memory}*1048576/10);
 eval {
PVE::QemuServer::vm_mon_cmd_nocheck($vmid, migrate-set-cache-size, 
value = $cachesize);
 };
 
+# start migration
+my $start = time();
 eval {
 PVE::QemuServer::vm_mon_cmd_nocheck($vmid, migrate, uri = 
tcp:localhost:$lport);
 };
@@ -353,7 +378,6 @@ sub phase2 {
 while (1) {
$i++;
my $avglstat = $lstat/$i if $lstat;
-
usleep($usleep);
my $stat;
eval {
@@ -375,7 +399,8 @@ sub phase2 {
my $delay = time() - $start;
if ($delay  0) {
my $mbps = sprintf %.2f, $conf-{memory}/$delay;
-   $self-log('info', migration speed: $mbps MB/s);
+   $self-log('info', migration speed: $mbps MB/s - downtime 
$stat-{downtime} ms);
+   
}
}
 
@@ -397,11 +422,13 @@ sub phase2 {
my $xbzrlepages = $stat-{xbzrle-cache}-{pages} || 0;
my $xbzrlecachemiss = $stat-{xbzrle-cache}-{cache-miss} 
|| 0;
my $xbzrleoverflow = $stat-{xbzrle-cache}-{overflow} || 0;
+   my $expected_downtime = $stat-{expected-downtime} || 0;
+
#reduce sleep if remainig memory if lower than the everage 
transfert 
$usleep = 30 if $avglstat  $rem  $avglstat;
 
$self-log('info', migration status: $stat-{status} 
(transferred ${trans},  .
-  remaining ${rem}), total ${total}));
+  remaining ${rem}), total ${total}, expected 
downtime ${expected_downtime}));
 
#$self-log('info', migration xbzrle cachesize: 
${xbzrlecachesize} transferred ${xbzrlebytes} pages ${xbzrlepages} cachemiss 
${xbzrlecachemiss} overflow ${xbzrleoverflow});
}
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 165eaf6..b168c74 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -385,7 +385,6 @@ EODESCR
type = 'integer',
description = Set maximum tolerated downtime (in seconds) for 
migrations.,
minimum = 0,
-   default = 

Re: [pve-devel] [PATCH V2] - fix setting migration parameters

2012-12-26 Thread Alexandre DERUMIER
 Cleaner code. Nobody expects that those values were set at vm start. 

We set that since version 1.0 - so 'everybody' expect that this value is set 
at startup. 

I agree with Stefan; this value should be set before migration.

Currently, we can't change migrate_downtime without restarting the VM.

(Example:
 I have a migration which takes too much time because of a too-low migrate_downtime 
value, 
 I stop the migration, 
 I increase migrate_downtime, 
 I restart the migration.)



- Mail original - 

De: Dietmar Maurer diet...@proxmox.com 
À: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
Cc: pve-devel@pve.proxmox.com 
Envoyé: Jeudi 27 Décembre 2012 07:27:29 
Objet: Re: [pve-devel] [PATCH V2] - fix setting migration parameters 

 Cleaner code. Nobody expects that those values were set at vm start. 

We set that since version 1.0 - so 'everybody' expect that this value is set at 
startup. 

  - lower default migration downtime value to 0 
  
  We do not want that, because this is known to cause long wait times on 
 busy VMs. 
 
 A value of 1 does not work for me. Whole vm stalls immediately and socket 
 is unavailable. Migration than takes 5-10 minutes of an idle vm. 

So this is a qemu bug - we should try to fix that. 

Else any VM which use migrate_downtime is broken! 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] - fix setting migration parameters

2012-12-26 Thread Alexandre DERUMIER
 
 changelog by aderumier 
 
 - remove default value of 1s for migrate_downtime 

why? - we added that because the default cause seroius problems! 
 This was because of Stefan's problem, and qemu 1.3 seems to do migration fine 
with the default 30ms.
 What blocks the monitor exactly? A too-low migrate_downtime?

 - add logs downtime and expected downtime migration stats 

Please can you send an extra patch for that? 
sure no problem

 - only send qmp migrate_downtime if migrate_downtime is defined 

I want to have that as default (set on startup). 

 - add errors logs on qm start of target vm. 

great, but can we have an extra commit for that? Those changes seems to be 
unrelated. 
sure no problem


- Mail original - 

De: Dietmar Maurer diet...@proxmox.com 
À: Alexandre Derumier aderum...@odiso.com, pve-devel@pve.proxmox.com 
Envoyé: Jeudi 27 Décembre 2012 07:21:38 
Objet: RE: [pve-devel] [PATCH] - fix setting migration parameters 

 - move migration speed/downtime from QemuServer vm_start to 
 QemuMigrate phase2 

This is not needed - or why do we need that? 

 - lower default migration downtime value to 0 
 
 changelog by aderumier 
 
 - remove default value of 1s for migrate_downtime 

why? - we added that because the default cause seroius problems! 

 - add logs downtime and expected downtime migration stats 

Please can you send an extra patch for that? 

 - only send qmp migrate_downtime if migrate_downtime is defined 

I want to have that as default (set on startup). 

 - add errors logs on qm start of target vm. 

great, but can we have an extra commit for that? Those changes seems to be 
unrelated. 

 - cast int() for json values 
 
 tested with youtube video playing, no qmp migrate_downtime (default of 
 30ms), the downtime is around 500-600ms. 
 --- 
 PVE/QemuMigrate.pm | 43 +++--- 
 - 
 PVE/QemuServer.pm | 16  
 2 files changed, 35 insertions(+), 24 deletions(-) 
 
 diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index 
 0711681..dbbeb69 100644 
 --- a/PVE/QemuMigrate.pm 
 +++ b/PVE/QemuMigrate.pm 
 @@ -312,7 +312,10 @@ sub phase2 { 
 if ($line =~ m/^migration listens on port (\d+)$/) { 
 $rport = $1; 
 } 
 - }, errfunc = sub {}); 
 + }, errfunc = sub { 
 + my $line = shift; 
 + print $line.\n; 
 + }); 
 
 die unable to detect remote migration port\n if !$rport; 
 
 @@ -323,24 +326,46 @@ sub phase2 { 
 $self-{tunnel} = $self-fork_tunnel($self-{nodeip}, $lport, $rport); 
 
 $self-log('info', starting online/live migration on port $lport); 
 - # start migration 
 
 - my $start = time(); 
 + # load_defaults 
 + my $defaults = PVE::QemuServer::load_defaults(); 
 + 
 + # always set migrate speed (overwrite kvm default of 32m) 
 + # we set a very hight default of 8192m which is basically unlimited 
 + my $migrate_speed = $defaults-{migrate_speed} || 8192; 
 + $migrate_speed = $conf-{migrate_speed} || $migrate_speed; 
 + $migrate_speed = $migrate_speed * 1048576; 
 + $self-log('info', migrate_set_speed: $migrate_speed); 
 + eval { 
 + PVE::QemuServer::vm_mon_cmd_nocheck($vmid, 
 migrate_set_speed, value = int($migrate_speed)); 
 + }; 
 + $self-log('info', migrate_set_speed error: $@) if $@; 
 + 
 + my $migrate_downtime = $defaults-{migrate_downtime}; 
 + $migrate_downtime = $conf-{migrate_downtime} if defined($conf- 
 {migrate_downtime}); 
 + if (defined($migrate_downtime)) { 
 + $self-log('info', migrate_set_downtime: $migrate_downtime); 
 + eval { 
 + PVE::QemuServer::vm_mon_cmd_nocheck($vmid, 
 migrate_set_downtime, value = int($migrate_downtime)); 
 + }; 
 + $self-log('info', migrate_set_downtime error: $@) if $@; 
 + } 
 
 my $capabilities = {}; 
 $capabilities-{capability} = xbzrle; 
 $capabilities-{state} = JSON::false; 
 - 
 eval { 
 PVE::QemuServer::vm_mon_cmd_nocheck($vmid, migrate-set- 
 capabilities, capabilities = [$capabilities]); 
 }; 
 
 - #set cachesize 10% of the total memory 
 + # set cachesize 10% of the total memory 
 my $cachesize = int($conf-{memory}*1048576/10); 
 eval { 
 PVE::QemuServer::vm_mon_cmd_nocheck($vmid, migrate-set- 
 cache-size, value = $cachesize); 
 }; 
 
 + # start migration 
 + my $start = time(); 
 eval { 
 PVE::QemuServer::vm_mon_cmd_nocheck($vmid, migrate, uri = 
 tcp:localhost:$lport); 
 }; 
 @@ -353,7 +378,6 @@ sub phase2 { 
 while (1) { 
 $i++; 
 my $avglstat = $lstat/$i if $lstat; 
 - 
 usleep($usleep); 
 my $stat; 
 eval { 
 @@ -375,7 +399,8 @@ sub phase2 { 
 my $delay = time() - $start; 
 if ($delay  0) { 
 my $mbps = sprintf %.2f, $conf-{memory}/$delay; 
 - $self-log('info', migration speed: $mbps MB/s); 
 + $self-log('info', migration speed: $mbps MB/s - 
 downtime 
 +$stat-{downtime} ms); 
 + 
 } 
 } 
 
 @@ -397,11 +422,13 @@ sub phase2 { 
 my $xbzrlepages = $stat-{xbzrle-cache}-{pages} || 0; 
 my $xbzrlecachemiss = $stat-{xbzrle-cache}-{cache- 
 miss} || 0; 
 my $xbzrleoverflow = $stat-{xbzrle-cache}-{overflow} 
 || 0; 
 + my $expected_downtime = $stat-{expected-downtime} || 
 0

[pve-devel] [PATCH 3/3] add downtime expected_downtime query-migrate info

2012-12-27 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuMigrate.pm |6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 282cbc5..38f1d05 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -402,7 +402,8 @@ sub phase2 {
my $delay = time() - $start;
if ($delay  0) {
my $mbps = sprintf %.2f, $conf-{memory}/$delay;
-   $self-log('info', migration speed: $mbps MB/s);
+   my $downtime = $stat-{downtime} || 0;
+   $self-log('info', migration speed: $mbps MB/s - downtime 
$downtime ms);
}
}
 
@@ -424,11 +425,12 @@ sub phase2 {
my $xbzrlepages = $stat-{xbzrle-cache}-{pages} || 0;
my $xbzrlecachemiss = $stat-{xbzrle-cache}-{cache-miss} 
|| 0;
my $xbzrleoverflow = $stat-{xbzrle-cache}-{overflow} || 0;
+   my $expected_downtime = $stat-{expected-downtime} || 0;
#reduce sleep if remainig memory if lower than the everage 
transfert 
$usleep = 30 if $avglstat  $rem  $avglstat;
 
$self-log('info', migration status: $stat-{status} 
(transferred ${trans},  .
-  remaining ${rem}), total ${total}));
+  remaining ${rem}), total ${total}) , expected 
downtime ${expected_downtime});
 
#$self-log('info', migration xbzrle cachesize: 
${xbzrlecachesize} transferred ${xbzrlebytes} pages ${xbzrlepages} cachemiss 
${xbzrlecachemiss} overflow ${xbzrleoverflow});
}
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 1/3] move qmp migrate_set_down migrate_set_speed to qemumigrate

2012-12-27 Thread Alexandre Derumier
so we can set the values when the vm is running
also use int() to get json working

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuMigrate.pm |   24 
 PVE/QemuServer.pm  |   15 ---
 2 files changed, 24 insertions(+), 15 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 0711681..9ca8f87 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -327,6 +327,30 @@ sub phase2 {
 
 my $start = time();
 
+# load_defaults
+my $defaults = PVE::QemuServer::load_defaults();
+
+# always set migrate speed (overwrite kvm default of 32m)
+# we set a very hight default of 8192m which is basically unlimited
+my $migrate_speed = $defaults-{migrate_speed} || 8192;
+$migrate_speed = $conf-{migrate_speed} || $migrate_speed;
+$migrate_speed = $migrate_speed * 1048576;
+$self-log('info', migrate_set_speed: $migrate_speed);
+eval {
+PVE::QemuServer::vm_mon_cmd_nocheck($vmid, migrate_set_speed, value 
= int($migrate_speed));
+};
+$self-log('info', migrate_set_speed error: $@) if $@;
+
+my $migrate_downtime = $defaults-{migrate_downtime};
+$migrate_downtime = $conf-{migrate_downtime} if 
defined($conf-{migrate_downtime});
+if (defined($migrate_downtime)) {
+   $self-log('info', migrate_set_downtime: $migrate_downtime);
+   eval {
+   PVE::QemuServer::vm_mon_cmd_nocheck($vmid, migrate_set_downtime, 
value = int($migrate_downtime));
+   };
+   $self-log('info', migrate_set_downtime error: $@) if $@;
+}
+
 my $capabilities = {};
 $capabilities-{capability} =  xbzrle;
 $capabilities-{state} = JSON::false;
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 165eaf6..92c7db7 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2976,21 +2976,6 @@ sub vm_start {
warn $@ if $@;
}
 
-   # always set migrate speed (overwrite kvm default of 32m)
-   # we set a very hight default of 8192m which is basically unlimited
-   my $migrate_speed = $defaults-{migrate_speed} || 8192;
-   $migrate_speed = $conf-{migrate_speed} || $migrate_speed;
-   $migrate_speed = $migrate_speed * 1048576;
-   eval {
-   vm_mon_cmd_nocheck($vmid, migrate_set_speed, value = 
$migrate_speed);
-   };
-
-   my $migrate_downtime = $defaults-{migrate_downtime};
-   $migrate_downtime = $conf-{migrate_downtime} if 
defined($conf-{migrate_downtime});
-   if (defined($migrate_downtime)) {
-   eval { vm_mon_cmd_nocheck($vmid, migrate_set_downtime, value = 
$migrate_downtime); };
-   }
-
if($migratedfrom) {
my $capabilities = {};
$capabilities-{capability} =  xbzrle;
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] fix setting migration parameters V3

2012-12-27 Thread Alexandre DERUMIER
I have resent the patches, split up.


Note that with migrate_downtime = 1s, I got around 1500ms of real downtime,

while with the default value of 30ms, I got around 500ms of real downtime.


(The ~500ms of overhead seems to be fixed in the latest qemu git.)


- Mail original - 

De: Stefan Priebe s.pri...@profihost.ag 
À: Alexandre Derumier aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Envoyé: Jeudi 27 Décembre 2012 08:52:42 
Objet: Re: [pve-devel] fix setting migration parameters V3 

Hi, 

could you please resend your patch? i can't find it. I wanted to look 
what the extended new values show during migration. 

Thanks! 

Stefan Priebe 


Am 27.12.2012 06:45, schrieb Alexandre Derumier: 
 this is a V3 rework of stefan patch. 
 
 main change: remove default value of migrate_downtime, so will use qemu value 
 of 30ms. 
 
 tested with youtube video HD, downtime is around 500 ms, else if default 
 target is 30ms. 
 
 
 
 ___ 
 pve-devel mailing list 
 pve-devel@pve.proxmox.com 
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] test: increase migrate_set_downtime only if expected_downtime have more than 30 iterations 0

2012-12-27 Thread Alexandre Derumier
This is a test attempt (apply on top of the 3 other patches).

The idea is to use the default 30ms qemu downtime value for migration.

If the expected_downtime stays > 0 for more than 30 iterations (this could be polished; 
maybe some average stats would work better),
then it looks like a never-ending migration, so we set migrate_set_downtime 
to the 1s default value. (Maybe we can use the expected_downtime average as 
the target value?)

So we would get the lowest downtime if the memory workload can handle it, 
and for VMs with a big memory transfer, we raise the value.

This could also help to avoid the monitor hang (until we set 
migrate_set_downtime?).

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] increase migrate_set_downtime only if expected downtime is more than 30 iterations 0

2012-12-27 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuMigrate.pm |   24 ++--
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index dbbeb69..aeb6deb 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -341,16 +341,6 @@ sub phase2 {
 };
 $self-log('info', migrate_set_speed error: $@) if $@;
 
-my $migrate_downtime = $defaults-{migrate_downtime};
-$migrate_downtime = $conf-{migrate_downtime} if 
defined($conf-{migrate_downtime});
-if (defined($migrate_downtime)) {
-   $self-log('info', migrate_set_downtime: $migrate_downtime);
-   eval {
-   PVE::QemuServer::vm_mon_cmd_nocheck($vmid, migrate_set_downtime, 
value = int($migrate_downtime));
-   };
-   $self-log('info', migrate_set_downtime error: $@) if $@;
-}
-
 my $capabilities = {};
 $capabilities-{capability} =  xbzrle;
 $capabilities-{state} = JSON::false;
@@ -375,6 +365,8 @@ sub phase2 {
 my $usleep = 200;
 my $i = 0;
 my $err_count = 0;
+my $expecteddowntimecounter = 0;
+
 while (1) {
$i++;
my $avglstat = $lstat/$i if $lstat;
@@ -423,6 +415,7 @@ sub phase2 {
my $xbzrlecachemiss = $stat-{xbzrle-cache}-{cache-miss} 
|| 0;
my $xbzrleoverflow = $stat-{xbzrle-cache}-{overflow} || 0;
my $expected_downtime = $stat-{expected-downtime} || 0;
+   $expecteddowntimecounter++ if $expected_downtime  0;
 
#reduce sleep if remainig memory if lower than the everage 
transfert 
$usleep = 30 if $avglstat  $rem  $avglstat;
@@ -431,6 +424,17 @@ sub phase2 {
   remaining ${rem}), total ${total}, expected 
downtime ${expected_downtime}));
 
#$self-log('info', migration xbzrle cachesize: 
${xbzrlecachesize} transferred ${xbzrlebytes} pages ${xbzrlepages} cachemiss 
${xbzrlecachemiss} overflow ${xbzrleoverflow});
+
+   my $migrate_downtime = $defaults-{migrate_downtime};
+   $migrate_downtime = $conf-{migrate_downtime} if 
defined($conf-{migrate_downtime});
+   if (defined($migrate_downtime)  $expecteddowntimecounter == 
30) {
+   $self-log('info', migrate_set_downtime: 
$migrate_downtime);
+   eval {
+   PVE::QemuServer::vm_mon_cmd_nocheck($vmid, 
migrate_set_downtime, value = int($migrate_downtime));
+   };
+   $self-log('info', migrate_set_downtime error: $@) if $@;
+}
+
}
 
$lstat = $stat-{ram}-{transferred};
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 3/3] add downtime expected_downtime query-migrate info

2012-12-27 Thread Alexandre DERUMIER

Sure you mean just the output in the web GUI? 
yes

- Mail original - 

De: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
À: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Envoyé: Jeudi 27 Décembre 2012 11:11:56 
Objet: Re: [pve-devel] [PATCH 3/3] add downtime  expected_downtime 
query-migrate info 

Sure you mean just the output in the web GUI? 

Stefan 

Am 27.12.2012 um 10:29 schrieb Alexandre DERUMIER aderum...@odiso.com: 

 Can you send a migration log ? 
 
 - Mail original - 
 
 De: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
 À: Alexandre DERUMIER aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com 
 Envoyé: Jeudi 27 Décembre 2012 10:26:45 
 Objet: Re: [pve-devel] [PATCH 3/3] add downtime  expected_downtime 
 query-migrate info 
 
 Ok was just an idea. To me also 0.3 does not work but 0.03 works ;-( also 
 limit bandwidth to 500mb does not help. 
 
 Stefan 
 
 Am 27.12.2012 um 10:09 schrieb Alexandre DERUMIER aderum...@odiso.com: 
 
 for me expected downtime is 0 until the end of the migration. 
 
 here a sample log, using default 30ms downtime. 
 
 Dec 27 07:24:06 starting migration of VM 9 to node 'kvmtest2' 
 (10.3.94.47) 
 Dec 27 07:24:06 copying disk images 
 Dec 27 07:24:06 starting VM 9 on remote node 'kvmtest2' 
 Dec 27 07:24:08 starting migration tunnel 
 Dec 27 07:24:09 starting online/live migration on port 6 
 Dec 27 07:24:09 migrate_set_speed: 8589934592 
 Dec 27 07:24:11 migration status: active (transferred 66518837, remaining 
 8314994688), total 8397455360, expected downtime 0) 
 Dec 27 07:24:13 migration status: active (transferred 121753397, remaining 
 8259760128), total 8397455360, expected downtime 0) 
 Dec 27 07:24:15 migration status: active (transferred 171867087, remaining 
 7475191808), total 8397455360, expected downtime 0) 
 Dec 27 07:24:17 migration status: active (transferred 178976948, remaining 
 4921823232), total 8397455360, expected downtime 0) 
 Dec 27 07:24:19 migration status: active (transferred 227210472, remaining 
 4726611968), total 8397455360, expected downtime 0) 
 Dec 27 07:24:21 migration status: active (transferred 282889143, remaining 
 4361879552), total 8397455360, expected downtime 0) 
 Dec 27 07:24:23 migration status: active (transferred 345327372, remaining 
 4270788608), total 8397455360, expected downtime 0) 
 Dec 27 07:24:25 migration status: active (transferred 407383430, remaining 
 4185169920), total 8397455360, expected downtime 0) 
 Dec 27 07:24:27 migration status: active (transferred 469084514, remaining 
 3742027776), total 8397455360, expected downtime 0) 
 Dec 27 07:24:29 migration status: active (transferred 469687094, remaining 
 1273860096), total 8397455360, expected downtime 0) 
 Dec 27 07:24:31 migration status: active (transferred 501247097, remaining 
 79024128), total 8397455360, expected downtime 3893) 
 Dec 27 07:24:33 migration status: active (transferred 532052759, remaining 
 103800832), total 8397455360, expected downtime 139) 
 Dec 27 07:24:35 migration status: active (transferred 593541297, remaining 
 34357248), total 8397455360, expected downtime 85) 
 Dec 27 07:24:35 migration status: active (transferred 603842750, remaining 
 37982208), total 8397455360, expected downtime 44) 
 Dec 27 07:24:36 migration status: active (transferred 612899069, remaining 
 28667904), total 8397455360, expected downtime 44) 
 Dec 27 07:24:36 migration status: active (transferred 623036734, remaining 
 30404608), total 8397455360, expected downtime 43) 
 Dec 27 07:24:36 migration status: active (transferred 632519102, remaining 
 28622848), total 8397455360, expected downtime 38) 
 Dec 27 07:24:36 migration status: active (transferred 638048739, remaining 
 26222592), total 8397455360, expected downtime 33) 
 Dec 27 07:24:37 migration speed: 285.71 MB/s - downtime 648 ms 
 Dec 27 07:24:37 migration status: completed 
 Dec 27 07:24:42 migration finished successfuly (duration 00:00:36) 
 TASK OK 
 - Mail original - 
 
 De: Stefan Priebe s.pri...@profihost.ag 
 À: Alexandre Derumier aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com 
 Envoyé: Jeudi 27 Décembre 2012 10:03:39 
 Objet: Re: [pve-devel] [PATCH 3/3] add downtime  expected_downtime 
 query-migrate info 
 
 Hi, 
 
 to me the whole VM stalls when the new expected downtime is 0. (64bit VM 
 4GB Mem 1GB in use VM totally IDLE). 
 
 That's why a low migration_downtime value help for me as qemu does no 
 longer believe that the expected downtime is 0. 
 
 Greets, 
 Stefan 
 
 Am 27.12.2012 09:18, schrieb Alexandre Derumier: 
 
 Signed-off-by: Alexandre Derumier aderum...@odiso.com 
 --- 
 PVE/QemuMigrate.pm | 6 -- 
 1 file changed, 4 insertions(+), 2 deletions(-) 
 
 diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm 
 index 282cbc5..38f1d05 100644 
 --- a/PVE/QemuMigrate.pm 
 +++ b/PVE/QemuMigrate.pm 
 @@ -402,7 +402,8 @@ sub phase2 { 
 my $delay = time() - $start; 
 if ($delay  0) { 
 my $mbps

Re: [pve-devel] [PATCH 3/3] add downtime expected_downtime query-migrate info

2012-12-27 Thread Alexandre DERUMIER
The same again with 1GB Memory used (cached mem): 
Dec 27 12:57:11 starting migration of VM 105 to node 'cloud1-1202' 
(10.255.0.20) 
Dec 27 12:57:11 copying disk images 
Dec 27 12:57:11 starting VM 105 on remote node 'cloud1-1202' 
Dec 27 12:57:15 starting migration tunnel 
Dec 27 12:57:15 starting online/live migration on port 6 
Dec 27 12:57:15 migrate_set_speed: 8589934592 
Dec 27 12:57:15 migrate_set_downtime: 1 
Dec 27 12:58:45 migration speed: 22.76 MB/s - downtime 90004 ms 
Dec 27 12:58:45 migration status: completed 
Dec 27 12:58:49 migration finished successfuly (duration 00:01:38) 
TASK OK 

Damn, 90 seconds of downtime, that's crazy.

It's like it's trying to finish the migration directly, sending the whole 1GB of 
memory in one pass.

I think the monitor is blocked, because it's also blocked for me at the end of 
the migration (but only for some ms, not 90 seconds).

Sounds like a bug somewhere in qemu. 




- Mail original - 

De: Stefan Priebe s.pri...@profihost.ag 
À: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com, Dietmar Maurer diet...@proxmox.com 
Envoyé: Jeudi 27 Décembre 2012 13:00:47 
Objet: Re: [pve-devel] [PATCH 3/3] add downtime  expected_downtime 
query-migrate info 

Hi, 
Am 27.12.2012 11:21, schrieb Alexandre DERUMIER: 
 
 Sure you mean just the output in the web GUI? 
 yes 

Output with latest qemu-server and latest pve-qemu-kvm. 

VM with 4GB Mem and 100MB used (totally IDLE): 
Dec 27 12:55:46 starting migration of VM 105 to node 'cloud1-1203' 
(10.255.0.22) 
Dec 27 12:55:46 copying disk images 
Dec 27 12:55:46 starting VM 105 on remote node 'cloud1-1203' 
Dec 27 12:55:48 starting migration tunnel 
Dec 27 12:55:49 starting online/live migration on port 6 
Dec 27 12:55:49 migrate_set_speed: 8589934592 
Dec 27 12:55:49 migrate_set_downtime: 1 
Dec 27 12:55:51 migration speed: 1024.00 MB/s - downtime 1534 ms 
Dec 27 12:55:51 migration status: completed 
Dec 27 12:55:54 migration finished successfuly (duration 00:00:09) 
TASK OK 

The same again with 1GB Memory used (cached mem): 
Dec 27 12:57:11 starting migration of VM 105 to node 'cloud1-1202' 
(10.255.0.20) 
Dec 27 12:57:11 copying disk images 
Dec 27 12:57:11 starting VM 105 on remote node 'cloud1-1202' 
Dec 27 12:57:15 starting migration tunnel 
Dec 27 12:57:15 starting online/live migration on port 6 
Dec 27 12:57:15 migrate_set_speed: 8589934592 
Dec 27 12:57:15 migrate_set_downtime: 1 
Dec 27 12:58:45 migration speed: 22.76 MB/s - downtime 90004 ms 
Dec 27 12:58:45 migration status: completed 
Dec 27 12:58:49 migration finished successfuly (duration 00:01:38) 
TASK OK 

VM was halted between and no output of stats where done as the monitor 
was blocked. 

The same again with 1GB Memory and migrate_downtime set to 0.03 (cached 
mem): 
Dec 27 13:00:19 starting migration of VM 105 to node 'cloud1-1203' 
(10.255.0.22) 
Dec 27 13:00:19 copying disk images 
Dec 27 13:00:19 starting VM 105 on remote node 'cloud1-1203' 
Dec 27 13:00:22 starting migration tunnel 
Dec 27 13:00:23 starting online/live migration on port 6 
Dec 27 13:00:23 migrate_set_speed: 8589934592 
Dec 27 13:00:23 migrate_set_downtime: 0.03 
Dec 27 13:00:25 migration status: active (transferred 404647386, 
remaining 680390656), total 2156265472) , expected downtime 190 
Dec 27 13:00:27 migration status: active (transferred 880582320, 
remaining 203579392), total 2156265472) , expected downtime 53 
Dec 27 13:00:29 migration speed: 341.33 MB/s - downtime 490 ms 
Dec 27 13:00:29 migration status: completed 
Dec 27 13:00:32 migration finished successfuly (duration 00:00:13) 
TASK OK 

Stefan 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 3/3] add downtime expected_downtime query-migrate info

2012-12-27 Thread Alexandre DERUMIER
The problem is the LATEST git qemu code they've changed a LOT of include 
file locations so nearly NO PVE patch applies... 

I'm currently building a pve-qemu-kvm based on qemu 1.4, with only the basic patches:

fr-ca-keymap-corrections.diff
fairsched.diff
pve-auth.patch
vencrypt-auth-plain.patch
enable-kvm-by-default.patch

That should be enough to connect with VNC and test migration.

I'll keep you posted.



- Mail original - 

De: Stefan Priebe s.pri...@profihost.ag 
À: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com, Dietmar Maurer diet...@proxmox.com 
Envoyé: Jeudi 27 Décembre 2012 13:40:24 
Objet: Re: [pve-devel] [PATCH 3/3] add downtime  expected_downtime 
query-migrate info 

Hi, 
Am 27.12.2012 13:39, schrieb Alexandre DERUMIER: 
 not right now - but i tested this yesterday and didn't saw a difference 
 so i moved again to 3.6.11. 
 It'll do test with a 3.6 kernel too, to see if I have a difference 

Thanks! Will retest with pve kernel too. 

 Do you have tried with last qemu 1.4 git ? 
 Because I'm looking into the code, and the change in migration code is really 
 huge. 
 So we could known if it's a qemu migration code problem or not... 

The problem is the LATEST git qemu code they've changed a LOT of include 
file locations so nearly NO PVE patch applies... 

Stefan 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 5/8] nexenta: has_feature

2012-12-27 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/NexentaPlugin.pm |   14 ++
 1 file changed, 14 insertions(+)

diff --git a/PVE/Storage/NexentaPlugin.pm b/PVE/Storage/NexentaPlugin.pm
index 386656f..9622548 100644
--- a/PVE/Storage/NexentaPlugin.pm
+++ b/PVE/Storage/NexentaPlugin.pm
@@ -380,4 +380,18 @@ sub volume_snapshot_delete {
 nexenta_request($scfg, 'destroy', 'snapshot', 
$scfg-{pool}/$volname\@$snap, '');
 }
 
+sub volume_has_feature {
+my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
+
+my $features = {
+snapshot = { current = 1, snap = 1},
+clone = { snap = 1},
+};
+
+my $snap = $snapname ? 'snap' : 'current';
+return 1 if $features-{$feature}-{$snap};
+
+return undef;
+}
+
 1;
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 8/8] iscsidirect : has_feature

2012-12-27 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/ISCSIDirectPlugin.pm |6 ++
 1 file changed, 6 insertions(+)

diff --git a/PVE/Storage/ISCSIDirectPlugin.pm b/PVE/Storage/ISCSIDirectPlugin.pm
index e2490e8..b648fd5 100644
--- a/PVE/Storage/ISCSIDirectPlugin.pm
+++ b/PVE/Storage/ISCSIDirectPlugin.pm
@@ -208,4 +208,10 @@ sub volume_snapshot_delete {
 die volume snapshot delete is not possible on iscsi device;
 }
 
+sub volume_has_feature {
+my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
+
+return undef;
+}
+
 1;
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 7/8] iscsi : has_feature

2012-12-27 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/ISCSIPlugin.pm |6 ++
 1 file changed, 6 insertions(+)

diff --git a/PVE/Storage/ISCSIPlugin.pm b/PVE/Storage/ISCSIPlugin.pm
index 173ca1d..ac8384b 100644
--- a/PVE/Storage/ISCSIPlugin.pm
+++ b/PVE/Storage/ISCSIPlugin.pm
@@ -383,5 +383,11 @@ sub volume_resize {
 die volume resize is not possible on iscsi device;
 }
 
+sub volume_has_feature {
+my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
+
+return undef;
+}
+
 
 1;
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] enable snapshot button only if vm has snapshot feature

2012-12-27 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 www/manager/qemu/SnapshotTree.js |   17 +
 1 file changed, 17 insertions(+)

diff --git a/www/manager/qemu/SnapshotTree.js b/www/manager/qemu/SnapshotTree.js
index 0fd1b82..f98c849 100644
--- a/www/manager/qemu/SnapshotTree.js
+++ b/www/manager/qemu/SnapshotTree.js
@@ -71,6 +71,20 @@ Ext.define('PVE.qemu.SnapshotTree', {
me.load_task.delay(me.load_delay);
}
});
+
+PVE.Utils.API2Request({
+   url: '/nodes/' + me.nodename + '/qemu/' + me.vmid + '/feature',
+   params: { feature: 'snapshot' },
+method: 'GET',
+success: function(response, options) {
+var res = response.result.data;
+   if (res === 1) {
+  Ext.getCmp('snapshotBtn').enable();
+   }
+}
+});
+
+
 },
 
 initComponent: function() {
@@ -94,6 +108,7 @@ Ext.define('PVE.qemu.SnapshotTree', {
return record  record.data  record.data.name 
record.data.name !== 'current';
};
+
var valid_snapshot_rollback = function(record) {
return record  record.data  record.data.name 
record.data.name !== 'current'  !record.data.snapstate;
@@ -193,7 +208,9 @@ Ext.define('PVE.qemu.SnapshotTree', {
});
 
var snapshotBtn = Ext.create('Ext.Button', { 
+   id: 'snapshotBtn',
text: gettext('Take Snapshot'),
+   disabled: true,
handler: function() {
var win = Ext.create('PVE.window.Snapshot', { 
nodename: me.nodename,
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server : add has_feature sub to detect storage features

2012-12-27 Thread Alexandre DERUMIER
Thanks ! 


- Mail original - 

De: Dietmar Maurer diet...@proxmox.com 
À: Alexandre Derumier aderum...@odiso.com, pve-devel@pve.proxmox.com 
Envoyé: Vendredi 28 Décembre 2012 07:56:22 
Objet: RE: [pve-devel] qemu-server : add has_feature sub to detect storage 
features 

applied. 

 -Original Message- 
 From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel- 
 boun...@pve.proxmox.com] On Behalf Of Alexandre Derumier 
 Sent: Donnerstag, 27. Dezember 2012 16:07 
 To: pve-devel@pve.proxmox.com 
 Subject: [pve-devel] qemu-server : add has_feature sub to detect storage 
 features 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 3/3] add downtime expected_downtime query-migrate info

2012-12-27 Thread Alexandre DERUMIER
To my last mails nobody answered... 

I have reply to the mail, adding more infos, because I think it was not enough 
detailled.
I also put paolo bonzini and juan quintala (the migration code author) in copy.


- Mail original - 

De: Stefan Priebe s.pri...@profihost.ag 
À: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com, Dietmar Maurer diet...@proxmox.com 
Envoyé: Jeudi 27 Décembre 2012 19:38:49 
Objet: Re: [pve-devel] [PATCH 3/3] add downtime  expected_downtime 
query-migrate info 

Hi, 

Am 27.12.2012 16:21, schrieb Alexandre DERUMIER: 
 But if it's work fine for you with 1s migrate_downtime, we need to find where 
 the problem is in the current qemu 1.3 code ... (maybe qemu mailing can help) 
To my last mails nobody answered... 

Stefan 


 - Mail original - 
 
 De: Stefan Priebe s.pri...@profihost.ag 
 À: Alexandre DERUMIER aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com, Dietmar Maurer diet...@proxmox.com 
 Envoyé: Jeudi 27 Décembre 2012 15:19:41 
 Objet: Re: [pve-devel] [PATCH 3/3] add downtime  expected_downtime 
 query-migrate info 
 
 Strangely the status of the VM is always paused after migration. 
 
 Stefan 
 Am 27.12.2012 15:18, schrieb Stefan Priebe: 
 Hi, 
 
 have now done the same. 
 
 With current git qemu migration is really fast using 1,6GB memory: 
 Dec 27 15:17:45 starting migration of VM 105 to node 'cloud1-1202' 
 (10.255.0.20) 
 Dec 27 15:17:45 copying disk images 
 Dec 27 15:17:45 starting VM 105 on remote node 'cloud1-1202' 
 Dec 27 15:17:48 starting online/live migration on tcp:10.255.0.20:6 
 Dec 27 15:17:48 migrate_set_speed: 8589934592 
 Dec 27 15:17:48 migrate_set_downtime: 0.05 
 Dec 27 15:17:52 migration speed: 512.00 MB/s - downtime 174 ms 
 Dec 27 15:17:52 migration status: completed 
 Dec 27 15:17:53 migration finished successfuly (duration 00:00:09) 
 TASK OK 
 
 It's so fast that i can't check if i see that's while migrating. 
 
 Greets, 
 Stefan 
 
 Am 27.12.2012 14:26, schrieb Alexandre DERUMIER: 
 The problem is the LATEST git qemu code they've changed a LOT of 
 include 
 file locations so nearly NO PVE patch applies... 
 
 I'm currently building a pve-qemu-kvm on qemu 1.4, with basics patches 
 
 fr-ca-keymap-corrections.diff 
 fairsched.diff 
 pve-auth.patch 
 vencrypt-auth-plain.patch 
 enable-kvm-by-default.patch 
 
 should be enough to connect with vnc and test migration 
 
 I'll keep in touch 
 
 
 
 - Mail original - 
 
 De: Stefan Priebe s.pri...@profihost.ag 
 À: Alexandre DERUMIER aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com, Dietmar Maurer diet...@proxmox.com 
 Envoyé: Jeudi 27 Décembre 2012 13:40:24 
 Objet: Re: [pve-devel] [PATCH 3/3] add downtime  expected_downtime 
 query-migrate info 
 
 Hi, 
 Am 27.12.2012 13:39, schrieb Alexandre DERUMIER: 
 not right now - but i tested this yesterday and didn't saw a 
 difference 
 so i moved again to 3.6.11. 
 It'll do test with a 3.6 kernel too, to see if I have a difference 
 
 Thanks! Will retest with pve kernel too. 
 
 Do you have tried with last qemu 1.4 git ? 
 Because I'm looking into the code, and the change in migration code 
 is really huge. 
 So we could known if it's a qemu migration code problem or not... 
 
 The problem is the LATEST git qemu code they've changed a LOT of include 
 file locations so nearly NO PVE patch applies... 
 
 Stefan 
 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 3/3] add downtime expected_downtime query-migrate info

2012-12-28 Thread Alexandre DERUMIER
I'm looking into qemu src code,
in arch_init.c - ram_save_iterate()

it should return false, to retry transfert iteration of remaining memory  , 
until the last step:ram_save_complete()

Seem that for Stefan, it's return 0 directly with migrate_set_downtime = 1.

I think interesting part is here:

Maybe can we add somes logs ? (BTW, is it possible to logs qemu STDOUT 
somewhere in a file ?)



bwidth = qemu_get_clock_ns(rt_clock) - bwidth;
bwidth = (bytes_transferred - bytes_transferred_last) / bwidth;

/* if we haven't transferred anything this round, force
 * expected_downtime to a very high value, but without
 * crashing */
if (bwidth == 0) {
bwidth = 0.01;
}

qemu_put_be64(f, RAM_SAVE_FLAG_EOS);

expected_downtime = ram_save_remaining() * TARGET_PAGE_SIZE / bwidth;
DPRINTF(ram_save_live: expected(% PRIu64 ) = max( PRIu64 )?\n,
expected_downtime, migrate_max_downtime());

if (expected_downtime = migrate_max_downtime()) { 
migration_bitmap_sync();
expected_downtime = ram_save_remaining() * TARGET_PAGE_SIZE / bwidth;
s-expected_downtime = expected_downtime / 100; /* ns - ms */

return expected_downtime = migrate_max_downtime();
}
return 0;






- Mail original - 

De: Dietmar Maurer diet...@proxmox.com 
À: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
Cc: Alexandre DERUMIER aderum...@odiso.com, pve-devel@pve.proxmox.com 
Envoyé: Vendredi 28 Décembre 2012 07:39:50 
Objet: RE: [pve-devel] [PATCH 3/3] add downtime  expected_downtime 
query-migrate info 

  Am 27.12.2012 16:21, schrieb Alexandre DERUMIER: 
  But if it's work fine for you with 1s migrate_downtime, we need to 
  find where the problem is in the current qemu 1.3 code ... (maybe 
  qemu mailing can help) 
  To my last mails nobody answered... 
  
  What information do you miss (what last mails?)? 
 Last mails to qemu mailing list. It was regarding my migration problems. 

Ah, yes. I will do further tests today to reproduce the bug here. 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] pve-storage : dynamic plugin loading ?

2012-12-28 Thread Alexandre DERUMIER
Hi Dietmar,

Do you think it's possible to add dynamic plugin loading in pve-storage ?

instead:
use PVE::Storage::Plugin;
use PVE::Storage::DirPlugin;
use PVE::Storage::LVMPlugin;
use PVE::Storage::NFSPlugin;
use PVE::Storage::ISCSIPlugin;
use PVE::Storage::RBDPlugin;
use PVE::Storage::SheepdogPlugin;
use PVE::Storage::ISCSIDirectPlugin;
use PVE::Storage::NexentaPlugin;
use PVE::Storage::NetappPNFSPlugin;

# load and initialize all plugins
PVE::Storage::DirPlugin-register();
PVE::Storage::LVMPlugin-register();
PVE::Storage::NFSPlugin-register();
PVE::Storage::ISCSIPlugin-register();
PVE::Storage::RBDPlugin-register();
PVE::Storage::SheepdogPlugin-register();
PVE::Storage::ISCSIDirectPlugin-register();
PVE::Storage::NexentaPlugin-register();
PVE::Storage::NetappPNFSPlugin-register();


Having something like (I don't know how to do this in perl ;)

use PVE::Storage::*
PVE::Storage::*-register();


I don't know if this can be a security or performance problem, but it could 
easier to develop some third party plugin (like my netapp plugin),
If you don't have time, or hardware to test it by example.

What do you think about it ?


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-storage : dynamic plugin loading ?

2012-12-28 Thread Alexandre DERUMIER
We work full time on pve here. But we also have to do support, maintain the 
repositories, 
do testing, prepare releases, fix bugs, ... 

So I can't promise a fixed date, sorry. 

Yes, Sure I understand, no problem. I can keep working on my seperate branch. 
(Just take time to rebase sometimes)


- Mail original - 

De: Dietmar Maurer diet...@proxmox.com 
À: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Envoyé: Vendredi 28 Décembre 2012 09:44:23 
Objet: RE: [pve-devel] pve-storage : dynamic plugin loading ? 

 No, I do not want that. Instead, I want to include those plugins. 
 
 Ok,Thanks. 
 
 For my own need, I would like to have netapp plugin and template-cloning 
 for the end of January, do you think it'll be ok for you ? 

We work full time on pve here. But we also have to do support, maintain the 
repositories, 
do testing, prepare releases, fix bugs, ... 

So I can't promise a fixed date, sorry. 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-28 Thread Alexandre DERUMIER
That looks strange - AES-ni should be much faster? 

Are you sure that squeeze support aes-ni with ssh ? because libssl is quite old




- Mail original - 

De: Dietmar Maurer diet...@proxmox.com 
À: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
Cc: pve-devel@pve.proxmox.com 
Envoyé: Vendredi 28 Décembre 2012 11:29:23 
Objet: Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration 
(setting in datacenter.cfg) 

  Anyways, how much faster is it compared to AESl hardware chipher? 
 
 42 Mb/s ssh 
 76 Mb/s ssh aes-ni 

That looks strange - AES-ni should be much faster? 

Just did a quick search, and people claim to get  500MB/s 

for example: 

http://datacenteroverlords.com/2011/09/07/aes-ni-pimp-your-aes/ 

So whats wrong? 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-28 Thread Alexandre DERUMIER
Are you sure that squeeze support aes-ni with ssh ? because libssl is quite 
old 
See here:
Add support for AES-NI (Package: libssl1.0.0)
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=644743

- Mail original - 

De: Alexandre DERUMIER aderum...@odiso.com 
À: Dietmar Maurer diet...@proxmox.com 
Cc: pve-devel@pve.proxmox.com 
Envoyé: Vendredi 28 Décembre 2012 11:33:35 
Objet: Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration 
(setting in datacenter.cfg) 

That looks strange - AES-ni should be much faster? 

Are you sure that squeeze support aes-ni with ssh ? because libssl is quite old 




- Mail original - 

De: Dietmar Maurer diet...@proxmox.com 
À: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
Cc: pve-devel@pve.proxmox.com 
Envoyé: Vendredi 28 Décembre 2012 11:29:23 
Objet: Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration 
(setting in datacenter.cfg) 

  Anyways, how much faster is it compared to AESl hardware chipher? 
 
 42 Mb/s ssh 
 76 Mb/s ssh aes-ni 

That looks strange - AES-ni should be much faster? 

Just did a quick search, and people claim to get  500MB/s 

for example: 

http://datacenteroverlords.com/2011/09/07/aes-ni-pimp-your-aes/ 

So whats wrong? 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2012-12-28 Thread Alexandre DERUMIER
if we can get 700MB/s there must be a bug somewhere? Maybe you 
can ask on the ssh mailing lists? 
Do you have the aes ni intel module loaded ? (#modprobe aesni-intel )


- Mail original - 

De: Dietmar Maurer diet...@proxmox.com 
À: Stefan Priebe s.pri...@profihost.ag 
Cc: pve-devel@pve.proxmox.com 
Envoyé: Vendredi 28 Décembre 2012 13:09:16 
Objet: Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration 
(setting in datacenter.cfg) 

 This is just memory openssl speed. There i'm getting 600-700MB/s: 
 The 'numbers' are in 1000s of bytes per second processed. 
 type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 
 bytes 
 aes-128-cbc 648664.33k 688924.90k 695855.45k 700784.64k 
 704027.06k 
 
 But scp / ssh just boosts from 42MB/s to 76Mb/s. 

if we can get 700MB/s there must be a bug somewhere? Maybe you 
can ask on the ssh mailing lists? 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 3/3] add downtime expected_downtime query-migrate info

2012-12-29 Thread Alexandre DERUMIER
oh, ok.

I'll tests the patchs mondays.
Does migration works fine for you with them ? 


- Mail original - 

De: Stefan Priebe s.pri...@profihost.ag 
À: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com, Dietmar Maurer diet...@proxmox.com 
Envoyé: Samedi 29 Décembre 2012 14:55:33 
Objet: Re: [pve-devel] [PATCH 3/3] add downtime  expected_downtime 
query-migrate info 

Hi Alexandre, 
Am 29.12.2012 14:51, schrieb Alexandre DERUMIER: 
 Great to known that you finally found it !. 
Not found ;-) just know that the rework of the migration code fixes it. 
May be it might be a deadlock - they've changed the locking and 
threading handling. 

(Do you have respond to Paolo Bonzini, because the last response I see was 
that it was not working). 
No - i just tried to apply parts of the patchset on qemu 1.3 instead of 
using whole 1.4 git trunk. 

 Alexandre might you try my rebased patches on top of pve-qemu 1.3? 
 So, Do you want to apply the first patch only (to fix you bug), or apply the 
 whole patches set ? (equal to current git code ?). 
All patches i had attached ;-) 

 Not sure that Dietmar want to apply the big patches set ;) 
sure but just wanted to know if they're stable for you too. So if 
they're i will care about them in my own branch. So I've at least a 
working migration. 

 I'll retest the whole patches set, because I was having migration problem (vm 
 paused with error) with last qemu git. 
That's why i've ported not all patches and on top of qemu 1.3 instead of 
qemu 1.4. 

Stefan 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 3/3] add downtime expected_downtime query-migrate info

2012-12-29 Thread Alexandre DERUMIER
I have just done some little tests with qemu 1.3 + migration patches set,

now it's working fine, no more crash ! (playing video hd).

setting migrate_set_downtime to 1sec, give me:

for no memory activity : 36ms downtime

for high memory activity : 650ms downtime


I'll do more extensive tests monday.

Thanks for the patches !



- Mail original - 

De: Stefan Priebe s.pri...@profihost.ag 
À: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com, Dietmar Maurer diet...@proxmox.com 
Envoyé: Samedi 29 Décembre 2012 15:07:30 
Objet: Re: [pve-devel] [PATCH 3/3] add downtime  expected_downtime 
query-migrate info 

Hi, 

Am 29.12.2012 15:02, schrieb Alexandre DERUMIER: 
 oh, ok. 
 
 I'll tests the patchs mondays. 
Thanks! 

 Does migration works fine for you with them ? 
Yes - it works fine / perfectly. But i've no VM with a desktop to play 
HD videos ;-) 

Stefan 

 - Mail original - 
 
 De: Stefan Priebe s.pri...@profihost.ag 
 À: Alexandre DERUMIER aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com, Dietmar Maurer diet...@proxmox.com 
 Envoyé: Samedi 29 Décembre 2012 14:55:33 
 Objet: Re: [pve-devel] [PATCH 3/3] add downtime  expected_downtime 
 query-migrate info 
 
 Hi Alexandre, 
 Am 29.12.2012 14:51, schrieb Alexandre DERUMIER: 
 Great to known that you finally found it !. 
 Not found ;-) just know that the rework of the migration code fixes it. 
 May be it might be a deadlock - they've changed the locking and 
 threading handling. 
 
 (Do you have respond to Paolo Bonzini, because the last response I see was 
 that it was not working). 
 No - i just tried to apply parts of the patchset on qemu 1.3 instead of 
 using whole 1.4 git trunk. 
 
 Alexandre might you try my rebased patches on top of pve-qemu 1.3? 
 So, Do you want to apply the first patch only (to fix you bug), or apply the 
 whole patches set ? (equal to current git code ?). 
 All patches i had attached ;-) 
 
 Not sure that Dietmar want to apply the big patches set ;) 
 sure but just wanted to know if they're stable for you too. So if 
 they're i will care about them in my own branch. So I've at least a 
 working migration. 
 
 I'll retest the whole patches set, because I was having migration problem 
 (vm paused with error) with last qemu git. 
 That's why i've ported not all patches and on top of qemu 1.3 instead of 
 qemu 1.4. 
 
 Stefan 
 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] increase migrate_set_downtime only if expected downtime is more than 30 iterations 0

2012-12-30 Thread Alexandre DERUMIER
What about every 15seconds if no progress has happened? How would 
you define progress? 

Maybe we can check if remaining memory goes up and down.
(So it could also works with qemu 1.4 as we dont have expected_downtime value)

(if remaining memory  last remaing memory){
 counter++
}

if(counter == 15){

 set_migrate_downtime = migratedowntime + 0.X sec
 counter = 0;
}










- Mail original - 

De: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
À: Dietmar Maurer diet...@proxmox.com 
Cc: Alexandre Derumier aderum...@odiso.com, pve-devel@pve.proxmox.com 
Envoyé: Dimanche 30 Décembre 2012 09:33:09 
Objet: Re: [pve-devel] [PATCH] increase migrate_set_downtime only if expected 
downtime is more than 30 iterations  0 

What about every 15seconds if no progress has happened? How would 
you define progress? 

I would then start at 0.15s downtime as 
default or at custom conf value if defined. If no progress happens I would 
double the value. 

Stefan 

Am 30.12.2012 um 06:55 schrieb Dietmar Maurer diet...@proxmox.com: 

 i really like that idea instead of having a fixed downtime of 1s. What is 
 about 
 starting at 0.5s and then add 0.5s every 15 runs? 
 
 Yes, we can do that, but may Every 15 runs if we make no progress? 
 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] increase migrate_set_downtime only if expected downtime is more than 30 iterations 0

2012-12-30 Thread Alexandre DERUMIER
I'm testing your patch.

remaning memory can also stay at 0 at the end (with last qemu git migration 
patch), so we need to also check that.


also in your patch, what is

+   $migrate_downtimecounter = time();

?

- Mail original - 

De: Stefan Priebe s.pri...@profihost.ag 
À: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com, Dietmar Maurer diet...@proxmox.com 
Envoyé: Dimanche 30 Décembre 2012 13:25:40 
Objet: Re: [pve-devel] [PATCH] increase migrate_set_downtime only if expected 
downtime is more than 30 iterations  0 

Hi, 

attached a patch. 

Greets, 
Stefan 

Am 30.12.2012 09:39, schrieb Alexandre DERUMIER: 
 What about every 15seconds if no progress has happened? How would 
 you define progress? 
 
 Maybe we can check if remaining memory goes up and down. 
 (So it could also works with qemu 1.4 as we dont have expected_downtime 
 value) 
 
 (if remaining memory  last remaing memory){ 
 counter++ 
 } 
 
 if(counter == 15){ 
 
 set_migrate_downtime = migratedowntime + 0.X sec 
 counter = 0; 
 } 
 
 
 
 
 
 
 
 
 
 
 - Mail original - 
 
 De: Stefan Priebe - Profihost AG s.pri...@profihost.ag 
 À: Dietmar Maurer diet...@proxmox.com 
 Cc: Alexandre Derumier aderum...@odiso.com, pve-devel@pve.proxmox.com 
 Envoyé: Dimanche 30 Décembre 2012 09:33:09 
 Objet: Re: [pve-devel] [PATCH] increase migrate_set_downtime only if expected 
 downtime is more than 30 iterations  0 
 
 What about every 15seconds if no progress has happened? How would 
 you define progress? 
 
 I would then start at 0.15s downtime as 
 default or at custom conf value if defined. If no progress happens I would 
 double the value. 
 
 Stefan 
 
 Am 30.12.2012 um 06:55 schrieb Dietmar Maurer diet...@proxmox.com: 
 
 i really like that idea instead of having a fixed downtime of 1s. What is 
 about 
 starting at 0.5s and then add 0.5s every 15 runs? 
 
 Yes, we can do that, but may Every 15 runs if we make no progress? 
 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 2/2] add set migrate_downtime default value to 0.1 add number type

2012-12-30 Thread Alexandre Derumier
can be integer or float

ex:

1
1.0
0.3

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 10865ad..5c39728 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -382,10 +382,10 @@ EODESCR
 },
 migrate_downtime = {
optional = 1,
-   type = 'integer',
+   type = 'number',
description = Set maximum tolerated downtime (in seconds) for 
migrations.,
minimum = 0,
-   default = 1,
+   default = 0.1,
 },
 cdrom = {
optional = 1,
@@ -1441,6 +1441,9 @@ sub check_type {
 } elsif ($type eq 'integer') {
return int($1) if $value =~ m/^(\d+)$/;
die type check ('integer') failed - got '$value'\n;
+} elsif ($type eq 'number') {
+return $value if $value =~ m/^(\d+)(\.\d+)?$/;
+die type check ('number') failed - got '$value'\n;
 } elsif ($type eq 'string') {
if (my $fmt = $confdesc-{$key}-{format}) {
if ($fmt eq 'pve-qm-drive') {
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] fix Bug #293: CDROM size not reset when set to use no media

2012-12-31 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/API2/Qemu.pm |3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index ec0424a..db39c62 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -72,7 +72,8 @@ my $create_disks = sub {
my $volid = $disk-{file};
 
if (!$volid || $volid eq 'none' || $volid eq 'cdrom') {
-   $res-{$ds} = $settings-{$ds};
+   delete $disk-{size};
+   $res-{$ds} = PVE::QemuServer::print_drive($vmid, $disk);
} elsif ($volid =~ m/^(([^:\s]+):)?(\d+(\.\d+)?)$/) {
my ($storeid, $size) = ($2 || $default_storage, $3);
die no storage ID specified (and no default storage)\n if 
!$storeid;
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] ceph 0.56 is coming. do we need to keep compatibility with 0.48 ?

2013-01-02 Thread Alexandre DERUMIER
Hi all (and happy new year ;),

Ceph 0.56 stable is coming soon (testing for now, but should go in stable 
branch in coming days).

Do you need to keep compatibility with 0.48 clusters with the storage plugin ? 
(as rbd is not officially supported)


I see 2 new things:

-image format V2. required for cloning. This require to pass format argument 
when we create the volume.

- rbd ls -r
  display all infos of rbd images (size,parent,...) with 1 api call. (currently 
we need to call rbd info for each volume)


So, Do we need to keep compatilibity, and introduce a new storage param like 
version or not ?

Regards,

Alexandre
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] ceph 0.56 is coming. do we need to keep compatibility with 0.48 ?

2013-01-02 Thread Alexandre DERUMIER
No, I guess we do not need to be compatible. 

Ok thanks,
I'll send a patch for rbd ls -r





- Mail original - 

De: Dietmar Maurer diet...@proxmox.com 
À: Alexandre DERUMIER aderum...@odiso.com, pve-devel@pve.proxmox.com 
Envoyé: Mercredi 2 Janvier 2013 14:49:46 
Objet: RE: [pve-devel] ceph 0.56 is coming. do we need to keep compatibility 
with 0.48 ? 

 So, Do we need to keep compatilibity 

No, I guess we do not need to be compatible. 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 2/2] create rbd volume with format v2

2013-01-03 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/RBDPlugin.pm |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 1a2e91c..948be0d 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -210,7 +210,7 @@ sub alloc_image {
 die unable to allocate an image name for VM $vmid in storage '$storeid'\n
if !$name;
 
-my $cmd = $rbd_cmd($scfg, $storeid, 'create', '--size', ($size/1024), 
$name);
+my $cmd = $rbd_cmd($scfg, $storeid, 'create', '--format' , 2, '--size', 
($size/1024), $name);
 run_command($cmd, errmsg = rbd create $name' error, errfunc = sub {});
 
 return $name;
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 1/2] rbd: use rbd ls -l

2013-01-03 Thread Alexandre Derumier
avoid to call rbd info for each volume

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/RBDPlugin.pm |9 -
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index bbfae7c..1a2e91c 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -44,20 +44,19 @@ my $rados_cmd = sub {
 sub rbd_ls {
 my ($scfg, $storeid) = @_;
 
-my $cmd = $rbd_cmd($scfg, $storeid, 'ls');
+my $cmd = $rbd_cmd($scfg, $storeid, 'ls', '-l');
 
 my $list = {};
 
 my $parser = sub {
my $line = shift;
 
-   if ($line =~ m/^(vm-(\d+)-\S+)$/) {
-   my ($image, $owner) = ($1, $2);
+   if ($line =~  
m/^(vm-(\d+)-disk-\d+)\s+(\d+)M\s((\S+)\/(vm-\d+-\S+@\S+))?/) {
+   my ($image, $owner, $size, $parent) = ($1, $2, $3, $6);
 
-   my ($size, $parent) = rbd_volume_info($scfg, $storeid, $image);
$list-{$scfg-{pool}}-{$image} = {
name = $image,
-   size = $size,
+   size = $size*1024*1024,
parent = $parent,
vmid = $owner
};
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] balloon: don't set balloon polling interval at start when livemigrate

2013-01-04 Thread Alexandre Derumier
We don't need to set balloon value and polling interval when a vm is coming 
from a livemigrate.
(Values are keep in guest memory)

So with autoballooning, this avoid to set the ballon size at ballon_min value 
when the vm is migrated

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |   20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a599219..a7ffb8a 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2986,16 +2986,16 @@ sub vm_start {
$capabilities-{state} = JSON::true;
eval { vm_mon_cmd_nocheck($vmid, migrate-set-capabilities, 
capabilities = [$capabilities]); };
}
-
-   # fixme: how do we handle that on migration?
-
-   if (!defined($conf-{balloon}) || $conf-{balloon}) {
-   vm_mon_cmd_nocheck($vmid, balloon, value = 
$conf-{balloon}*1024*1024) 
-   if $conf-{balloon};
-   vm_mon_cmd_nocheck($vmid, 'qom-set', 
-  path = machine/peripheral/balloon0, 
-  property = stats-polling-interval, 
-  value = 2);
+   else{
+
+   if (!defined($conf-{balloon}) || $conf-{balloon}) {
+   vm_mon_cmd_nocheck($vmid, balloon, value = 
$conf-{balloon}*1024*1024) 
+   if $conf-{balloon};
+   vm_mon_cmd_nocheck($vmid, 'qom-set', 
+   path = machine/peripheral/balloon0, 
+   property = stats-polling-interval, 
+   value = 2);
+   }
}
 });
 }
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] skip ballooning during migration

2013-01-04 Thread Alexandre Derumier
avoid to add memory load overhead on the migration with balloon inflate/deflate

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/AutoBalloon.pm |1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/AutoBalloon.pm b/PVE/AutoBalloon.pm
index b461795..62844de 100644
--- a/PVE/AutoBalloon.pm
+++ b/PVE/AutoBalloon.pm
@@ -93,6 +93,7 @@ sub compute_alg1 {
my $d = $vmstatus-{$vmid};
next if !$d-{balloon}; # skip if balloon driver not running
next if !$d-{balloon_min}; # skip if balloon value not set in config
+   next if $d-{lock}   $d-{lock} eq 'migrate'; 
next if defined($d-{shares})  
($d-{shares} == 0); # skip if shares set to zero
 
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Storage migration: 1.0rc1

2013-01-06 Thread Alexandre DERUMIER
when I integrate the copy/clone patches from Alexandre.

I'll resend patches this week (after wednesday, I'm a bit busy at work),rebased 
on last git, they are more or less finished now.

Regards,

Alexandre



- Mail original - 

De: Dietmar Maurer diet...@proxmox.com 
À: Michael Rasmussen m...@datanom.net, pve-devel@pve.proxmox.com 
Envoyé: Lundi 7 Janvier 2013 06:36:11 
Objet: Re: [pve-devel] Storage migration: 1.0rc1 

 I have finally completed this module:-) 

Many thanks for the patch. I am not sure how that fits into my picture, but I 
will 
take a closer lock when I integrate the copy/clone patches from Alexandre. 

- Dietmar 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Storage migration: 1.0rc1

2013-01-07 Thread Alexandre DERUMIER
How do you initiate qmp from perl? (It uses a REST api, does it not?) 
PVE::QemuServer::vm_mon_cmd(vmid, qmpcommand, arguments);

exemple:
PVE::QemuServer::vm_mon_cmd($vmid, block_resize, device = $deviceid, size = 
int($size));



live migration code is in:
PVE/QemuMigrate.pm


you can find qmp command doc here:
http://git.qemu.org/?p=qemu.git;a=blob_plain;f=qmp-commands.hx;hb=HEAD


- Mail original - 

De: Michael Rasmussen m...@datanom.net 
À: pve-devel@pve.proxmox.com 
Envoyé: Lundi 7 Janvier 2013 17:58:56 
Objet: Re: [pve-devel] Storage migration: 1.0rc1 

On Mon, 7 Jan 2013 07:02:34 + 
Dietmar Maurer diet...@proxmox.com wrote: 

 
 I can't see that in your patch (Such thing need to use the qmp block migrate 
 commands). 
 
 
I was not aware of that this was available before 1.4. 

How do you initiate qmp from perl? (It uses a REST api, does it not?) 
Can you point me to an example in some of the promox perl modules? 

-- 
Hilsen/Regards 
Michael Rasmussen 

Get my public GnuPG keys: 
michael at rasmussen dot cc 
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xD3C9A00E 
mir at datanom dot net 
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE501F51C 
mir at miras dot org 
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE3E80917 
-- 
Laugh when you can; cry when you must. 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration (setting in datacenter.cfg)

2013-01-07 Thread Alexandre DERUMIER
Again, you can run more than one ssh connection if you want to migrate dozens 
of machines. 

Hi, Maybe it'll be cpu limited with a lot a parallel migration ?



- Mail original - 

De: Dietmar Maurer diet...@proxmox.com 
À: Eric Blevins e...@netwalk.com, pve-devel@pve.proxmox.com 
Envoyé: Lundi 7 Janvier 2013 17:49:38 
Objet: Re: [pve-devel] [PATCH] qemu-server: add support for unsecure migration 
(setting in datacenter.cfg) 

  So you can transfer an VM with 4GB within 10 seconds? IMHO not that bad. 
 
 But 4 seconds is even better, when you have dozens of machines to migrate 
 seconds add up to minutes. 
 Besides, networks will get faster over time but SSH will still have 
 limitations 
 due to its design. 

Again, you can run more than one ssh connection if you want to migrate dozens 
of machines. 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] bump rbd to stable bobtail (0.56.1)

2013-01-07 Thread Alexandre Derumier
require to update proxmox repository with new ceph packages:

http://ceph.com/debian-bobtail/pool/main/c/ceph/librbd-dev_0.56.1-1~bpo60+1_amd64.deb
http://ceph.com/debian-bobtail/pool/main/c/ceph/librbd1_0.56.1-1~bpo60+1_amd64.deb
http://ceph.com/debian-bobtail/pool/main/c/ceph/librados-dev_0.56.1-1~bpo60+1_amd64.deb
http://ceph.com/debian-bobtail/pool/main/c/ceph/librados2_0.56.1-1~bpo60+1_amd64.deb
http://ceph.com/debian-bobtail/pool/main/c/ceph/ceph-common_0.56.1-1~bpo60+1_amd64.deb

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 02/13] add is_template sub

2013-01-08 Thread Alexandre Derumier
return 1 if current is template
return 2 if a snapshot is a template

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |   16 
 1 file changed, 16 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 1e70c3a..21e4827 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4489,4 +4489,20 @@ sub template_delete {
 
 }
 
+sub is_template {
+my ($conf, $snapname, $checkall) = @_;
+
+return 2 if $snapname  
defined($conf-{snapshots}-{$snapname}-{template});
+return 1 if ($conf-{template}  !$snapname);
+
+if($checkall){
+   my $snaphash = $conf-{snapshots} || {};
+   foreach my $snapname (keys %$snaphash) {
+   return 2 if defined($snaphash-{$snapname}-{template});
+   }
+}
+
+return undef;
+}
+
 1;
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 04/13] forbid vm_destroy if templates exists

2013-01-08 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 8955b7c..59effb6 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3266,6 +3266,8 @@ sub vm_destroy {
 
my $conf = load_config($vmid);
 
+   die you can't delete a vm if template exist if is_template($conf, 
undef, 1);
+
check_lock($conf) if !$skiplock;
 
if (!check_running($vmid)) {
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 01/13] add template_create and template_delete feature

2013-01-08 Thread Alexandre Derumier
A template is protected config + volumes.
Can be current config or snapshot.

template_create:

-we lock volume if storage need it for clone (files or rbd)
-then we add template:1 to the config (current or snapshot)

template_delete:

- We need to check if clones of volumes exist before remove the template
- We unlock the storage
- we remove template:1 from the config (current of snapshot)

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/API2/Qemu.pm  |   78 +
 PVE/QemuServer.pm |   78 +
 qm|2 ++
 3 files changed, 158 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 487fde2..91cf569 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2298,4 +2298,82 @@ __PACKAGE__-register_method({
return $rpcenv-fork_worker('qmdelsnapshot', $vmid, $authuser, 
$realcmd);
 }});
 
+__PACKAGE__-register_method({
+name = 'template',
+path = '{vmid}/template',
+method = 'POST',
+protected = 1,
+proxyto = 'node',
+description = Create a Template.,
+permissions = {
+   check = ['perm', '/vms/{vmid}', [ 'VM.Template' ]],
+},
+parameters = {
+   additionalProperties = 0,
+   properties = {
+   node = get_standard_option('pve-node'),
+   vmid = get_standard_option('pve-vmid'),
+   snapname = get_standard_option('pve-snapshot-name', {
+   optional = 1,
+   }),
+   delete = {
+   optional = 1,
+   type = 'boolean',
+   },
+   },
+},
+returns = { type = 'null'},
+code = sub {
+   my ($param) = @_;
+
+   my $rpcenv = PVE::RPCEnvironment::get();
+
+   my $authuser = $rpcenv-get_user();
+
+   my $node = extract_param($param, 'node');
+
+   my $vmid = extract_param($param, 'vmid');
+
+   my $snapname = extract_param($param, 'snapname');
+
+   my $delete = extract_param($param, 'delete');
+
+   my $updatefn =  sub {
+
+   my $conf = PVE::QemuServer::load_config($vmid);
+
+   PVE::QemuServer::check_lock($conf);
+
+   if($delete){
+
+   PVE::QemuServer::template_delete($vmid, $conf, $snapname);
+
+   if($snapname){
+   die snapshot '$snapname' don't exist\n if 
!defined($conf-{snapshots}-{$snapname});
+   delete $conf-{snapshots}-{$snapname}-{template};
+   }else{
+   delete $conf-{template};
+   }
+
+   }else{
+
+   PVE::QemuServer::template_create($vmid, $conf, $snapname);
+
+   if($snapname){
+   die snapshot '$snapname' don't exist\n if 
!defined($conf-{snapshots}-{$snapname});
+   $conf-{snapshots}-{$snapname}-{template} = 1;
+   }else{
+   $conf-{template} = 1;
+   }
+   }
+
+   PVE::QemuServer::update_config_nolock($vmid, $conf, 1);
+   };
+
+   PVE::QemuServer::lock_config($vmid, $updatefn);
+   return undef;
+}});
+
+
+
 1;
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a34c89f..1e70c3a 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -181,6 +181,12 @@ my $confdesc = {
description = Lock/unlock the VM.,
enum = [qw(migrate backup snapshot rollback)],
 },
+template = {
+   optional = 1,
+   type = 'boolean',
+   description = Current config or snapshot is a template,
+   default = 0,
+},
 cpulimit = {
optional = 1,
type = 'integer',
@@ -4411,4 +4417,76 @@ sub has_feature{
 
 return 1 if !$err;
 }
+
+sub template_create {
+my ($vmid, $conf, $snapname) = @_;
+
+die snapshot $snapname don't exist if ($snapname  
(!defined($conf-{snapshots}-{$snapname})));
+
+die you can't create a template without snapshot on a running vm if 
check_running($vmid)  !$snapname;
+
+die you can't create a template from a snapshot with a vmstate if 
$snapname  $conf-{snapshots}-{$snapname}-{vmstate};
+
+$conf = $conf-{snapshots}-{$snapname} if $snapname;
+
+my $storecfg = PVE::Storage::config();
+
+foreach_drive($conf, sub {
+   my ($ds, $drive) = @_;
+
+   return if drive_is_cdrom($drive);
+
+   my $volid = $drive-{file};
+   my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid, 1);
+   if ($storeid) {
+my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
+PVE::Storage::volume_protect($storecfg, $volid, $snapname, 1);
+   }
+});
+
+}
+
+sub template_delete {
+my ($vmid, $conf, $snapname) = @_;
+
+$conf = $conf-{snapshots}-{$snapname} if $snapname;
+my $storecfg = PVE::Storage::config();
+
+#search if clone child of template disks exist
+foreach_drive($conf, sub {
+   my ($ds, $drive) = @_;
+
+   return if drive_is_cdrom($drive);
+
+   my

[pve-devel] [PATCH 03/13] forbid snapshot delete if it's a template

2013-01-08 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |4 
 1 file changed, 4 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 21e4827..8955b7c 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2908,6 +2908,10 @@ sub qemu_volume_snapshot {
 sub qemu_volume_snapshot_delete {
 my ($vmid, $deviceid, $storecfg, $volid, $snap) = @_;
 
+my $conf = PVE::QemuServer::load_config($vmid);
+
+die you can't delete a snapshot if it's a template if is_template($conf, 
$snap);
+
 my $running = check_running($vmid);
 
 return if !PVE::Storage::volume_snapshot_delete($storecfg, $volid, $snap, 
$running);
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] qemu-server : template clone final V1

2013-01-08 Thread Alexandre Derumier
Hi All,
This is the final v1 of my template  clone patches series.

I have done extensive tests, with differents storage 
(raw,qcow2,rbd,sheepdog,...)
All seem to works fine without bug

Please test and review !



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 06/13] forbid vm_start if current config is a template.

2013-01-08 Thread Alexandre Derumier
if files (raw,qcow2) are a template, we forbid vm_start.

note : the readonly protection do already the job, but we need a clear message 
for users

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 0835652..8d6979a 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2939,6 +2939,8 @@ sub vm_start {
 lock_config($vmid, sub {
my $conf = load_config($vmid, $migratedfrom);
 
+   die you can't start a vm if current is a template if 
is_template($conf);
+
check_lock($conf) if !$skiplock;
 
die VM $vmid already running\n if check_running($vmid, undef, 
$migratedfrom);
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 05/13] forbid rollback if current config is a template.

2013-01-08 Thread Alexandre Derumier
if a qcow2 current is a template, we can't rollback to a previous snapshot.

(note that file readonly protection do already the job, but we need a clear 
error message for user)

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 59effb6..0835652 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4174,6 +4174,8 @@ sub snapshot_rollback {
 
my $conf = load_config($vmid);
 
+   die you can't rollback if current is a template if is_template($conf);
+
$snap = $conf-{snapshots}-{$snapname};
 
die snapshot '$snapname' does not exist\n if !defined($snap); 
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 07/13] forbid offline migration of a non shared volume if it's a clone

2013-01-08 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuMigrate.pm |6 ++
 1 file changed, 6 insertions(+)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index de2ee57..2b79025 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -245,6 +245,12 @@ sub sync_disks {
 
die can't migrate '$volid' - storagy type '$scfg-{type}' not 
supported\n
if $scfg-{type} ne 'dir';
+
+   #if file, check if a backing file exist
+   if(($scfg-{type} eq 'dir')  (!$sharedvm)){
+   my (undef, undef, undef, $parent) = 
PVE::Storage::volume_size_info($self-{storecfg}, $volid, 1);
+   die can't migrate '$volid' as it's a clone of '$parent';
+   }
}
 
foreach my $volid (keys %$volhash) {
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 09/13] forbid snapshot create if current it's a template

2013-01-08 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |4 
 1 file changed, 4 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 8d6979a..5e1f319 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2895,6 +2895,10 @@ sub qemu_block_resize {
 sub qemu_volume_snapshot {
 my ($vmid, $deviceid, $storecfg, $volid, $snap) = @_;
 
+my $conf = PVE::QemuServer::load_config($vmid);
+
+die you can't take a snapshot if it's a template if is_template($conf);
+
 my $running = check_running($vmid);
 
 return if !PVE::Storage::volume_snapshot($storecfg, $volid, $snap, 
$running);
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 10/13] add clone_disks

2013-01-08 Thread Alexandre Derumier
linked clone

qm create vmid --clonefrom vmidsrc [--snapname snap] [--clonemode clone]

copy clone : dest storage = source storage
--
qm create vmid --clonefrom vmidsrc [--snapname snap] --clonemode copy

copy clone : dest storage != source storage
--
qm create vmid --clonefrom vmidsrc [--snapname snap] --clonemode copy --virtio0 
storeid:[fmt]   (--virtio0 local:raw --virtio1 rbdstorage: 
--virtio2:nfsstorage:qcow2)

others config params can be add
---

qm create vmid --clonefrom vmidsrc [--snapname snap] [--clonemode clone] 
--memory 2048 --name newvmname

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/API2/Qemu.pm |  164 +++---
 1 file changed, 156 insertions(+), 8 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 63dbd33..c8cb9f3 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -79,6 +79,7 @@ my $create_disks = sub {
die no storage ID specified (and no default storage)\n if 
!$storeid;
my $defformat = PVE::Storage::storage_default_format($storecfg, 
$storeid);
my $fmt = $disk-{format} || $defformat;
+
my $volid = PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid,
  $fmt, undef, $size*1024*1024);
$disk-{file} = $volid;
@@ -133,6 +134,83 @@ my $create_disks = sub {
 return $vollist;
 };
 
+my $clone_disks = sub {
+my ($rpcenv, $authuser, $conf, $storecfg, $vmid, $pool, $settings, $snap, 
$mode) = @_;
+
+my $vollist = [];
+my $voliddst = undef;
+
+my $res = {};
+PVE::QemuServer::foreach_drive($conf, sub {
+   my ($ds, $disk) = @_;
+
+   my $volid = $disk-{file};
+   if (PVE::QemuServer::drive_is_cdrom($disk)) {
+   $res-{$ds} = PVE::QemuServer::print_drive($vmid, $disk);
+   } else{
+
+   if($mode eq 'clone'){
+   $voliddst = PVE::Storage::volume_clone($storecfg, $volid, 
$snap, $vmid);
+   push @$vollist, $voliddst;
+
+   }elsif($mode eq 'copy'){
+
+   my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid, 
1);
+   die no storage ID specified (and no default storage)\n if 
!$storeid;
+
+   my $fmt = undef;
+   if ($volname =~ m/\.(raw|qcow2|vmdk)$/){
+   $fmt = $1;
+   }
+
+   #target storage is different ? (ex: -virtio0:storeid:fmt)
+   if($settings-{$ds}  $settings-{$ds} =~ 
m/^(\S+):(raw|qcow2|vmdk)?$/){
+   ($storeid, $fmt) = ($1, $2);
+   }
+
+   $rpcenv-check($authuser, /storage/$storeid, 
['Datastore.AllocateSpace']);
+
+   PVE::Storage::activate_volumes($storecfg, [ $volid ]);
+
+   my ($size) = PVE::Storage::volume_size_info($storecfg, $volid, 
1);
+
+   $voliddst = PVE::Storage::vdisk_alloc($storecfg, $storeid, 
$vmid,
+  $fmt, undef, ($size/1024));
+   push @$vollist, $voliddst;
+
+   PVE::Storage::activate_volumes($storecfg, [ $voliddst ]);
+
+   #copy from source
+   PVE::QemuServer::qemu_img_convert($volid, $voliddst, $snap);
+
+   }
+
+   $disk-{file} = $voliddst;
+   $disk-{size} = PVE::Storage::volume_size_info($storecfg, 
$voliddst, 1);
+
+   delete $disk-{format}; # no longer needed
+   $res-{$ds} = PVE::QemuServer::print_drive($vmid, $disk);
+   }
+});
+
+# free allocated images on error
+if (my $err = $@) {
+   syslog('err', VM $vmid creating disks failed);
+   foreach my $volid (@$vollist) {
+   eval { PVE::Storage::vdisk_free($storecfg, $volid); };
+   warn $@ if $@;
+   }
+   die $err;
+}
+
+# modify vm config if everything went well
+foreach my $ds (keys %$res) {
+   $conf-{$ds} = $res-{$ds};
+}
+
+return $vollist;
+};
+
 my $check_vm_modify_config_perm = sub {
 my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_;
 
@@ -257,6 +335,21 @@ __PACKAGE__-register_method({
type = 'string', format = 'pve-poolid',
description = Add the VM to the specified pool.,
},
+clonefrom = get_standard_option('pve-vmid', {
+description = Template Vmid.,
+optional = 1,
+}),
+   clonemode = {
+   description = clone|copy,
+   type = 'string',
+   enum = [ 'clone', 'copy' ],
+   optional = 1,
+   },
+snapname = get_standard_option('pve-snapshot-name', {
+description = Template Snapshot Name.,
+optional = 1

[pve-devel] [PATCH 08/13] forbid configuration update of a template

2013-01-08 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/API2/Qemu.pm |2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 91cf569..63dbd33 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -873,6 +873,8 @@ __PACKAGE__-register_method({
die checksum missmatch (file change by other user?)\n
if $digest  $digest ne $conf-{digest};
 
+   die You can't change the configuration of a template if 
$conf-{template};
+
PVE::QemuServer::check_lock($conf) if !$skiplock;
 
if ($param-{memory} || defined($param-{balloon})) {
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 12/13] vmstatus : return template if vm is a template

2013-01-08 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 5e1f319..1adff54 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1987,6 +1987,8 @@ sub vmstatus {
$d-{diskread} = 0;
$d-{diskwrite} = 0;
 
+$d-{template} = is_template($conf, undef, 1);
+
$res-{$vmid} = $d;
 }
 
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 13/13] add qemu_img_convert

2013-01-08 Thread Alexandre Derumier
also work with snapshot as source for qcow2,rbd,sheepdog.

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |   51 +++
 1 file changed, 51 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 1adff54..643080f 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4521,4 +4521,55 @@ sub is_template {
 return undef;
 }
 
+sub qemu_img_convert {
+    my ($src_volid, $dst_volid, $snapname) = @_;
+
+    my $storecfg = PVE::Storage::config();
+    my ($src_storeid, $src_volname) = PVE::Storage::parse_volume_id($src_volid, 1);
+    my ($dst_storeid, $dst_volname) = PVE::Storage::parse_volume_id($dst_volid, 1);
+
+    if ($src_storeid && $dst_storeid) {
+	my $src_scfg = PVE::Storage::storage_config($storecfg, $src_storeid);
+	my $dst_scfg = PVE::Storage::storage_config($storecfg, $dst_storeid);
+
+	my $src_format = qemu_img_format($src_scfg, $src_volname);
+	my $dst_format = qemu_img_format($dst_scfg, $dst_volname);
+
+	my $src_path = PVE::Storage::path($storecfg, $src_volid, $snapname);
+	my $dst_path = PVE::Storage::path($storecfg, $dst_volid);
+
+	my $cmd = [];
+	push @$cmd, '/usr/bin/qemu-img', 'convert', '-t', 'writeback', '-p', '-C';
+	push @$cmd, '-s', $snapname if ($snapname && $src_format eq "qcow2");
+	push @$cmd, '-f', $src_format, '-O', $dst_format, $src_path, $dst_path;
+
+	my $parser = sub {
+	    my $line = shift;
+	    print $line."\n";
+	};
+
+	eval { run_command($cmd, timeout => undef, errfunc => sub {}, outfunc => $parser); };
+	my $err = $@;
+	die "copy failed: $err" if $err;
+    }
+}
+
+sub qemu_img_format {
+    my ($scfg, $volname) = @_;
+
+    if ($scfg->{path} && $volname =~ m/\.(raw|qcow2|qed|vmdk)$/){
+	return $1;
+    }
+    elsif ($scfg->{type} eq 'nexenta' || $scfg->{type} eq 'iscsidirect'){
+	return "iscsi";
+    }
+    elsif ($scfg->{type} eq 'lvm' || $scfg->{type} eq 'iscsi'){
+	return "host_device";
+    }
+    #sheepdog,rbd,or other qemu block driver
+    else{
+	return $scfg->{type};
+    }
+}
+
 1;
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 11/13] api2: snapshot_list : return template flag

2013-01-08 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/API2/Qemu.pm |2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index c8cb9f3..f5a48ce 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2153,12 +2153,14 @@ __PACKAGE__->register_method({
	    };
	    $item->{parent} = $d->{parent} if $d->{parent};
	    $item->{snapstate} = $d->{snapstate} if $d->{snapstate};
+	    $item->{template} = $d->{template} if $d->{template};
	    push @$res, $item;
	}
 
	my $running = PVE::QemuServer::check_running($vmid, 1) ? 1 : 0;
	my $current = { name => 'current', digest => $conf->{digest}, running => $running };
	$current->{parent} = $conf->{parent} if $conf->{parent};
+	$current->{template} = $conf->{template} if $conf->{template};
 
push @$res, $current;
 
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 01/31] storage: add volume_protect

2013-01-08 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage.pm |   15 +++
 1 file changed, 15 insertions(+)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 6a274c6..f8cf5ad 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -198,6 +198,21 @@ sub volume_has_feature {
 }
 }
 
+sub volume_protect {
+my ($cfg, $volid, $snap, $read_only) = @_;
+
+my ($storeid, $volname) = parse_volume_id($volid, 1);
+    if ($storeid) {
+        my $scfg = storage_config($cfg, $storeid);
+        my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+        return $plugin->volume_protect($scfg, $storeid, $volname, $snap, $read_only);
+    } elsif ($volid =~ m|^(/.+)$| && -e $volid) {
+        die "protect a device is not possible";
+    } else {
+        die "can't protect";
+    }
+}
+
 sub get_image_dir {
 my ($cfg, $storeid, $vmid) = @_;
 
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 05/31] nexenta : add volume_protect

2013-01-08 Thread Alexandre Derumier
return undef, as Nexenta has an implicit protection system when creating clones

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/NexentaPlugin.pm |6 ++
 1 file changed, 6 insertions(+)

diff --git a/PVE/Storage/NexentaPlugin.pm b/PVE/Storage/NexentaPlugin.pm
index 9622548..08accb1 100644
--- a/PVE/Storage/NexentaPlugin.pm
+++ b/PVE/Storage/NexentaPlugin.pm
@@ -394,4 +394,10 @@ sub volume_has_feature {
 return undef;
 }
 
+sub volume_protect {
+my ($class, $scfg, $storeid, $volname, $snap, $read_only) = @_;
+#nexenta doesn't need to protect
+return undef;
+}
+
 1;
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 03/31] rbd: add volume_protect

2013-01-08 Thread Alexandre Derumier
We use the 'rbd snap protect' command to protect a snapshot.
This is mandatory before cloning a snapshot.

The rbd volume needs to be in format V2.
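
For illustration, when the snapshot is not protected yet the plugin ends up running
roughly the following (connection options omitted; image and snapshot names are
made-up examples):

    # protect the base snapshot so it can be cloned but not deleted
    rbd snap protect vm-100-disk-1 --snap base
    # and later, to allow deletion again
    rbd snap unprotect vm-100-disk-1 --snap base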

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/RBDPlugin.pm |   42 +-
 1 file changed, 37 insertions(+), 5 deletions(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 948be0d..b08fe54 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -21,7 +21,6 @@ my $rbd_cmd = sub {
   '--auth_supported', $scfg->{authsupported}, $op];
 
 push @$cmd, @options if scalar(@options);
-
 return $cmd;
 };
 
@@ -41,6 +40,7 @@ my $rados_cmd = sub {
 return $cmd;
 };
 
+
 sub rbd_ls {
 my ($scfg, $storeid) = @_;
 
@@ -74,19 +74,32 @@ sub rbd_ls {
 }
 
 sub rbd_volume_info {
-my ($scfg, $storeid, $volname) = @_;
+my ($scfg, $storeid, $volname, $snap) = @_;
+
+my $cmd = undef;
+
+if($snap){
+   $cmd = $rbd_cmd($scfg, $storeid, 'info', $volname, '--snap', $snap);
+}else{
+   $cmd = $rbd_cmd($scfg, $storeid, 'info', $volname);
+}
+
 
-my $cmd = $rbd_cmd($scfg, $storeid, 'info', $volname);
 my $size = undef;
 my $parent = undef;
+my $format = undef;
+my $protected = undef;
 
 my $parser = sub {
my $line = shift;
-
if ($line =~ m/size (\d+) MB in (\d+) objects/) {
$size = $1;
} elsif ($line =~ m/parent:\s(\S+)\/(\S+)/) {
$parent = $2;
+   } elsif ($line =~ m/format:\s(\d+)/) {
+   $format = $1;
+   } elsif ($line =~ m/protected:\s(\S+)/) {
+	    $protected = 1 if $1 eq "True";
}
 };
 
@@ -94,7 +107,7 @@ sub rbd_volume_info {
 
 $size = $size*1024*1024 if $size;
 
-return ($size, $parent);
+return ($size, $parent, $format, $protected);
 }
 
 sub addslashes {
@@ -365,4 +378,23 @@ sub volume_has_feature {
 return undef;
 }
 
+sub volume_protect {
+my ($class, $scfg, $storeid, $volname, $snap, $read_only) = @_;
+
+return if !$snap;
+
+    my (undef, undef, $format, $protected) = rbd_volume_info($scfg, $storeid, $volname, $snap);
+
+    die "rbd image must be at format V2" if $format ne 2;
+
+    my $action = $read_only ? "protect" : "unprotect";
+
+    if (($protected && !$read_only) || (!$protected && $read_only)){
+        my $cmd = $rbd_cmd($scfg, $storeid, 'snap', $action, $volname, '--snap', $snap);
+        run_command($cmd, errmsg => "rbd protect $volname snap $snap' error", errfunc => sub {});
+}
+}
+
+
+
 1;
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 07/31] iscsidirect : add volume_protect

2013-01-08 Thread Alexandre Derumier
we can't protect an iscsi volume

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/ISCSIDirectPlugin.pm |5 +
 1 file changed, 5 insertions(+)

diff --git a/PVE/Storage/ISCSIDirectPlugin.pm b/PVE/Storage/ISCSIDirectPlugin.pm
index b648fd5..95faa8a 100644
--- a/PVE/Storage/ISCSIDirectPlugin.pm
+++ b/PVE/Storage/ISCSIDirectPlugin.pm
@@ -214,4 +214,9 @@ sub volume_has_feature {
 return undef;
 }
 
+sub volume_protect {
+my ($class, $scfg, $storeid, $volname, $snap, $read_only) = @_;
+    die "you can't protect an iscsi device";
+}
+
 1;
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 02/31] Plugin : add volume_protect

2013-01-08 Thread Alexandre Derumier
(and also fix the backing file regex parsing)

For files, we protect the volume file with chattr, so it can still be read
but can no longer be deleted or moved.
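
As a rough illustration of what the immutable flag does (the path is a made-up
example):

    chattr +i /var/lib/vz/images/100/vm-100-disk-1.qcow2   # protect: no unlink/rename/write possible
    chattr -i /var/lib/vz/images/100/vm-100-disk-1.qcow2   # drop the protection again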

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/Plugin.pm |   20 +---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 25e012c..6f1cac9 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -465,11 +465,10 @@ sub file_size_info {
 eval {
run_command($cmd, timeout = $timeout, outfunc = sub {
my $line = shift;
-
if ($line =~ m/^file format:\s+(\S+)\s*$/) {
$format = $1;
-   } elsif ($line =~ m/^backing file:\s(\S+)\s/) {
-   $parent = $1;
+   } elsif ($line =~ m!^backing file:\s(\S+)/(\d+)/(\S+)$!) {
+	    $parent = $2."/".$3;
} elsif ($line =~ m/^virtual size:\s\S+\s+\((\d+)\s+bytes\)$/) {
$size = int($1);
} elsif ($line =~ m/^disk size:\s+(\d+(.\d+)?)([KMGT])\s*$/) {
@@ -576,6 +575,21 @@ sub volume_has_feature {
 return undef;
 }
 
+sub volume_protect {
+    my ($class, $scfg, $storeid, $volname, $snap, $read_only) = @_;
+
+    return if $snap;
+
+    my $path = $class->path($scfg, $volname);
+
+    my $action = $read_only ? "+i" : "-i";
+
+    my $cmd = ['/usr/bin/chattr', $action, $path];
+    run_command($cmd);
+
+    return undef;
+}
+
 sub list_images {
 my ($class, $storeid, $scfg, $vmid, $vollist, $cache) = @_;
 
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 09/31] plugin : add find_free_volname

2013-01-08 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/Plugin.pm |   45 -
 1 file changed, 28 insertions(+), 17 deletions(-)

diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 6f1cac9..9bdcf0d 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -400,29 +400,16 @@ sub path {
 sub alloc_image {
 my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
 
-    my $imagedir = $class->get_subdir($scfg, 'images');
-    $imagedir .= "/$vmid";
-
-    mkpath $imagedir;
-
-    if (!$name) {
-	for (my $i = 1; $i < 100; $i++) {
-	    my @gr = <$imagedir/vm-$vmid-disk-$i.*>;
-	    if (!scalar(@gr)) {
-		$name = "vm-$vmid-disk-$i.$fmt";
-		last;
-	    }
-	}
-    }
-
-    die "unable to allocate an image name for VM $vmid in storage '$storeid'\n"
-	if !$name;
+    $name = $class->find_free_volname($storeid, $scfg, $vmid, $fmt);
 
     my (undef, $tmpfmt) = parse_name_dir($name);
 
     die "illegal name '$name' - wrong extension for format ('$tmpfmt != '$fmt')\n"
	if $tmpfmt ne $fmt;
 
+    my $imagedir = $class->get_subdir($scfg, 'images');
+    $imagedir .= "/$vmid";
+
     my $path = "$imagedir/$name";
 
     die "disk image '$path' already exists\n" if -e $path;
@@ -438,6 +425,30 @@ sub alloc_image {
     return "$vmid/$name";
 }
 
+sub find_free_volname {
+    my ($class, $storeid, $scfg, $vmid, $fmt) = @_;
+
+    my $name = undef;
+
+    my $imagedir = $class->get_subdir($scfg, 'images');
+    $imagedir .= "/$vmid";
+
+    mkpath $imagedir;
+
+    for (my $i = 1; $i < 100; $i++) {
+	my @gr = <$imagedir/vm-$vmid-disk-$i.*>;
+	if (!scalar(@gr)) {
+	    $name = "vm-$vmid-disk-$i.$fmt";
+	    last;
+	}
+    }
+
+    die "unable to allocate an image name for VM $vmid in storage '$storeid'\n"
+	if !$name;
+
+    return $name;
+}
+
 sub free_image {
 my ($class, $storeid, $scfg, $volname) = @_;
 
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 10/31] rbd : add find_free_volname

2013-01-08 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/RBDPlugin.pm |   31 +++
 1 file changed, 19 insertions(+), 12 deletions(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index b08fe54..003585a 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -208,24 +208,31 @@ sub alloc_image {
     die "illegal name '$name' - sould be 'vm-$vmid-*'\n"
	if  $name && $name !~ m/^vm-$vmid-/;
 
-    if (!$name) {
-	my $rdb = rbd_ls($scfg, $storeid);
-
-	for (my $i = 1; $i < 100; $i++) {
-	    my $tn = "vm-$vmid-disk-$i";
-	    if (!defined ($rdb->{$scfg->{pool}}->{$tn})) {
-		$name = $tn;
-		last;
-	    }
+    $name = $class->find_free_volname($storeid, $scfg, $vmid, $fmt);
+
+    my $cmd = $rbd_cmd($scfg, $storeid, 'create', '--format' , 2, '--size', ($size/1024), $name);
+    run_command($cmd, errmsg => "rbd create $name' error", errfunc => sub {});
+
+    return $name;
+}
+
+sub find_free_volname {
+    my ($class, $storeid, $scfg, $vmid, $fmt) = @_;
+
+    my $name = undef;
+    my $rdb = rbd_ls($scfg, $storeid);
+
+    for (my $i = 1; $i < 100; $i++) {
+	my $tn = "vm-$vmid-disk-$i";
+	if (!defined ($rdb->{$scfg->{pool}}->{$tn})) {
+	    $name = $tn;
+	    last;
	}
     }
 
     die "unable to allocate an image name for VM $vmid in storage '$storeid'\n"
	if !$name;
 
-    my $cmd = $rbd_cmd($scfg, $storeid, 'create', '--format' , 2, '--size', ($size/1024), $name);
-    run_command($cmd, errmsg => "rbd create $name' error", errfunc => sub {});
-
 return $name;
 }
 
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 11/31] sheepdog : add find_free_volname

2013-01-08 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/SheepdogPlugin.pm |   34 +-
 1 file changed, 21 insertions(+), 13 deletions(-)

diff --git a/PVE/Storage/SheepdogPlugin.pm b/PVE/Storage/SheepdogPlugin.pm
index be1549d..132b72c 100644
--- a/PVE/Storage/SheepdogPlugin.pm
+++ b/PVE/Storage/SheepdogPlugin.pm
@@ -151,28 +151,36 @@ sub alloc_image {
     die "illegal name '$name' - sould be 'vm-$vmid-*'\n"
	if  $name && $name !~ m/^vm-$vmid-/;
 
-    if (!$name) {
-	my $sheepdog = sheepdog_ls($scfg, $storeid);
-
-	for (my $i = 1; $i < 100; $i++) {
-	    my $tn = "vm-$vmid-disk-$i";
-	    if (!defined ($sheepdog->{$storeid}->{$tn})) {
-		$name = $tn;
-		last;
-	    }
+    $name = $class->find_free_volname($storeid, $scfg, $vmid);
+
+    my $cmd = $collie_cmd($scfg, 'vdi', 'create', $name , "${size}KB");
+
+    run_command($cmd, errmsg => "sheepdog create $name' error");
+
+    return $name;
+}
+
+sub find_free_volname {
+    my ($class, $storeid, $scfg, $vmid, $fmt) = @_;
+
+    my $name = undef;
+    my $sheepdog = sheepdog_ls($scfg, $storeid);
+
+    for (my $i = 1; $i < 100; $i++) {
+	my $tn = "vm-$vmid-disk-$i";
+	if (!defined ($sheepdog->{$storeid}->{$tn})) {
+	    $name = $tn;
+	    last;
	}
     }
 
     die "unable to allocate an image name for VM $vmid in storage '$storeid'\n"
	if !$name;
 
-    my $cmd = $collie_cmd($scfg, 'vdi', 'create', $name , "${size}KB");
-
-    run_command($cmd, errmsg => "sheepdog create $name' error");
-
 return $name;
 }
 
+
 sub free_image {
 my ($class, $storeid, $scfg, $volname) = @_;
 
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 12/31] nexenta: add find_free_volname

2013-01-08 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/NexentaPlugin.pm |   30 +++---
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/PVE/Storage/NexentaPlugin.pm b/PVE/Storage/NexentaPlugin.pm
index 08accb1..0c9c6f5 100644
--- a/PVE/Storage/NexentaPlugin.pm
+++ b/PVE/Storage/NexentaPlugin.pm
@@ -214,27 +214,35 @@ sub alloc_image {
     die "illegal name '$name' - sould be 'vm-$vmid-*'\n"
	if $name && $name !~ m/^vm-$vmid-/;
 
+    $name = $class->find_free_volname($storeid, $scfg, $vmid, $fmt);
+
+nexenta_create_zvol($scfg, $name, $size);
+nexenta_create_lu($scfg, $name);
+nexenta_add_lun_mapping_entry($scfg, $name);
+
+return $name;
+}
+
+sub find_free_volname {
+my ($class, $storeid, $scfg, $vmid, $fmt) = @_;
+
+my $name = undef;
     my $nexentapool = $scfg->{'pool'};
 
-    if (!$name) {
-	my $volumes = nexenta_list_zvol($scfg);
+    my $volumes = nexenta_list_zvol($scfg);
	die "unable de get zvol list" if !$volumes;
 
-	for (my $i = 1; $i < 100; $i++) {
-	    my $tn = "vm-$vmid-disk-$i";
-	    if (!defined ($volumes->{$nexentapool}->{$tn})) {
-		$name = $tn;
-		last;
-	    }
+    for (my $i = 1; $i < 100; $i++) {
+	my $tn = "vm-$vmid-disk-$i";
+	if (!defined ($volumes->{$nexentapool}->{$tn})) {
+	    $name = $tn;
+	    last;
	}
     }
 
     die "unable to allocate an image name for VM $vmid in storage '$storeid'\n"
	if !$name;
 
-nexenta_create_zvol($scfg, $name, $size);
-nexenta_create_lu($scfg, $name);
-nexenta_add_lun_mapping_entry($scfg, $name);
 
 return $name;
 }
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 16/31] sheepdog : add volume_clone

2013-01-08 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/SheepdogPlugin.pm |   10 ++
 1 file changed, 10 insertions(+)

diff --git a/PVE/Storage/SheepdogPlugin.pm b/PVE/Storage/SheepdogPlugin.pm
index 132b72c..69918b9 100644
--- a/PVE/Storage/SheepdogPlugin.pm
+++ b/PVE/Storage/SheepdogPlugin.pm
@@ -353,4 +353,14 @@ sub volume_protect {
 return undef;
 }
 
+sub volume_clone {
+my ($class, $scfg, $storeid, $volname, $snap, $vmid) = @_;
+
+    my $volnamedst = $class->find_free_volname($storeid, $scfg, $vmid);
+
+    my $cmd = $collie_cmd($scfg, 'vdi', 'clone', '-s', $snap, $volname, $volnamedst);
+    run_command($cmd, errmsg => "sheepdog clone $volname' error");
+
+return $volnamedst;
+}
 1;
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 15/31] rbd: add volume_clone

2013-01-08 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/RBDPlugin.pm |   12 
 1 file changed, 12 insertions(+)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 003585a..f247376 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -402,6 +402,18 @@ sub volume_protect {
 }
 }
 
+sub volume_clone {
+my ($class, $scfg, $storeid, $volname, $snap, $vmid) = @_;
+
+    my $volnamedst = $class->find_free_volname($storeid, $scfg, $vmid);
+
+    my $cmd = $rbd_cmd($scfg, $storeid, 'clone', $volname, '--snap', $snap, $volnamedst);
+    run_command($cmd, errmsg => "rbd clone $volname' error", errfunc => sub {});
+
+return $volnamedst;
+}
+
+
 
 
 1;
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 17/31] lvm: add volume_clone

2013-01-08 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/LVMPlugin.pm |7 +++
 1 file changed, 7 insertions(+)

diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index 21d3ec9..5a0962c 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -456,4 +456,11 @@ sub volume_protect {
 
 }
 
+sub volume_clone {
+my ($class, $scfg, $storeid, $volname, $snap, $vmid) = @_;
+
    die "lvm cloning is not implemented";
+
+}
+
 1;
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 18/31] iscsidirect : add volume_clone

2013-01-08 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/Storage/ISCSIDirectPlugin.pm |5 +
 1 file changed, 5 insertions(+)

diff --git a/PVE/Storage/ISCSIDirectPlugin.pm b/PVE/Storage/ISCSIDirectPlugin.pm
index 95faa8a..41b8786 100644
--- a/PVE/Storage/ISCSIDirectPlugin.pm
+++ b/PVE/Storage/ISCSIDirectPlugin.pm
@@ -219,4 +219,9 @@ sub volume_protect {
 die you can't protect an iscsi device;
 }
 
+sub volume_clone {
+my ($class, $scfg, $storeid, $volname, $snap, $vmid) = @_;
+die volume cloning is not possible on iscsi device;
+}
+
 1;
-- 
1.7.10.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

