[pve-devel] Bug in the PVE GUI report of disk space

2017-02-04 Thread Cesar Peschiera

Hi to all.

I found a bug in the PVE GUI's report of disk space. Please go to this site
(and, if possible, answer me there):

https://forum.proxmox.com/threads/bug-in-pve-gui-report-of-space-and-a-more-question.32661/

BR
Cesar 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Support for external fencing devices and manual fence

2016-11-20 Thread Cesar Peschiera

Hi developer team.

According to the roadmap for Proxmox VE 4.4 and the announcement of "support
for external fencing devices"...

I would like to ask you whether manual fencing will be supported, because on
this web page...
https://linux.die.net/man/8/fence_ack_manual

... I read that this option is useful and really necessary in certain cases.

At least for me, always having this option available will give me more
peace of mind, because it is better to have it and not need it than to need
it and not have it.

Best regards
Cesar Peschiera

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] training question about HA && failback

2016-06-08 Thread Cesar Peschiera

Hi Alexandre

Restricted = the VM can only run on the permitted nodes (useful when some
nodes don't have access to the shared storage).
Nofailback = when a PVE node comes back to life, the VMs that were previously
running on it do not move back to it.

And answering your question: to get the best strategy, I see that two
additional directives are necessary:

"Priority" = establishes the priority of the nodes for running a specific VM.
"Ordered" = order of preference for running a VM.

For example (the original 7-line snippet was stripped by the list archive; a
reconstructed sketch is shown after the next paragraph):

In this way, and without setting the "nofailback" directive, the VM will
return to "node-pve-1" or "node-pve-2" when they come back to life, and the
preference is determined by the "priority" directive.
Tested on PVE 3.x with fencing.
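
A reconstruction of roughly what such a fragment looked like in the rgmanager
cluster.conf that PVE 3.x HA used (the domain name and the priority values are
assumptions, not the original ones):

  <failoverdomains>
    <failoverdomain name="prefer-pve1" ordered="1" restricted="0" nofailback="0">
      <failoverdomainnode name="node-pve-1" priority="1"/>
      <failoverdomainnode name="node-pve-2" priority="2"/>
    </failoverdomain>
  </failoverdomains>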

I hope I have been helpful.

Best regards
Cesar

- Original Message - 
From: "Alexandre DERUMIER" 

To: "pve-devel" 
Sent: Wednesday, June 08, 2016 11:06 AM
Subject: [pve-devel] training question about HA && failback



Hi,

my students of this week's training have a question about HA and failback:


We have 3 hosts:

kvm1
kvm2
kvm3


an HA group "kvm23" with kvm2 && kvm3


We have a VM with HA enabled in group kvm23, with "nofailback" not enabled,
"restricted" not enabled, and the VM is running on kvm3.


kvm3 crash.


the vm is restarted by HA to kvm2


kvm3 is back online.


The VM does not fail back to kvm3?

The documentation says that it should fail back to the preferred node. But
what is the preferred node? Any member of the group?

Students expected the VM to fall back to the node it was on before the crash.
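
For reference, in the PVE 4 HA manager the group is defined in
/etc/pve/ha/groups.cfg, and each node entry can carry a priority, which (as far
as I understand) is what makes a node "preferred". A minimal sketch, where the
priority values are only an illustration:

  group: kvm23
      nodes kvm2:2,kvm3:1
      restricted 0
      nofailback 0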


Regards,

Alexandre
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Change the form in that the DRBD resource are created in PVE GUI

2016-04-16 Thread Cesar Peschiera

Oh, then I want to apologize for my misunderstanding on this topic.

On the other hand, I am asking some questions in the PVE forum so that I can
later do some intensive testing, so if you or anybody can help me dispel those
doubts, I will be very grateful.


This is the link to my questions in the PVE forum:
https://forum.proxmox.com/threads/kernel-panic-with-proxmox-4-1-13-drbd-9-0-0.26194/page-2#post-135506

Best regards
Cesar


- Original Message - 
From: "Dietmar Maurer" <diet...@proxmox.com>
To: "Cesar Peschiera" <br...@click.com.py>; "PVE development discussion" 
<pve-devel@pve.proxmox.com>

Sent: Saturday, April 16, 2016 12:12 PM
Subject: Re: [pve-devel] Change the form in that the DRBD resource are 
created in PVE GUI







On April 16, 2016 at 1:03 AM Cesar Peschiera <br...@click.com.py> wrote:


Oh, I'm sorry.

I will try to say it more simply...

In a few words, I would like each created DRBD resource to have the same
size as the virtual disk, and not the size of a big hard disk or a big
partition.


Sorry, I don't get it. Each created resource has the size you specify for
the virtual disk - that is how it already works.



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Change the form in that the DRBD resource are created in PVE GUI

2016-04-15 Thread Cesar Peschiera

Oh, I'm sorry.

I will try to say it more simply...

In a few words, I would like each created DRBD resource to have the same
size as the virtual disk, and not the size of a big hard disk or a big
partition.

That way, when a verification of the DRBD resources is performed
(/sbin/drbdadm verify all), the wait for it to complete will be much shorter.

The importance of getting a short completion time is due to these reasons:
- If a DRBD resource has the size of a big hard disk, the verification of that
resource will take a long time to finish (perhaps more than a day).
- So, if we have other scheduled tasks in the system that also need time to
finish (specified in my previous email), and it is not desirable that they
overlap (for example, with the vzdump backup and other scheduled tasks),
then we need each DRBD resource to be as small as possible.
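
For reference, a minimal sketch of what small per-disk resources allow (the
resource name and the cron schedule are only placeholders): each resource can
then be verified in its own time slot instead of running "verify all" for many
hours, e.g.

  /sbin/drbdadm verify vm-100-disk-1
  # crontab entry: verify this one resource every Monday at 02:00
  0 2 * * 1 /sbin/drbdadm verify vm-100-disk-1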

I hope I explained clearly.

Best regards
Cesar


- Original Message - 
From: "Dietmar Maurer" <diet...@proxmox.com>

To: "Cesar Peschiera" <br...@click.com.py>; "PVE development discussion"
<pve-devel@pve.proxmox.com>
Sent: Friday, April 15, 2016 12:20 AM
Subject: Re: [pve-devel] Change the form in that the DRBD resource are
created in PVE GUI



Question:
Will it be possible to make such a change?


Honestly, I don't understand what you request. We use
drbdmanage to manage drbd9 volumes.



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Change the form in that the DRBD resource are created in PVE GUI

2016-04-14 Thread Cesar Peschiera

Hi PVE developer team

Please let me make a suggestion (this topic is a bit complicated):

I suggest that in PVE 4.x, when using the PVE GUI, LVM be placed on top of
DRBD (instead of DRBD on top of LVM); that way, each LV (logical volume) can
have the same size as the virtual hard disk.

Why is this concept a good idea?

In the current state, each DRBD resource has the full size of a partition or
a hard disk. Then, as the datacenter needs to run several scheduled tasks,
such as:

a) The backups of our VMs, which minimally should be done every day (until
now without incremental or differential options).
b) The verification of all DRBD resources (at least once a week if the
information is critical).
c) The hardware verification of disk blocks in the RAIDs created (at least
once a week if the information is critical; besides, RAID controllers such as
LSI, Adaptec, etc. have this option, and their manufacturers recommend using
it periodically).

And as all these tasks require a lot of time, I think that the PVE GUI for
the creation of DRBD resources should be modified, i.e. each DRBD resource
should have the same size as the virtual disk; that way, the verification of
all DRBD resources will take much less time to complete.

Disadvantage of this request:

If LVM is on top of DRBD, it will be impossible to resize a virtual disk
online, because the LV (logical volume, i.e. the virtual disk) must first be
unmounted and, obviously, the virtual machine must be turned off beforehand;
only then can the DRBD resource be resized.

Talking within this context and using DRBD 8.x, I have tested all these tasks
many times in production environments with PVE 3.x, always successfully.


And please also be aware that the verification of DRBD resources must be
done periodically (with short intervals between one verification and the
next), while the need to resize a virtual disk is only sporadic.

In summary:

If "drbdmanage" can't change the size of a DRBD resource online, I think the
PVE GUI could do it, but also offline: the VM would need to be turned off
beforehand and the LV (logical volume, i.e. the virtual disk) unmounted (maybe
on both PVE nodes) in order to resize the DRBD resource, and afterwards it
could resize the LV (and, accordingly, the size of the virtual disk).
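
For illustration only, the offline workflow described above might look roughly
like this with DRBD 8.x (the resource, VG and LV names are placeholders, and
the exact steps depend on how the backing devices are laid out):

  # VM powered off, LV unmounted/deactivated on both nodes
  drbdadm resize r-vm-100                    # grow the DRBD resource (backing device already enlarged)
  pvresize /dev/drbd0                        # let LVM see the new size of the PV that sits on DRBD
  lvextend -L +10G /dev/vg0/vm-100-disk-1    # grow the LV, i.e. the virtual disk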

Question:
Will it be possible to make such a change?

Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] PVE 4 and CPU errors

2015-10-02 Thread Cesar Peschiera

Hi Alexandre

For a long time I have wanted to have a list of processor names (such as
Nehalem, Sandy Bridge, Haswell, etc.) and their respective models (X7560,
etc.).

Is there a web link with this information? Or can you or anyone add this
information to the PVE wiki?

I think that such information would be very useful for choosing the best
processor type when we create a VM and want to enable live migration across
old and different processors without dying in the attempt.

Best regards
Cesar

- Original Message - 
From: "Alexandre DERUMIER" 

To: "pve-devel" 
Sent: Thursday, October 01, 2015 9:46 PM
Subject: Re: [pve-devel] PVE 4 and CPU errors



64 x Intel(R) Xeon(R) CPU X7560 @ 2.27GHz


It's a Nehalem processor.

You can't emulate a SandyBridge or Haswell with this.


Note that with Proxmox 3.4 it was possible, but bad things can happen
(crashes, slowdowns, ...).
In Proxmox 4, we have added the qemu enforce option, which checks whether the
vCPU model is compatible with the physical CPU.


- Mail original -
De: "Gilberto Nunes" 
À: "pve-devel" 
Envoyé: Jeudi 1 Octobre 2015 20:48:24
Objet: Re: [pve-devel] PVE 4 and CPU errors

64 x Intel(R) Xeon(R) CPU X7560 @ 2.27GHz


2015-10-01 15:29 GMT-03:00 Alexandre DERUMIER < aderum...@odiso.com > :


What is your physical cpu model ?


- Mail original - 
De: "Gilberto Nunes" < gilberto.nune...@gmail.com >

À: "pve-devel" < pve-devel@pve.proxmox.com >
Envoyé: Jeudi 1 Octobre 2015 19:34:08
Objet: [pve-devel] PVE 4 and CPU errors

Hi guys

Sometimes, when I create a VM, depending on which CPU I choose (e.g.
SandyBridge or Haswell), I get the error below:

Running as unit 112.scope.
warning: host doesn't support requested feature:
CPUID.01H:ECX.pclmulqdq|pclmuldq [bit 1]
warning: host doesn't support requested feature: CPUID.01H:ECX.aes [bit
25]
warning: host doesn't support requested feature: CPUID.01H:ECX.xsave [bit
26]
warning: host doesn't support requested feature: CPUID.01H:ECX.avx [bit
28]
warning: host doesn't support requested feature: CPUID.0DH:EAX.xsaveopt
[bit 0]
kvm: Host doesn't support requested features
TASK ERROR: start failed: command '/usr/bin/systemd-run --scope --slice
qemu --unit 112 -p 'CPUShares=1000' /usr/bin/kvm -id 112 -chardev
'socket,id=qmp,path=/var/run/qemu-server/112.qmp,server,nowait' -mon
'chardev=qmp,mode=control' -vnc
unix:/var/run/qemu-server/112.vnc,x509,password -pidfile
/var/run/qemu-server/112.pid -daemonize -smbios
'type=1,uuid=07fd45ca-f300-4338-93a9-0c89b4750fab' -name
Win7-Pro-32bits -smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot
'menu=on,strict=on,reboot-timeout=1000' -vga qxl -no-hpet -cpu
'SandyBridge,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_relaxed,+kvm_pv_unhalt,+kvm_pv_eoi,-kvm_steal_time,enforce'
 -m 2048 -k pt-br -device
'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device
'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device
'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -spice
'tls-port=61002,addr=localhost,tls-ciphers=DES-CBC3-SHA,seamless-migration=on'
 -device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' -chardev
'spicevmc,id=vdagent,name=vdagent' -device
'virtserialport,chardev=vdagent,name=com.redhat.spice.0' -device
'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi
'initiator-name=iqn.1993-08.org.debian:01:ab655a2a85b4' -drive
'file=/mnt/pve/STG-NFS/images/112/vm-112-disk-1.raw,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on'
 -device
'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100'
 -drive
'file=/var/lib/vz/template/iso/Windows7-All.iso,if=none,id=drive-ide2,media=cdrom,aio=threads'
 -device
'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -netdev
'type=tap,id=net0,ifname=tap112i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on'
 -device
'virtio-net-pci,mac=02:0B:AE:90:F0:2C,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'
 -rtc 'driftfix=slew,base=localtime' -global
'kvm-pit.lost_tick_policy=discard'' failed: exit code 1


What does that mean?

Thanks

--

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel







--

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel




Re: [pve-devel] PVE 4 and CPU errors

2015-10-02 Thread Cesar Peschiera

So I only need to test it before putting it into a production environment.
Oh, that's good, it is wonderful!!! Congratulations on the great work :-)

And please, let me ask two more questions:
1) If one server has Intel and another has AMD, and I want to do live
migration, what will be the best-practice PVE configuration?
2) Will this new patch be available in PVE 3.x?

- Original Message - 
From: "Alexandre DERUMIER" <aderum...@odiso.com>

To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Friday, October 02, 2015 5:03 AM
Subject: Re: [pve-devel] PVE 4 and CPU errors



I think that such information will be very util for choose the better
processor when we create a VM and we want enable live migration with old
and
differents processors and not die in the intent.


The good news is that now
it'll not die, because the target server will refuse to launch qemu if
some vCPU features are not supported :)


- Mail original -
De: "Cesar Peschiera" <br...@click.com.py>
À: "pve-devel" <pve-devel@pve.proxmox.com>
Envoyé: Vendredi 2 Octobre 2015 06:45:45
Objet: Re: [pve-devel] PVE 4 and CPU errors

Hi Alexandre

Since much time ago, i did want to have a list of processor names (as
Nehalem, Sandy bridge, Haswell etc.) and his respective models (X7560,
etc.).

Is there a web link internet with this information?, or
Can you or anyone add this information in the wiki of PVE?

I think that such information will be very util for choose the better
processor when we create a VM and we want enable live migration with old
and
differents processors and not die in the intent.

Best regards
Cesar

- Original Message - 
From: "Alexandre DERUMIER" <aderum...@odiso.com>

To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Thursday, October 01, 2015 9:46 PM
Subject: Re: [pve-devel] PVE 4 and CPU errors



64 x Intel(R) Xeon(R) CPU X7560 @ 2.27GHz


It's a Nehalem processor.

You can't emulate a sandybridge or haswell with this.


Note that with proxmox 3.4 it was possible, but bad thing can happen
(crashs,slowdown,...).
In proxmox 4, we have added the qemu enforce option, which check if the
vcpu model is compatible
with physical cpu.


- Mail original - 
De: "Gilberto Nunes" <gilberto.nune...@gmail.com>

À: "pve-devel" <pve-devel@pve.proxmox.com>
Envoyé: Jeudi 1 Octobre 2015 20:48:24
Objet: Re: [pve-devel] PVE 4 and CPU errors

64 x Intel(R) Xeon(R) CPU X7560 @ 2.27GHz


2015-10-01 15:29 GMT-03:00 Alexandre DERUMIER < aderum...@odiso.com > :


What is your physical cpu model ?


- Mail original - 
De: "Gilberto Nunes" < gilberto.nune...@gmail.com >

À: "pve-devel" < pve-devel@pve.proxmox.com >
Envoyé: Jeudi 1 Octobre 2015 19:34:08
Objet: [pve-devel] PVE 4 and CPU errors

Hi guys

Sometimes, when I create a VM, and depend on what CPU I choose, ( p. ex.
SandyBirgdge os HosWell), I get the error below:

Running as unit 112.scope.
warning: host doesn't support requested feature:
CPUID.01H:ECX.pclmulqdq|pclmuldq [bit 1]
warning: host doesn't support requested feature: CPUID.01H:ECX.aes [bit
25]
warning: host doesn't support requested feature: CPUID.01H:ECX.xsave [bit
26]
warning: host doesn't support requested feature: CPUID.01H:ECX.avx [bit
28]
warning: host doesn't support requested feature: CPUID.0DH:EAX.xsaveopt
[bit 0]
kvm: Host doesn't support requested features
TASK ERROR: start failed: command '/usr/bin/systemd-run --scope --slice
qemu --unit 112 -p 'CPUShares=1000' /usr/bin/kvm -id 112 -chardev
'socket,id=qmp,path=/var/run/qemu-server/112.qmp,server,nowait' -mon
'chardev=qmp,mode=control' -vnc
unix:/var/run/qemu-server/112.vnc,x509,password -pidfile
/var/run/qemu-server/112.pid -daemonize -smbios
'type=1,uuid=07fd45ca-f300-4338-93a9-0c89b4750fab' -name
Win7-Pro-32bits -smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot
'menu=on,strict=on,reboot-timeout=1000' -vga qxl -no-hpet -cpu
'SandyBridge,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_relaxed,+kvm_pv_unhalt,+kvm_pv_eoi,-kvm_steal_time,enforce'
-m 2048 -k pt-br -device
'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device
'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device
'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -spice
'tls-port=61002,addr=localhost,tls-ciphers=DES-CBC3-SHA,seamless-migration=on'
-device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' -chardev
'spicevmc,id=vdagent,name=vdagent' -device
'virtserialport,chardev=vdagent,name=com.redhat.spice.0' -device
'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi
'initiator-name=iqn.1993-08.org.debian:01:ab655a2a85b4' -drive
'file=/mnt/pve/STG-NFS/images/112/vm-112-disk-1.raw,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on'
-device
'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100'
-drive
'file=/var/lib/vz/template/iso/Win

Re: [pve-devel] PVE 4 and CPU errors

2015-10-02 Thread Cesar Peschiera

I have kernel 3.10 installed on PVE 3.x

Anyway, many thanks.

- Original Message - 
From: "Alexandre DERUMIER" <aderum...@odiso.com>

To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Friday, October 02, 2015 5:39 AM
Subject: Re: [pve-devel] PVE 4 and CPU errors



1) If one server has Intel and another has AMD, and I want to do live
migration, what will be the best-practice PVE configuration?


cpumodel = kvm64
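
A hedged example of what that means in practice, using VM id 112 from the
earlier thread only as a placeholder:

  qm set 112 --cpu kvm64

i.e. the VM's CPU type is set to the generic kvm64 model, which only exposes a
baseline feature set available on both vendors.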

2) Will this new patch be available in PVE 3.x?

I don't think so (this needs kernel 3.10, and it's not the default on PVE
3.x).



- Mail original -
De: "Cesar Peschiera" <br...@click.com.py>
À: "pve-devel" <pve-devel@pve.proxmox.com>
Envoyé: Vendredi 2 Octobre 2015 11:34:38
Objet: Re: [pve-devel] PVE 4 and CPU errors

So i only need testing it before putting it in a production enviroment.
Oh thats good, it is wonderful !!! congratulations for the great work
 :-)

And please, let me to do two questions more:
1) If a server has Intel, and other has AMD, and i want to do live
migration, what will be the best practice of PVE configuration?
2) This new patch, will be it available in PVE 3.x?

- Original Message - 
From: "Alexandre DERUMIER" <aderum...@odiso.com>

To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Friday, October 02, 2015 5:03 AM
Subject: Re: [pve-devel] PVE 4 and CPU errors



I think that such information will be very util for choose the better
processor when we create a VM and we want enable live migration with old
and
differents processors and not die in the intent.


The good news is the now,
it'll not die, because the target server will refuse to launch qemu if
some
vcpus features are not supported :)


- Mail original - 
De: "Cesar Peschiera" <br...@click.com.py>

À: "pve-devel" <pve-devel@pve.proxmox.com>
Envoyé: Vendredi 2 Octobre 2015 06:45:45
Objet: Re: [pve-devel] PVE 4 and CPU errors

Hi Alexandre

Since much time ago, i did want to have a list of processor names (as
Nehalem, Sandy bridge, Haswell etc.) and his respective models (X7560,
etc.).

Is there a web link internet with this information?, or
Can you or anyone add this information in the wiki of PVE?

I think that such information will be very util for choose the better
processor when we create a VM and we want enable live migration with old
and
differents processors and not die in the intent.

Best regards
Cesar

- Original Message - 
From: "Alexandre DERUMIER" <aderum...@odiso.com>

To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Thursday, October 01, 2015 9:46 PM
Subject: Re: [pve-devel] PVE 4 and CPU errors



64 x Intel(R) Xeon(R) CPU X7560 @ 2.27GHz


It's a Nehalem processor.

You can't emulate a sandybridge or haswell with this.


Note that with proxmox 3.4 it was possible, but bad thing can happen
(crashs,slowdown,...).
In proxmox 4, we have added the qemu enforce option, which check if the
vcpu model is compatible
with physical cpu.


- Mail original - 
De: "Gilberto Nunes" <gilberto.nune...@gmail.com>

À: "pve-devel" <pve-devel@pve.proxmox.com>
Envoyé: Jeudi 1 Octobre 2015 20:48:24
Objet: Re: [pve-devel] PVE 4 and CPU errors

64 x Intel(R) Xeon(R) CPU X7560 @ 2.27GHz


2015-10-01 15:29 GMT-03:00 Alexandre DERUMIER < aderum...@odiso.com > :


What is your physical cpu model ?


- Mail original - 
De: "Gilberto Nunes" < gilberto.nune...@gmail.com >

À: "pve-devel" < pve-devel@pve.proxmox.com >
Envoyé: Jeudi 1 Octobre 2015 19:34:08
Objet: [pve-devel] PVE 4 and CPU errors

Hi guys

Sometimes, when I create a VM, and depend on what CPU I choose, ( p. ex.
SandyBirgdge os HosWell), I get the error below:

Running as unit 112.scope.
warning: host doesn't support requested feature:
CPUID.01H:ECX.pclmulqdq|pclmuldq [bit 1]
warning: host doesn't support requested feature: CPUID.01H:ECX.aes [bit
25]
warning: host doesn't support requested feature: CPUID.01H:ECX.xsave
[bit
26]
warning: host doesn't support requested feature: CPUID.01H:ECX.avx [bit
28]
warning: host doesn't support requested feature: CPUID.0DH:EAX.xsaveopt
[bit 0]
kvm: Host doesn't support requested features
TASK ERROR: start failed: command '/usr/bin/systemd-run --scope --slice
qemu --unit 112 -p 'CPUShares=1000' /usr/bin/kvm -id 112 -chardev
'socket,id=qmp,path=/var/run/qemu-server/112.qmp,server,nowait' -mon
'chardev=qmp,mode=control' -vnc
unix:/var/run/qemu-server/112.vnc,x509,password -pidfile
/var/run/qemu-server/112.pid -daemonize -smbios
'type=1,uuid=07fd45ca-f300-4338-93a9-0c89b4750fab' -name
Win7-Pro-32bits -smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot
'menu=on,strict=on,reboot-timeout=1000' -vga qxl -no-hpet -cpu
'SandyBridge,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_relaxed,+kvm_pv_unhalt,+kvm_pv_eoi,-kvm_steal_time,enforce'
-m 2048 -k

Re: [pve-devel] PVE 4 and CPU errors

2015-10-02 Thread Cesar Peschiera

Martin, many thanks for the info!!! ... :-)
(it will be very useful)

Another good way is here:
http://ark.intel.com/#@ProductsByCodeName

But in all these links and sublinks I don't find the complete list of CPU
flags ... :-(
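
A side note, in case it helps: the flags the host actually exposes, and the
CPU models a given QEMU build knows, can also be listed locally. A minimal
sketch, assuming a PVE node where /usr/bin/kvm is the QEMU binary:

  grep -m1 flags /proc/cpuinfo   # flags of the physical CPU as seen by the kernel
  kvm -cpu help                  # CPU model names known to this QEMU build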


Best regards
Cesar


- Original Message - 
From: "Martin Maurer" <mar...@proxmox.com>

To: <pve-devel@pve.proxmox.com>
Sent: Friday, October 02, 2015 3:59 AM
Subject: Re: [pve-devel] PVE 4 and CPU errors



Hi,

Best place for Intel CPU information is the Intel webpage:
http://ark.intel.com

but also wikipedia lists a lot of info, e.g:

https://en.wikipedia.org/wiki/Xeon

best,
Martin

On 02.10.2015 06:45, Cesar Peschiera wrote:

Hi Alexandre

Since much time ago, i did want to have a list of processor names (as
Nehalem, Sandy bridge, Haswell etc.) and his respective models (X7560,
etc.).

Is there a web link internet with this information?, or






___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v2 pve-manager] Allow email adresses with a toplevel domain of up to 63 characters

2015-10-01 Thread Cesar Peschiera

Many thanks for your reply, Emmanuel.

But I think that such options should be well known by all. Who has not had a
problem with SPAM? Or who has not tried to strengthen security on his mail
server to avoid sending spam? And it is because of the dangers of SPAM that
I don't want to configure my email server with relays when the mail server is
outside the LAN.


Moreover, if the PVE developers don't think that such options are good: if
anybody can tell me how to configure my PVE host manually so that it can send
emails with auth and without a relay (and, if possible, with SSL/TLS and the
port number to use), or give me a web link so that I can practice, I will be
very thankful!!!


Anyway, many thanks for your recommendations.

Best regards
Cesar

- Original Message - 
From: "Emmanuel Kasper" <e.kas...@proxmox.com>

To: "Cesar Peschiera" <br...@click.com.py>; <pve-devel@pve.proxmox.com>
Sent: Wednesday, September 30, 2015 4:17 AM
Subject: Re: [pve-devel] [PATCH v2 pve-manager] Allow email adresses with a 
toplevel domain of up to 63 characters




Hi

Please, let me to do a question:

Several mail servers require auth for accept a message (for after send
it to
addressee), so my question is if is possible add this option in PVE GUI.
(and if is possible, also choose a port number, and a SSL/TLS connection)

Notes:
1)All programs that i know has these options for choose, and i think that
will be fantastic have these options enabled in PVE.

2) When i administer the mail server, and the server is in the LAN, i
configure a relay in the mail server for each PVE node, but such setup
isn't
the ideal, and when the mail Server is out of the LAN (ie, in the WAN),
such
setup is not recommended (for avoid the problems of SPAM that may have 
the

computers into the LAN), so i can not configure the send of mails in the
PVE
nodes that are into the LAN... :-(

Best regards
Cesar



Hi Cesar!

You ask a good question, but unfortunately we cannot add a checkbox in
the web GUI for every single option that an SMTP server or QEMU knows.

If we do this, we satisfy the power users at the expense of getting 80%
of the users confused by having even more options for simple tasks.

It is much better to propose sane defaults in the UI which cover most of
the use cases, and leave the extra bits to manual editing of the
configuration files.
postfix is a dependency of pve-manager, so all Proxmox hosts have it
installed. So instead of fiddling with SSL expiration dates in
pve-manager, it is much better to delegate the handling of this to
postfix.


Now going back to the situation of having the SMTP relayhost outside of
the LAN, I see at least two ways of setting that up:

* you can configure the relayhost to only relay mail coming from your
LAN's external IP address

* as an extra step you can also configure postfix on the Proxmox host to
send email via SSL and authentication to your relayhost; there is a
good howto on that here, using Google's SMTP servers as the relayhost, but
it should work with any mail server:
http://mhawthorne.net/posts/postfix-configuring-gmail-as-relay.html
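
For reference, the relevant settings on the PVE node boil down to a few lines
in /etc/postfix/main.cf along these lines (hostname, port and the credentials
file are placeholders; see the howto above for the complete steps, including
creating the sasl_passwd map with postmap and reloading postfix):

  relayhost = [smtp.example.com]:587
  smtp_use_tls = yes
  smtp_sasl_auth_enable = yes
  smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
  smtp_sasl_security_options = noanonymous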

Best Regards
Emmanuel










___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Google patch - fix a critical fail in Linux kernel for TCP protocol

2015-09-29 Thread Cesar Peschiera

@Alexandre DERUMIER:

I'm not sure, because each VM has control of its own TCP congestion.

(if it's bridged, the host shouldn't do anything)


Ohh, OK, then it isn't a big problem (I use KVM VMs).
Many thanks for the clarification.

And please let me ask a question:
Is it the same for containers (LXC or OpenVZ)?

@Michael Rasmussen:

From what I have read it seems that this is mostly an issue if one side
of the connection has much more bandwidth than the other side. All being
equal, in a Proxmox cluster all connections should more or less have the
same capabilities.


Many thanks for the prompt reply,
and please let me ask a question:

I have a PVE cluster; only two of its servers have 10 Gb/s NICs for:
- LAN communication
- PVE Cluster
- LAN for VMs
(other NICs are used for DRBD and backups)

They are also configured in LACP mode (2 ports in each server).

The rest of the servers have 1 Gb/s NICs (LACP mode, 2 ports in each server)
for:
- LAN communication
- PVE Cluster
- LAN for VMs
(other NICs are used for DRBD and backups)

The question:
Do I have this problem?


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Google patch - fix a critical fail in Linux kernel for TCP protocol

2015-09-28 Thread Cesar Peschiera

This is not a critical bug; it merely affects network performance.


Ok, but I guess that if we have several VMs on a server, the problem is
multiplied.



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Google patch - fix a critical fail in Linux kernel for TCP protocol

2015-09-28 Thread Cesar Peschiera

The error affects almost all Linux distros.

See the notice at this link:
http://bitsup.blogspot.com/2015/09/thanks-google-tcp-team-for-open-source.html

See the Google patch here:
https://github.com/torvalds/linux/commit/30927520dbae297182990bb21d08762bcc35ce1d

My question:
What will be PVE's policy regarding this?

Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding live migration options ? (xbzlre, compression, ...)

2015-09-27 Thread Cesar Peschiera
- Original Message - 
From: "Dietmar Maurer" <diet...@proxmox.com>
To: "Alexandre DERUMIER" <aderum...@odiso.com>; "Cesar Peschiera" 
<br...@click.com.py>

Cc: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Sunday, September 27, 2015 3:07 AM
Subject: Re: [pve-devel] adding live migration options ? (xbzlre, 
compression, ...)




A question:
I guess that it will be compatible with:

/etc/pve/datacenter.cfg :
migration_unsecure: 1

Right?


Yes



Ok, many thanks Dietmar. 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Qemu-img thin provision

2015-09-26 Thread Cesar Peschiera
I agree with Mir... I have the same problem; sometimes I forget that I
should click the "reply all" option so that the email reaches
"pve-devel@pve.proxmox.com"


---
On Sat, 26 Sep 2015 14:56:03 +0200
Michael Rasmussen  wrote:


This is how your list server is configured. Your list server also adds
the header: Reply-To: original sender 

Which means that a great many MUAs will mail the original sender only
when the user hits 'Reply'.


The way to solve it once and for all in a way that works with all MUA's
is this:

Old-Reply-To: original sender 
Reply-To: pve-devel@pve.proxmox.com 
Precedence: list
List-Post: 

--
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917
--
/usr/games/fortune -es says:
Class, that's the only thing that counts in life.  Class.
Without class and style, a man's a bum; he might as well be dead.
 -- "Bugsy" Siegel

- Original Message - 
From: "Michael Rasmussen" 

To: 
Sent: Saturday, September 26, 2015 9:12 AM
Subject: Re: [pve-devel] Qemu-img thin provision






___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding live migration options ? (xbzlre, compression, ...)

2015-09-26 Thread Cesar Peschiera

A question:
I guess that it will be compatible with:

/etc/pve/datacenter.cfg :
migration_unsecure: 1

Right?

- Original Message - 
From: "Alexandre DERUMIER" 

To: "dietmar" 
Cc: "pve-devel" 
Sent: Friday, September 25, 2015 7:44 AM
Subject: Re: [pve-devel] adding live migration options ? (xbzlre, 
compression, ...)




I have tested xbzrle & compression.

xbzrle seems to be pretty fine, I don't see bugs with it (tested with a
video player running in the guest).

Also, there is no overhead on a 10GbE network.

without xbzlre:
Sep 25 12:52:18 migration speed: 1092.27 MB/s - downtime 66 ms

with xbzlre:
Sep 25 13:30:17 migration speed: 1129.93 MB/s - downtime 162 ms



Now for compression (default options), I have a big, big overhead:

Sep 25 13:36:39 migration speed: 352.34 MB/s - downtime 450 ms
and the source kvm process jumps to 800% (on a Xeon E5 3.1GHz).


I think we can enable xbzrle by default, but for compression,
I'm really not sure it's a good idea (depends on CPU/network bandwidth).
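
For anyone who wants to try this by hand before it is wired into PVE: the
capabilities can be toggled on the source VM through the QEMU monitor (e.g.
via "qm monitor <vmid>"). The capability and command names below are as of
QEMU 2.4, and the cache size is only an example:

  migrate_set_capability xbzrle on
  migrate_set_cache_size 256M
  migrate_set_capability compress on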



- Mail original -
De: "dietmar" 
À: "aderumier" 
Cc: "pve-devel" 
Envoyé: Jeudi 24 Septembre 2015 11:45:08
Objet: Re: [pve-devel] adding live migration options ? (xbzlre, 
compression, ...)



>>I think those are special cases, and it is always better to enable
>>compression,
>>because
>>it reduces traffic on the network.

the kvm forum slides say that with compression, the migration is slower on a
10gbit network.


Don't get me wrong, but this depends on many, many factors. I am sure this is
true if you run a single migration. But the picture is totally different if
you run on a real network which is used by many other VMs, or if you run more
than one migration at the same time.


>>But do we already use lz4?
Currently we haven't implemented live migration compression (it's a new
feature of qemu 2.4).


I'll make a basic patch to enable compression, and will do some benchmarks
with a 10GbE network.


Great.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] DRBD backward compatibility

2015-09-26 Thread Cesar Peschiera

Hi Dietmar

Please let me ask three questions about the future of DRBD in PVE 4:

1- I have DRBD 8.4.5 installed on my PVE nodes, and they are connected
NIC-to-NIC (also with balance-rr enabled on pairs of NICs, jumbo frames,
etc.).
So my question is whether I can keep this same setup and work with the latest
version of DRBD in PVE?

Notes:
A) I don't use a common network for the whole storage layer (that would
be ideal, but it would also be too costly), so I need to have several
independent DRBD groups on each pair of servers.
B) With this setup I don't need to purchase 10 Gb/s NICs or 10 Gb/s switches
for the storage layer, and I obtain very good performance.

2- As I have different setups for different pairs of servers, and the DRBD
tuning differs according to the hardware used, I guess that manual
configuration will always remain available, right?

3- As I always run several performance tests at the beginning, to find out
which configuration is best, I guess that the DRBD service can be started and
stopped independently of other services, right?
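
For context, the NIC-to-NIC link mentioned in point 1 is plain Debian bonding;
a minimal sketch of such a stanza in /etc/network/interfaces (interface names,
address and MTU are placeholders, not my exact values):

  auto bond1
  iface bond1 inet static
      address 10.10.10.1
      netmask 255.255.255.0
      slaves eth2 eth3
      bond_mode balance-rr
      bond_miimon 100
      mtu 9000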


(This message is a copy of DRBD mailing list)
- Original Message - 
From: "Dietmar Maurer" 

To: "Roland Kammerer" ;

Sent: Saturday, September 12, 2015 8:52 AM
Subject: Re: [DRBD-user] drbdmange: howto configure default plugin in v0.49






On September 12, 2015 at 2:34 PM Roland Kammerer

wrote:


On Sat, Sep 12, 2015 at 02:23:32PM +0200, Dietmar Maurer wrote:
> > There is no best place other than storing this kind of information
> > cluster wide. Again, for you as developer, and only if the default
> > configuration does not fit you needs (e.g., non-default storage
> > plugin),
> > it is a _single_ API call.
>
> I don't understand this argumentation. I just want to change the
> default
> storage plugin, so that our users do not need to change that at all.
>
> So how can I do that?

server.py:
CONF_DEFAULTS dictionary


Thanks, that works. Many thanks for your fast help!

- Dietmar

___
drbd-user mailing list
drbd-u...@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v2 pve-manager] Allow email adresses with a toplevel domain of up to 63 characters

2015-09-26 Thread Cesar Peschiera
- Original Message - 
From: "Emmanuel Kasper" 

To: 
Sent: Friday, September 18, 2015 6:40 AM
Subject: [pve-devel] [PATCH v2 pve-manager] Allow email adresses with a
toplevel domain of up to 63 characters



This patch allows email adresses of the form john.public@company.hamburg
This fixes the bug: https://bugzilla.proxmox.com/show_bug.cgi?id=716

Note that this patch only deals with the client side validation; a
separate patch deals with the server side validation
(http://pve.proxmox.com/pipermail/pve-devel/2015-September/017246.html)


Hi

Please, let me ask a question:

Several mail servers require auth to accept a message (before sending it on
to the addressee), so my question is whether it is possible to add this
option to the PVE GUI (and, if possible, also choose a port number and an
SSL/TLS connection).

Notes:
1) All programs that I know of have these options to choose from, and I think
it would be fantastic to have these options available in PVE.

2) When I administer the mail server and the server is in the LAN, I
configure a relay in the mail server for each PVE node, but such a setup
isn't ideal, and when the mail server is outside the LAN (i.e. in the WAN),
such a setup is not recommended (to avoid the SPAM problems that computers
inside the LAN may have), so I cannot configure sending mail on the PVE nodes
that are inside the LAN... :-(

Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 2/2] remove migration lock from config.

2015-09-03 Thread Cesar Peschiera

I guess keeping the VM locked will also be necessary when the disk
image is moved to another storage, as well as for VM clone creation, and
maybe also for other functions that I can't think of right now.

But I don't know whether the locking is implemented for these functions
(always thinking of the possibility of a scheduled backup task being
started halfway through these other manual tasks).

- Original Message - 
From: "Dietmar Maurer" <diet...@proxmox.com>

To: "Wolfgang Link" <w.l...@proxmox.com>; "Cesar Peschiera"
<br...@click.com.py>; <pve-devel@pve.proxmox.com>
Sent: Thursday, September 03, 2015 1:51 AM
Subject: Re: [pve-devel] [PATCH 2/2] remove migration lock from config.



And what will happen if I do a live migration and, halfway through, a
scheduled backup task is started?


Yes, I guess we need to keep the lock (and find another solution for the
problem).



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 2/2] remove migration lock from config.

2015-09-03 Thread Cesar Peschiera

Another thought:

1- I have a VM with HA enabled.
2- A scheduled backup task is in progress (with the respective lock).
3- In the middle of the backup process, the server that has this VM running
breaks down.


The question:
Will HA work correctly for this VM?

- Original Message - 
From: "Cesar Peschiera" <br...@click.com.py>
To: "Dietmar Maurer" <diet...@proxmox.com>; "Wolfgang Link" 
<w.l...@proxmox.com>; "pve-devel" <pve-devel@pve.proxmox.com>

Sent: Thursday, September 03, 2015 2:38 AM
Subject: Re: [pve-devel] [PATCH 2/2] remove migration lock from config.


I guess maintain the blockade of the VM also will be necessary when the 
disk
image is moved to other storage, also with the VM clone creation, and 
maybe

also for other functions that now i can't deduce.

But i don't know if the locking are developed for these functions .
(always thinking in the possibility of a task of backup scheduled to be
executed to the halfway through these other manual tasks).

- Original Message - 
From: "Dietmar Maurer" <diet...@proxmox.com>

To: "Wolfgang Link" <w.l...@proxmox.com>; "Cesar Peschiera"
<br...@click.com.py>; <pve-devel@pve.proxmox.com>
Sent: Thursday, September 03, 2015 1:51 AM
Subject: Re: [pve-devel] [PATCH 2/2] remove migration lock from config.



And what will happen if i do a live migration, and halfway through, a
task
scheduled of backup is started?


Yes, I guess we need to keep the lock (and find another solution for the
problem).





___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 2/2] remove migration lock from config.

2015-09-02 Thread Cesar Peschiera

Hi Wolfgang

And what will happen if I do a live migration and, halfway through, a
scheduled backup task is started?


Same question for offline migration.

- Original Message - 
From: "Wolfgang Link" 

To: 
Sent: Wednesday, September 02, 2015 8:22 AM
Subject: [pve-devel] [PATCH 2/2] remove migration lock from config.



It is not really necessary to use the lock at migration.
And it causes problems to remove the lock, because the config is moved
and the cluster_vm list is not updated at this moment.
---
src/PVE/LXC/Migrate.pm | 23 +++
1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index bf6d701..04d49bc 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -74,8 +74,13 @@ sub phase1 {
$self->log('info', "starting migration of CT $self->{vmid} to node 
'$self->{node}' ($self->{nodeip})");


my $conf = $self->{vmconf};
-$conf->{lock} = 'migrate';
-PVE::LXC::write_config($vmid, $conf);
+
+#It is not really necessary to use the lock
+#And it makes problem to remove the lock, because the config is moved
+#and the cluster_vm list is not updated at this moment.
+
+#$conf->{lock} = 'migrate';
+#PVE::LXC::write_config($vmid, $conf);

if ($self->{running}) {
 $self->log('info', "container is running - using online migration");
@@ -152,13 +157,15 @@ sub final_cleanup {

$self->log('info', "start final cleanup");

-my $conf = $self->{vmconf};
-delete $conf->{lock};
+#see note in phase1

-eval { PVE::LXC::write_config($vmid, $conf); };
-if (my $err = $@) {
- $self->log('err', $err);
-}
+#my $conf = $self->{vmconf};
+#delete $conf->{lock};
+
+#eval { PVE::LXC::write_config($vmid, $conf); };
+#if (my $err = $@) {
+ #$self->log('err', $err);
+#}
}

1;
--
2.1.4




___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] The network performance future for VMs

2015-08-24 Thread Cesar Peschiera

Hi Alexandre
Many thanks for your prompt reply.

And please, if you can, let me know the results of your tests.

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Sunday, August 23, 2015 6:50 AM
Subject: Re: [pve-devel] The network performance future for VMs


Hi Cesar,

I will try to do the tests again next week with qemu 2.4 and the latest
virtio driver,
with different Windows versions.





- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Dimanche 23 Août 2015 09:10:42
Objet: Fw: [pve-devel] The network performance future for VMs

Hi Alexandre again

Thanks for your prompt reply, and please, let me to understand better...

Only as a memory refresh, please see this link:
http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104808#post104808
2 queues, more don't improve performance. (queues are only for inbound
traffic).
For outbound traffic, as far I remember, the difference is huge between
2008r2 and 2012r2. (something like 1,5 vs 6gbits).

Now, is different the speed of the net with win2k8r2?...
... If it is correct, let me to ask you what is your setup?

Moreover, in this link (Virtio Win Drivers):
https://fedorapeople.org/groups/virt/virtio-win/CHANGELOG

NEWS :
I see a data maybe very interesting:
* Thu Jun 04 2015 Cole Robinson crobi...@redhat.com - 0.1.105-1
- Update to virtio-win-prewhql-0.1-105
- BZ 1223426 NetKVM Fix for performance degradation with multi-queue

Maybe, with this version of the driver, the net driver for Win2k8r2 be more
fast.
(and for any Windows systems)
I would like to hear your opinion, specially for Win2k8r2.

Best regards
Cesar

- Original Message - 

From: Alexandre DERUMIER aderum...@odiso.com
To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, August 19, 2015 5:08 AM
Subject: Re: [pve-devel] The network performance future for VMs



Page 18:
Config I - DPDK with VFIO device assignment (future)
According to the graph, it is functional to QEMU, but I think that not
for
VMs Windows... is it correct???


I really don't know ;) I thought that qemu support was enough, maybe not
...



Moreover, all this questions is due to that i want to improve the speed
of
the network in a VM Win2k8 r2
... is there anything that we can do for get better performance?
(I would like to get a maximum of 20 Gb/s due to that i have 2 ports of
10
Gb/s each one with LACP configured and with the Linux stack)
Note: I know that i will need two connections simultaneous of net for get
10
Gb/s in each link.


From my tests, I get a lot better performance with win2012r2 and win2K8r2.

and also, you can try the NIC multiqueue feature; it should improve
performance with multiple streams.
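
A hedged illustration of what that looks like on the PVE side: the VM's virtio
NIC gets a queues setting, for example

  qm set <vmid> --net0 virtio,bridge=vmbr0,queues=4

where the VM id, bridge name and queue count are placeholders.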

Don't remember, but I think I was around 7-8gigabit for 1 windows vm.
But still a lot lower than linux vm.


now, it's clear that dpdk should improve performance for high pps.



- Mail original - 
De: Cesar Peschiera br...@click.com.py

À: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Mercredi 19 Août 2015 10:50:37
Objet: Re: [pve-devel] The network performance future for VMs

Thanks Alexandre for your prompt response.


Seem to be easy with vhost-user virtual network card, and this one can't
work with linux bridge, because it's userland


I forgot say that OVS was configured in two ports for the LAN link, and
other two ports with the Linux stack for DRBD in blance-rr NIC-to-NIC (OVS
not have the option balance-rr).
In this case is that i had problems with DRBD, then, I preferred disable
totally in my servers the OVS setup.


I'm not sure, but maybe dpkg on linux stack can only work with host
physical interfaces and not qemu virtual interfaces.

dpkg?, i assume that you want to say DPDK.

Please see in this link:
https://videos.cdn.redhat.com/summit2015/presentations/12752_red-hat-enterprise-virtualization-hypervisor-kvm-now-in-the-future.pdf

Page 18:
Config I - DPDK with VFIO device assignment (future)
According to the graph, it is functional to QEMU, but I think that not for
VMs Windows... is it correct???

Moreover, all this questions is due to that i want to improve the speed of
the network in a VM Win2k8 r2
... is there anything that we can do for get better performance?
(I would like to get a maximum of 20 Gb/s due to that i have 2 ports of 10
Gb/s each one with LACP configured and with the Linux stack)
Note: I know that i will need two connections simultaneous of net for get
10
Gb/s in each link.


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, August 19, 2015 1:39 AM
Subject: Re: [pve-devel] The network performance future for VMs



So now my question is if DPDK can

Re: [pve-devel] The network performance future for VMs

2015-08-24 Thread Cesar Peschiera

A tip:

A graphical freeware tool for Windows (32 and 64 bit) that is easy to use (I
have not tested it): LANBench

Downloadable from here:
http://www.zachsaw.com/?pg=lanbench_tcp_network_benchmark

If possible, when you do the new tests, I would also like to hear your
opinion about this software.

- Original Message - 
From: Cesar Peschiera br...@click.com.py

To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Monday, August 24, 2015 3:21 AM
Subject: Re: [pve-devel] The network performance future for VMs



Hi Alexandre
Many thanks for your prompt reply

And please, if you can, let me to know the results of your tests

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Sunday, August 23, 2015 6:50 AM
Subject: Re: [pve-devel] The network performance future for VMs


Hi Cesar,

I will try to done tests again next week with qemu 2.4 and last virtio
driver,
with different windows versions;





- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Dimanche 23 Août 2015 09:10:42
Objet: Fw: [pve-devel] The network performance future for VMs

Hi Alexandre again

Thanks for your prompt reply, and please, let me to understand better...

Only as a memory refresh, please see this link:
http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104808#post104808
2 queues, more don't improve performance. (queues are only for inbound
traffic).
For outbound traffic, as far I remember, the difference is huge between
2008r2 and 2012r2. (something like 1,5 vs 6gbits).

Now, is different the speed of the net with win2k8r2?...
... If it is correct, let me to ask you what is your setup?

Moreover, in this link (Virtio Win Drivers):
https://fedorapeople.org/groups/virt/virtio-win/CHANGELOG

NEWS :
I see a data maybe very interesting:
* Thu Jun 04 2015 Cole Robinson crobi...@redhat.com - 0.1.105-1
- Update to virtio-win-prewhql-0.1-105
- BZ 1223426 NetKVM Fix for performance degradation with multi-queue

Maybe, with this version of the driver, the net driver for Win2k8r2 be
more
fast.
(and for any Windows systems)
I would like to hear your opinion, specially for Win2k8r2.

Best regards
Cesar

- Original Message - 

From: Alexandre DERUMIER aderum...@odiso.com
To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, August 19, 2015 5:08 AM
Subject: Re: [pve-devel] The network performance future for VMs



Page 18:
Config I - DPDK with VFIO device assignment (future)
According to the graph, it is functional to QEMU, but I think that not
for
VMs Windows... is it correct???


I really don't known ;) I thinked that qemu support was enough, maybe not
...



Moreover, all this questions is due to that i want to improve the speed
of
the network in a VM Win2k8 r2
... is there anything that we can do for get better performance?
(I would like to get a maximum of 20 Gb/s due to that i have 2 ports of
10
Gb/s each one with LACP configured and with the Linux stack)
Note: I know that i will need two connections simultaneous of net for
get
10
Gb/s in each link.


From my tests, I get a lot better performance with win2012r2 and
win2K8r2.

and also, you can try nic multiqueue feature, it should improve
performance
with multiple streams.

Don't remember, but I think I was around 7-8gigabit for 1 windows vm.
But still a lot lower than linux vm.


now, it's clear that dpdk should improve performance for high pps.



- Mail original - 
De: Cesar Peschiera br...@click.com.py

À: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Mercredi 19 Août 2015 10:50:37
Objet: Re: [pve-devel] The network performance future for VMs

Thanks Alexandre for your prompt response.


Seem to be easy with vhost-user virtual network card, and this one can't
work with linux bridge, because it's userland


I forgot say that OVS was configured in two ports for the LAN link, and
other two ports with the Linux stack for DRBD in blance-rr NIC-to-NIC
(OVS
not have the option balance-rr).
In this case is that i had problems with DRBD, then, I preferred disable
totally in my servers the OVS setup.


I'm not sure, but maybe dpkg on linux stack can only work with host
physical interfaces and not qemu virtual interfaces.

dpkg?, i assume that you want to say DPDK.

Please see in this link:
https://videos.cdn.redhat.com/summit2015/presentations/12752_red-hat-enterprise-virtualization-hypervisor-kvm-now-in-the-future.pdf

Page 18:
Config I - DPDK with VFIO device assignment (future)
According to the graph, it is functional to QEMU, but I think that not
for
VMs Windows... is it correct???

Moreover, all this questions is due to that i want to improve the speed
of
the network in a VM Win2k8 r2
... is there anything that we can do for get better

Re: [pve-devel] openvswitch-2.4.0 released

2015-08-24 Thread Cesar Peschiera

Also for DPDK vhost.

A complete list of features here (all versions):
http://openvswitch.org/releases/NEWS-2.4.0

Note: on PVE, I had problems with DRBD 8.4.5 working on the Linux stack
while, on other NICs, OVS was enabled only for LAN communication, so I had to
disable OVS completely in PVE, and after that DRBD worked like a charm.


- Original Message - 
From: Michael Rasmussen m...@datanom.net

To: pve-devel@pve.proxmox.com
Sent: Monday, August 24, 2015 2:53 PM
Subject: [pve-devel] openvswitch-2.4.0 released


Hi all,



openvswitch-2.4.0 released.



Notably:
Support for multicast snooping (IGMPv1, IGMPv2 and IGMPv3)



http://openvswitch.org/releases/NEWS-2.4.0






___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] openvswitch-2.4.0 released

2015-08-24 Thread Cesar Peschiera

I am not sure about the virtio-win drivers and their support for DPDK. Why?
- In this link:
https://fedorapeople.org/groups/virt/virtio-win/CHANGELOG
I don't see any info about DPDK.

- In your web link, I read this:
"A virtio-net back-end implementation providing a subset of virtio-net
features"

Finally, I think that the back-end does not come with the virtio-net driver
for Windows systems, so I don't understand why you think that with
the virtio-win driver we have support for DPDK?

(I hope I'm wrong)

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: datanom.net m...@datanom.net; pve-devel pve-devel@pve.proxmox.com
Sent: Monday, August 24, 2015 10:40 PM
Subject: Re: [pve-devel] openvswitch-2.4.0 released



Also for DPDK vHost


Great :)

About DPDK and Windows, I have found information here:

http://www.ran-lifshitz.com/2015/06/17/open-vswitch-netdev-dpdk-with-vhost-user-support/

It seems that before vhost-user support (included in OVS 2.4), it was already
possible to use DPDK, but that needed the DPDK libraries running inside the
guest.

With vhost-user, a simple virtio-net should work (and it should work with
Windows too ;)
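
For reference, a sketch of how such a vhost-user port is created on the OVS
side (based on the OVS 2.4 DPDK documentation; bridge and port names are
placeholders, and OVS must have been built with DPDK support):

  ovs-vsctl add-port br0 vhost-user-0 -- set Interface vhost-user-0 type=dpdkvhostuser

QEMU then attaches a virtio-net device to the socket that OVS creates for this
port, so the guest itself only needs a plain virtio-net driver.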



- Mail original -
De: Cesar Peschiera br...@click.com.py
À: datanom.net m...@datanom.net, pve-devel pve-devel@pve.proxmox.com
Envoyé: Mardi 25 Août 2015 01:27:27
Objet: Re: [pve-devel] openvswitch-2.4.0 released

Also for DPDK vHost

A complete list of features here (all versions):
http://openvswitch.org/releases/NEWS-2.4.0

Note: on PVE, i had problems with DRBD 8.4.5 working with the Linux Stack,
while that in others NICs, OVS was enabled only for LAN communication, so i
had that disable completely OVS in PVE, and after of it DRBD was working
like a charm.

- Original Message - 
From: Michael Rasmussen m...@datanom.net

To: pve-devel@pve.proxmox.com
Sent: Monday, August 24, 2015 2:53 PM
Subject: [pve-devel] openvswitch-2.4.0 released


Hi all,



openvswitch-2.4.0 released.



Notably:
Support for multicast snooping (IGMPv1, IGMPv2 and IGMPv3)



http://openvswitch.org/releases/NEWS-2.4.0




___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] openvswitch-2.4.0 released

2015-08-24 Thread Cesar Peschiera

So they do not support kernel 4.1??


Maybe not yet, but it seems that there is a patch in progress...
http://permalink.gmane.org/gmane.network.openvswitch.devel/51364
http://permalink.gmane.org/gmane.network.openvswitch.devel/51443



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Fw: The network performance future for VMs

2015-08-23 Thread Cesar Peschiera

Hi Alexandre again

Thanks for your prompt reply, and please let me understand this better...

Only as a memory refresh, please see this link:
http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104808#post104808
2 queues, more don't improve performance. (queues are only for inbound 
traffic).
For outbound traffic, as far as I remember, the difference is huge between
2008r2 and 2012r2 (something like 1.5 vs 6 Gbit/s).


Now, is the network speed different with Win2k8r2?...
... If that is correct, let me ask you: what is your setup?

Moreover, in this link (Virtio Win Drivers):
https://fedorapeople.org/groups/virt/virtio-win/CHANGELOG

NEWS:
I see a piece of data that may be very interesting:
* Thu Jun 04 2015 Cole Robinson crobi...@redhat.com - 0.1.105-1
- Update to virtio-win-prewhql-0.1-105
- BZ 1223426 NetKVM Fix for performance degradation with multi-queue

Maybe with this version of the driver, the network driver for Win2k8r2 will
be faster (and for any Windows system).
I would like to hear your opinion, especially for Win2k8r2.
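
For reference, as far as I know the multiqueue feature is enabled per
virtual NIC in the VM configuration with the queues option; a minimal sketch
(the VM ID, MAC address and bridge are only illustrative):

Shell# qm set 104 -net0 virtio=DE:AD:BE:EF:01:02,bridge=vmbr0,queues=2

The guest driver also has to support multiqueue, which is why the NetKVM fix
above looks interesting to me.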

Best regards
Cesar

- Original Message - 

From: Alexandre DERUMIER aderum...@odiso.com
To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, August 19, 2015 5:08 AM
Subject: Re: [pve-devel] The network performance future for VMs



Page 18:
Config I - DPDK with VFIO device assignment (future)
According to the graph, it is functional to QEMU, but I think that not 
for

VMs Windows... is it correct???


I really don't known ;)  I thinked that qemu support was enough, maybe not
...


Moreover, all this questions is due to that i want to improve the speed 
of

the network in a VM Win2k8 r2
... is there anything that we can do for get better performance?
(I would like to get a maximum of 20 Gb/s due to that i have 2 ports of 
10

Gb/s each one with LACP configured and with the Linux stack)
Note: I know that i will need two connections simultaneous of net for get
10
Gb/s in each link.


From my tests, I get a lot better performance with win2012r2 and win2K8r2.

and also, you can try nic multiqueue feature, it should improve 
performance

with multiple streams.

Don't remember, but I think I was around 7-8gigabit for 1 windows vm.
But still a lot lower than linux vm.


now, it's clear that dpdk should improve performance for high pps.



- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Mercredi 19 Août 2015 10:50:37
Objet: Re: [pve-devel] The network performance future for VMs

Thanks Alexandre for your prompt response.


Seem to be easy with vhost-user virtual network card, and this one can't
work with linux bridge, because it's userland


I forgot say that OVS was configured in two ports for the LAN link, and
other two ports with the Linux stack for DRBD in blance-rr NIC-to-NIC (OVS
not have the option balance-rr).
In this case is that i had problems with DRBD, then, I preferred disable
totally in my servers the OVS setup.


I'm not sure, but maybe dpkg on linux stack can only work with host
physical interfaces and not qemu virtual interfaces.

dpkg?, i assume that you want to say DPDK.

Please see in this link:
https://videos.cdn.redhat.com/summit2015/presentations/12752_red-hat-enterprise-virtualization-hypervisor-kvm-now-in-the-future.pdf

Page 18:
Config I - DPDK with VFIO device assignment (future)
According to the graph, it is functional to QEMU, but I think that not for
VMs Windows... is it correct???

Moreover, all this questions is due to that i want to improve the speed of
the network in a VM Win2k8 r2
... is there anything that we can do for get better performance?
(I would like to get a maximum of 20 Gb/s due to that i have 2 ports of 10
Gb/s each one with LACP configured and with the Linux stack)
Note: I know that i will need two connections simultaneous of net for get 
10

Gb/s in each link.


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, August 19, 2015 1:39 AM
Subject: Re: [pve-devel] The network performance future for VMs


So now my question is if DPDK can be activated also with the Linux 
stack?.


I need to dig a little more about this.
Intel seem to push the ovs-dpdk in all conferenfece I have see.
(Seem to be easy with vhost-user virtual network card, and this one can't
work with linux bridge, because it's userland)


I'm not sure, but maybe dpkg on linux stack can only work with host 
physical

interfaces and not qemu virtual interfaces.



- Mail original - 
De: Cesar Peschiera br...@click.com.py

À: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Mardi 18 Août 2015 21:25:46
Objet: Re: [pve-devel] The network performance future for VMs

Oh, ok.

In the past, i had problems with DRBD 8.4.5 when OVS is enabled, so i had
that change my

[pve-devel] Restore quickly by CLI a backup when i have different disks in destination

2015-08-20 Thread Cesar Peschiera

Hi to all

Excuse me please if I am asking this question in the wrong place, but I
asked the same question on different dates in the PVE forum, and no one has
responded (since June 28, 2015).

The question is: how can I quickly restore, by CLI, a backup of a VM when
the backup contains several virtual disks and I have different destination
storages to restore them to?

If anybody knows the answer, please go to this link, read the details, and
answer me:
http://forum.proxmox.com/threads/23300-By-CLI-restore-quickly-a-backup-when-i-have-different-disks-in-destination
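
For context, as far as I know the standard restore command only accepts a
single target storage for the whole VM; a minimal sketch of what I run today
(the archive name, VM ID and storage are only illustrative):

Shell# qmrestore /mnt/pve/NFS-Disco2/dump/vzdump-qemu-104.vma.lzo 104 --storage local

After that, the disks that should live on other storages must be moved by
hand, and that is exactly the extra work and extra disk space that I would
like to avoid.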

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] removed ISO files block migration

2015-08-20 Thread Cesar Peschiera
- Original Message - 
From: Wolfgang Link w.l...@proxmox.com
To: Stefan Priebe - Profihost AG s.pri...@profihost.ag; 
pve-devel@pve.proxmox.com

Sent: Thursday, August 20, 2015 3:52 AM
Subject: Re: [pve-devel] removed ISO files block migration


I think we should eject it automatically instead of do not start the 
machine!


This could be done easily if we check this on start.



+1 for this.
Simply do an eject before migration

Also applicable for HA.

But with caution; for example, what if I need to start the VM with a live CD...


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] removed ISO files block migration

2015-08-20 Thread Cesar Peschiera

I  would do it this way. skip cdrom and set it.


+1
Set it to none in the <vmid>.conf of the VM.

Also applicable for HA.
But with caution; for example, what if I need to start the VM manually with
a live CD...
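
Just to illustrate what I mean by setting it to none (the VM ID and the
drive slot are only examples):

Shell# qm set 104 -ide2 none,media=cdrom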



- Original Message - 
From: Wolfgang Link w.l...@proxmox.com
To: Stefan Priebe - Profihost AG s.pri...@profihost.ag; Alexandre 
DERUMIER aderum...@odiso.com

Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, August 20, 2015 4:50 AM
Subject: Re: [pve-devel] removed ISO files block migration



I  would do it this way. skip cdrom and set it.

On 08/20/2015 10:41 AM, Stefan Priebe - Profihost AG wrote:

I'm currently struggling.

There is sync_disks method in QemuMigrate.pm which might be good for
this. But the code is failing before in sub prepare calling
PVE::Storage::activate_volumes($self->{storecfg}, $vollist);

sub activate_volume {
 my ($class, $storeid, $scfg, $volname, $exclusive, $cache) = @_;

 my $path = $class->filesystem_path($scfg, $volname);

 # check is volume exists
 if ($scfg->{path}) {
 die "volume '$storeid:$volname' does not exist\n" if ! -e $path;

So where to implement this?

Another method in prepare before calling activate_volume to cleanup the
config?

Another approach would be to skip cdroms in those checks completely and
set a cdrom in vm_start to none if the iso does not exist.

Stefan
Am 20.08.2015 um 10:33 schrieb Alexandre DERUMIER:

I think we should eject it automatically instead of do not start the
machine!

+1 for this.
Simply do an eject before migration, if the iso is local


- Mail original -
De: Wolfgang Link w.l...@proxmox.com
À: Stefan Priebe s.pri...@profihost.ag, pve-devel 
pve-devel@pve.proxmox.com

Envoyé: Jeudi 20 Août 2015 09:52:34
Objet: Re: [pve-devel] removed ISO files block migration

I think we should eject it automatically instead of do not start the
machine!

This could be done easily if we check this on start.

On 08/20/2015 09:39 AM, Stefan Priebe - Profihost AG wrote:

Hi,

i've no real idea how to solve this.

Currently the following happens pretty easily.

You insert a cd iso image into your VM CD ROM. May be driver disk for
windows virtio-X.

Than some month later somebody updates this one from virtio-X to
virtio-Y and deletes the old iso files.

 From now on every attempt to migrate this VM fails with:
volume 'cdisoimages:iso/virtio-win-0.1-100.iso' does not exist

It would be great if we can handle this.

Stefan
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel




___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] The network performance future for VMs

2015-08-19 Thread Cesar Peschiera

Thanks Alexandre for your prompt response.


Seem to be easy with vhost-user virtual network card, and this one can't
work with linux bridge, because it's userland


I forgot to say that OVS was configured on two ports for the LAN link, and
another two ports used the Linux stack for DRBD in balance-rr NIC-to-NIC
(OVS does not have the balance-rr option).
It was in that setup that I had problems with DRBD, so I preferred to
disable the OVS setup completely on my servers.


I'm not sure, but maybe dpkg on linux stack can only work with host
physical interfaces and not qemu virtual interfaces.

dpkg? I assume you mean DPDK.

Please see in this link:
https://videos.cdn.redhat.com/summit2015/presentations/12752_red-hat-enterprise-virtualization-hypervisor-kvm-now-in-the-future.pdf

Page 18:
Config I - DPDK with VFIO device assignment (future)
According to the graph it is functional for QEMU, but I think not for
Windows VMs... is that correct???

Moreover, all these questions are because I want to improve the network
speed in a Win2k8r2 VM
... is there anything that we can do to get better performance?
(I would like to get a maximum of 20 Gb/s, since I have 2 ports of 10 Gb/s
each, configured with LACP and the Linux stack.)
Note: I know that I will need two simultaneous network connections to get
10 Gb/s on each link.



- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, August 19, 2015 1:39 AM
Subject: Re: [pve-devel] The network performance future for VMs



So now my question is if DPDK can be activated also with the Linux stack?.


I need to dig a little more about this.
Intel seem to push the ovs-dpdk in all conferenfece I have see.
(Seem to be easy with vhost-user virtual network card, and this one can't
work with linux bridge, because it's userland)


I'm not sure, but maybe dpkg on linux stack can only work with host physical
interfaces and not qemu virtual interfaces.



- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Mardi 18 Août 2015 21:25:46
Objet: Re: [pve-devel] The network performance future for VMs

Oh, ok.

In the past, i had problems with DRBD 8.4.5 when OVS is enabled, so i had
that change my setup from OVS to the Linux stack, after of it, i had no more
problems with DRBD.

About of the problem with OVS and DRBD, i did not test in depth the problem
(in the season of preproduction phase), but if i not bad remember, maybe the
problem appears when OVS Intport is enabled, or maybe only when OVS is
enabled in the setup.

I was using PVE 3.3

So now my question is if DPDK can be activated also with the Linux stack?.

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Tuesday, August 18, 2015 8:57 AM
Subject: Re: [pve-devel] The network performance future for VMs



So, i would like to ask about of the future of PVE in network performance
terms.


dpdk will be implemented in openvswitch through vhost-user,
I'm waiting for ovs 2.4 to look at this.


- Mail original - 
De: Cesar Peschiera br...@click.com.py

À: pve-devel pve-devel@pve.proxmox.com
Envoyé: Mardi 18 Août 2015 13:00:59
Objet: [pve-devel] The network performance future for VMs

Hi developers of PVE

I would like to talk about of the network speed for VMs:

I see in this link (Web official of Red Hat):
https://videos.cdn.redhat.com/summit2015/presentations/12752_red-hat-enterprise-virtualization-hypervisor-kvm-now-in-the-future.pdf

In the page 19 of this pdf, i see a interesting info:
Network Function Virtualization (NFV)
Throughput and Packets/sec RHEL7.x + DPDK (Data Plane Development Kit):

Millons packets per second:
KVM = 208
Docker = 215
Bare-metal = 218
HW maximum = 225

Between KVM and Bare-metal, the difference is little: 10

Also i see a list of HW NICs compatibility on this link:
http://dpdk.org/doc/nics

So, i would like to ask about of the future of PVE in network performance
terms.

Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] The network performance future for VMs

2015-08-18 Thread Cesar Peschiera

Oh, ok.

In the past, I had problems with DRBD 8.4.5 when OVS was enabled, so I had
to change my setup from OVS to the Linux stack; after that, I had no more
problems with DRBD.


About the problem with OVS and DRBD, I did not investigate it in depth (it
was during the preproduction phase), but if I remember correctly, the
problem appears when an OVS IntPort is enabled, or maybe simply whenever
OVS is enabled in the setup.


I was using PVE 3.3

So now my question is: can DPDK also be activated with the Linux stack?

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Tuesday, August 18, 2015 8:57 AM
Subject: Re: [pve-devel] The network performance future for VMs



So, i would like to ask about of the future of PVE in network performance
terms.


dpdk will be implemented in openvswitch through vhost-user,
I'm waiting for ovs 2.4 to look at this.


- Mail original -
De: Cesar Peschiera br...@click.com.py
À: pve-devel pve-devel@pve.proxmox.com
Envoyé: Mardi 18 Août 2015 13:00:59
Objet: [pve-devel] The network performance future for VMs

Hi developers of PVE

I would like to talk about of the network speed for VMs:

I see in this link (Web official of Red Hat):
https://videos.cdn.redhat.com/summit2015/presentations/12752_red-hat-enterprise-virtualization-hypervisor-kvm-now-in-the-future.pdf

In the page 19 of this pdf, i see a interesting info:
Network Function Virtualization (NFV)
Throughput and Packets/sec RHEL7.x + DPDK (Data Plane Development Kit):

Millons packets per second:
KVM = 208
Docker = 215
Bare-metal = 218
HW maximum = 225

Between KVM and Bare-metal, the difference is little: 10

Also i see a list of HW NICs compatibility on this link:
http://dpdk.org/doc/nics

So, i would like to ask about of the future of PVE in network performance
terms.

Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Implement resize for the DRBD backend.

2015-08-04 Thread Cesar Peschiera

 -#my $cmd = ['/sbin/lvextend', '-L', $size, $path];
 -#run_command($cmd, errmsg => "error resizing volume '$path'");
 +# FIXME if there's ever more than one volume in a resource

 not sure if we ever want to support multiple volumes inside one
 resource?
 Why would we want to do that?
So that the volumes use a common write stream across the network.

If you have eg. a database that uses 3 volumes (data, log data,
write-ahead
log), you want to have these three at the *same* point in time.

When one of 3 connections breaks, then the other two volumes could run
ahead - and if the Primary node breaks down at that time, the Secondary
wouldn't have 3 consistent volumes, which might lead to troubles.


Ah, OK.


Hi Dietmar
I agree with Philipp Marek, but such a setup has its disadvantages...

For example, I have a VM with 4 volumes on different disk arrays (properly
built with SAS 15K 600GB disks, several RAID-10 arrays and a RAID-1), I also
have several 10 Gb/s NICs connected NIC-to-NIC with round-robin bonding,
and finally I have added to crontab a script that runs a verification of
all the replicated DRBD volumes, executed once a week.
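
Only as an illustration, the entry in root's crontab is essentially this
(the exact time is arbitrary, and 'verify all' needs a verify-alg configured
in the resources):

# run the DRBD online verification every Sunday at 03:00
0 3 * * 0   /sbin/drbdadm verify all

Any out-of-sync blocks found are reported in the kernel log, so the result
is easy to check afterwards.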

Then, with this setup, if I have one DRBD resource per volume I can verify
the 4 volumes simultaneously; otherwise, i.e. if I have one DRBD resource
holding several volumes, DRBD cannot verify each volume simultaneously, and
this task will take a long time to finish.

Moreover, as I have several NICs for my DRBD resources, I have the
advantage of a higher total bandwidth for these simultaneous verifications
(the sum of the bandwidth available on each NIC). That is, if I have several
volumes in a single DRBD resource, I cannot take advantage of the higher
total bandwidth offered by the other available NICs.

Moreover, with DRBD in dual-primary mode it is very easy to resynchronize
the resources online, i.e. without turning anything off, and if I have each
DRBD resource on separate NICs, a resynchronization in progress will finish
in less time.

Moreover, how likely is it that a hard disk or one DRBD resource breaks
down and, simultaneously or within a short time, a server does too???


Finally, if possible, I would like to have the option of configuring only
one volume per DRBD resource for each VM.


Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Network configuration

2015-07-22 Thread Cesar Peschiera
I had problems with DRBD 8.4.5 using dedicated NICs on a Linux bridge
(NIC-to-NIC) in balance-rr mode, with an OVS IntPort for the LAN
communication, and DRBD did not work correctly.

I have not tried DRBD with OVS configured for the LAN but without an OVS
IntPort; I simply changed the LAN communication from OVS to a Linux bridge,
and DRBD worked as expected.
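
For reference, the DRBD link on my side is just a Linux balance-rr bond over
the dedicated NICs, roughly like this in /etc/network/interfaces (the
interface names and addresses are only examples):

auto bond1
iface bond1 inet static
        address 10.1.1.1
        netmask 255.255.255.0
        slaves eth2 eth3
        bond_miimon 100
        bond_mode balance-rr

The two ports are cabled directly NIC-to-NIC to the peer node, without any
switch in between.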



- Original Message - 
From: Michael Rasmussen m...@datanom.net

To: pve-devel@pve.proxmox.com
Sent: Tuesday, July 21, 2015 7:25 PM
Subject: [pve-devel] Network configuration


Second. If the above is implemented then in my opinion using
openvswitch is far superior to Linux Bridge/Bond and gives greater
flexibility while at the same time is more logical and simple in use in
which case I propose that the installer change default network
configuration to use openvswitch in proxmox 4.0.

What is your opinion to this suggestion?







___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] vzdump : exclude iothread disks

2015-07-20 Thread Cesar Peschiera
Can we raise an error instead? The user can then set backup=no for those 
drives.


Or maybe it would be better that, in the hard disk creation window, we can
see the no backup option already checked, not modifiable, and on a gray
background.

And maybe a little comment saying that for this kind of disk this option
isn't available.

The idea is that, from its creation, the common user knows that this kind
of disk will not have the backup option.


Best regards
Cesar

- Original Message - 
From: Dietmar Maurer diet...@proxmox.com

To: Alexandre Derumier aderum...@odiso.com; pve-devel@pve.proxmox.com
Sent: Monday, July 20, 2015 12:28 PM
Subject: Re: [pve-devel] [PATCH] vzdump : exclude iothread disks



Currently backup don't work with iothread feature, and crash qemu

For now, disable backup for theses drives until backup code is fixed.


Can we raise an error instead? The user can then set backup=no for those 
drives.


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] backup : add pigz compressor

2015-07-11 Thread Cesar Peschiera

Yes, that makes sense to me.


Or maybe the PVE GUI could also have a third option to use pigz, where the
user can also select the number of cores to use; in that case it would
probably also be better to add a cautionary message saying that using many
cores can slow down the VMs. At least for me, it would be fantastic.
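
Only as a sketch of how I imagine the vzdump.conf side could look (the value
is just an example, meaning "when gzip is selected, use pigz with 4
threads"):

# /etc/vzdump.conf
pigz: 4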


- Original Message - 
From: Dietmar Maurer diet...@proxmox.com

To: Eric Blevins ericlb...@gmail.com
Cc: pve-u...@pve.proxmox.com; pve-devel@pve.proxmox.com
Sent: Saturday, July 11, 2015 12:27 AM
Subject: Re: [pve-devel] backup : add pigz compressor



You could even make it so using pigz requires a setting in
vzdump.conf. So in GUI you still can only select gzip or lzop.
If set to gzip and vzdump.conf has pigz:on then pigz is used instead of 
gzip.

Most novice users are only going to use the GUI, this would reduce the
likelyhood of them turning on pigz and then complaiing about their
decision.


Yes, that makes sense to me. Someone want to write a patch?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] unknown setting 'maxcpus'

2015-07-06 Thread Cesar Peschiera
That's wrong because they exist also a thread option for processor (we 
don't use it currently).
That is the point: I believe that incomplete options that aren't currently
used in the PVE GUI can confuse those with less knowledge, and in the end
they will run their VMs more slowly.



full topology  = processor * cores * threads

That's right, but the PVE GUI doesn't show the topology in this way.


(like for hyperthreading, you can have multiple threads by core)
That is the point: a non-expert user can believe that in the PVE GUI, for
each core selected, all its corresponding sub-threads will be included.
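
Only to make the terminology concrete, as far as I understand this is
roughly what the full topology looks like at the QEMU level (the numbers are
just an example):

# 2 sockets x 2 cores x 2 threads = 8 logical CPUs exposed to the guest
-smp 8,sockets=2,cores=2,threads=2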


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: dietmar diet...@proxmox.com; pve-devel pve-devel@pve.proxmox.com
Sent: Monday, July 06, 2015 3:01 AM
Subject: Re: [pve-devel] unknown setting 'maxcpus'



Maybe will be better change the terms in the GUI of PVE, ie, for create or
change the CPU configuration, instead say Cores, must say Threads of
Processor, of this mode, all people will know that the threads also are
considered when should choose the configuration more right.


That's wrong because they exist also a thread option for processor (we don't 
use it currently).


full topology  = processor * cores * threads

(like for hyperthreading, you can have multiple threads by core)


- Mail original -
De: Cesar Peschiera br...@click.com.py
À: dietmar diet...@proxmox.com, aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Dimanche 5 Juillet 2015 16:33:04
Objet: Re: [pve-devel] unknown setting 'maxcpus'

- Original Message - 
From: Dietmar Maurer diet...@proxmox.com

To: Stefan Priebe s.pri...@profihost.ag; Alexandre DERUMIER
aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, July 03, 2015 5:08 PM
Subject: Re: [pve-devel] unknown setting 'maxcpus'



cores*socket is the maximum, so vcpus needs to be smaller/equal.


Maybe will be better change the terms in the GUI of PVE, ie, for create or
change the CPU configuration, instead say Cores, must say Threads of
Processor, of this mode, all people will know that the threads also are
considered when should choose the configuration more right.

Regards
Cesar 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] unknown setting 'maxcpus'

2015-07-05 Thread Cesar Peschiera
- Original Message - 
From: Dietmar Maurer diet...@proxmox.com
To: Stefan Priebe s.pri...@profihost.ag; Alexandre DERUMIER 
aderum...@odiso.com

Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, July 03, 2015 5:08 PM
Subject: Re: [pve-devel] unknown setting 'maxcpus'



cores*socket is the maximum, so vcpus needs to be smaller/equal.


Maybe it would be better to change the terms in the PVE GUI: when creating
or changing the CPU configuration, instead of saying Cores it should say
Processor Threads; this way everybody will know that the threads are also
taken into account when choosing the most suitable configuration.


Regards
Cesar 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] A strategy if the upgrade of wheezy to jessie isn't possible

2015-05-12 Thread Cesar Peschiera

Oh, excuse me please.

I meant the migration from Debian wheezy to Jessie with PVE 4.0, whenever
the online migration isn't possible.
Is it a good idea? Or do you have a better idea?

Moreover, about our conversation, what did you mean with "you need to
re-install"?

- Original Message - 
From: Dietmar Maurer diet...@proxmox.com

To: Cesar Peschiera br...@click.com.py; pve-devel
pve-devel@pve.proxmox.com
Sent: Monday, May 11, 2015 11:42 AM
Subject: Re: A strategy if the upgrade of wheezy to jessie isn't possible



I guess that in case of be need the reinstallation of PVE, can be a good
strategy based in stages of short delay of time, as i explain to
continuation:

Stage 1:
- PVE releases the packets of the cluster communication of PVE 4.0 for
PVE
3.4 in the test repository of PVE 3.4.


... but this is not what I meant by you need to re-install



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Question about of the future of DRBD9 in PVE

2015-05-12 Thread Cesar Peschiera
I.e., with PVE as a VM to do the DRBD tests (with a file system (ext3) on
top of the virtual disk).

Do you think that would be a valid method?

- Original Message - 
From: Dietmar Maurer diet...@proxmox.com

To: Cesar Peschiera br...@click.com.py; pve-devel
pve-devel@pve.proxmox.com
Sent: Monday, May 11, 2015 11:34 AM
Subject: Re: [pve-devel] Question about of the future of DRBD9 in PVE





If i can, i will with pleasure. (further, i will need to buy some pieces
of
hardware)

Maybe I can do it in VMs


Sure, you can test everything with VMs



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] A strategy if the upgrade of wheezy to jessie isn'tpossible

2015-05-12 Thread Cesar Peschiera

Alexandre, many thanks for the information.

And please, let me ask a few questions:
1) Should I change the repository information for the future updates (in
terms of Debian Jessie and PVE no-subscription)? Also, what text should it
have?

2) Is this upgrade stable for PVE, or is it only for development purposes?
(I don't see any information about that on the forum portal.)

3) In this link I see an ISO file for download named proxmox-upgrade-4.0:
http://www.proxmox.com/en/downloads
The question is: what is this ISO file useful for?

Many thanks again
Cesar

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: dietmar diet...@proxmox.com; pve-devel pve-devel@pve.proxmox.com
Sent: Tuesday, May 12, 2015 6:08 AM
Subject: Re: [pve-devel] A strategy if the upgrade of wheezy to jessie 
isn'tpossible




Oh, it is OK!, and where can i find the procedure?


Here my notes

convert cluster.conf to /etc/corosync/corosync.conf
generate /etc/corosync/authkey
#cp /etc/corosync/corosync.conf
#touch /proxmox_install_mode
#apt-get update
#apt-get upgrade
#apt-get remove clvm vzctl
#apt-get dist-upgrade
#apt-get install clvm lxc
rm /proxmox_install_mode
#killall -9 corosync
#corosync
#/etc/init.d/pve-cluster restart
#/etc/init.d/pvedaemon restart
#/etc/init.d/pveproxy restart
#/etc/init.d/pvestatd restart


- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com
Cc: dietmar diet...@proxmox.com, pve-devel pve-devel@pve.proxmox.com
Envoyé: Mardi 12 Mai 2015 11:32:12
Objet: Re: [pve-devel] A strategy if the upgrade of wheezy to jessie 
isn'tpossible


Hi Alexandre


I have done the upgrade to jessie online, without interruption
(but I don't use drbd, only remote storage which is more easier).


Oh, it is OK!, and where can i find the procedure? 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] A strategy if the upgrade of wheezy to jessieisn'tpossible

2015-05-12 Thread Cesar Peschiera

The proxmox-upgrade-4.0 you see is a service pack for the Mail Gateway,
and is unrelated to the Virtualisation Platform.



pve running on top of debian jessie is still devel-only at the momment.
It is not a recommanded production environnment ( this is why we chat
about it in pve-devel ;)


Many thanks Emmanuel

Now everything is clear to me.

Best regards
Cesar
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] A strategy if the upgrade of wheezy to jessie isn'tpossible

2015-05-12 Thread Cesar Peschiera

Hi Alexandre

I have done the upgrade to jessie online, without interruption 
(but I don't use drbd, only remote storage which is more easier).


Oh, it is OK!, and where can i find the procedure?
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Question about of the future of DRBD9 in PVE

2015-05-11 Thread Cesar Peschiera

Hi Dietmar

I would like to ask about the future of DRBD in PVE, so please let me
explain my setup and my concern, and finally ask a question.


This is my setup:
- I am using DRBD version 8.4.x in PVE.
- A dedicated network for DRBD with balance-rr bonding over pairs of
network interfaces, always connected NIC-to-NIC; several of these NICs are
10 Gb/s, to gain more speed.
- Some of these servers have DRBD and a RAID controller with BBU, RAID10,
several SAS 15K HDDs, a fast database in a VM (with a lot of RAM), etc.


These are my concerns:
- The possible problem that I will have (and maybe many people will have)
is this:
If with the next release of PVE with DRBD9 I cannot preserve this same
physical network setup for DRBD9, I will perhaps have to purchase a managed
10 Gb/s switch dedicated exclusively to the DRBD network connections (and
maybe two stacked managed switches to obtain HA on this DRBD network).

- The problem is that 10 Gb/s switches are very expensive.

My doubt:
As I have no practice with DRBD9, I would like to ask you: can I preserve
this physical network setup for DRBD9 with the next release of PVE, and so
avoid the purchase of these expensive new switches, while keeping DRBD
working properly in a similar way to the old version I was using, speaking
explicitly about its synchronous replication only between pairs of nodes?


Awaiting your reply, see you soon.

Best regards
Cesar


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Question about of the future of DRBD9 in PVE

2015-05-11 Thread Cesar Peschiera

Dietmar, thanks for your prompt reply

For me, reinstalling the PVE servers is possible on Sundays or holidays, as
long as the new PVE version is compatible with the old PVE version :-(
(I have a group of servers, and it isn't a job of a couple of days when DRBD
is part of the setup, given that the initial delay to complete the first
synchronization of the storages between nodes takes quite a while.)

But my real concern is about compatibility in DRBD9.
If you or anybody knows about DRBD and its backward compatibility, in terms
of a NIC-to-NIC physical network setup for the synchronous replication
between pairs of nodes (now or in the near future), please say so here;
this topic is very worrying for me.

Or, if I can, I will do the test and comment on it here.

Moreover, I guess that the PVE GUI will support DRBD, right?

Best regards
Cesar

- Original Message - 
From: Dietmar Maurer diet...@proxmox.com

To: Cesar Peschiera br...@click.com.py; pve-devel
pve-devel@pve.proxmox.com
Sent: Monday, May 11, 2015 5:33 AM
Subject: Re: [pve-devel] Question about of the future of DRBD9 in PVE



As i don't have practice with DRBD9, i would like to ask you if  can i
preserve this physical setup of net for DRBD9 with the next release of
PVE?,


I guess you can still use such network setup, but maybe you need
to re-install the servers. AFAIk upgrade existing setups does
not work so far.



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Question about of the future of DRBD9 in PVE

2015-05-11 Thread Cesar Peschiera
If I can, I will do it with pleasure (besides, I will need to buy some
pieces of hardware).


Maybe I can do it in VMs with virtio-net, right?



- Original Message - 
From: Dietmar Maurer diet...@proxmox.com

To: Cesar Peschiera br...@click.com.py; pve-devel
pve-devel@pve.proxmox.com
Sent: Monday, May 11, 2015 8:01 AM
Subject: Re: [pve-devel] Question about of the future of DRBD9 in PVE



Or if i can, i will do the test and will comment it here

Moreover, i guess that the PVE GUI will support DRBD, is right?


I suggest you want until we release a first beta. Then you can test
yourself to see if it works for you (would be great to have
additional testers).



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu incremental backup merged in qemu master

2015-05-07 Thread Cesar Peschiera

Hi developers team

If possible, I would like to know if anyone has interest in adding some
features to PVE.

For a long time I have wanted some features in Proxmox: in the PVE GUI, to
be able to restore a VM while selecting a specific storage for each disk
image to restore, instead of restoring all the virtual disk images to a
single storage.

I also think it would be a good option to be able to choose, for the
restoration, which disk images should be restored.

I believe that these two features would be very useful to save time when
restoring a backup, when the VM has a few or several disk images spread
over different disk arrays, or when the disk image volumes are very large.

The purpose of these features is to avoid a set of steps that we currently
must execute manually by CLI, which takes much longer to finish a
restoration successfully. Moreover, these new features would avoid the need
for much more free disk space to execute such intermediate procedures.

In addition to what I explained above, if the option of incremental backups
can be added, I think PVE will have a complete backup and restore solution,
very useful for all kinds of cases.

The best of success for you all
Cesar

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: pve-devel pve-devel@pve.proxmox.com
Sent: Tuesday, May 05, 2015 7:34 AM
Subject: [pve-devel] qemu incremental backup merged in qemu master



http://git.qemu.org/qemu.git?p=qemu.git;a=search;h=HEAD;st=commit;s=backup

yeaaahh :)


(memory unplug should be merged soon too)

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Default cache mode for VM hard drives

2015-04-13 Thread Cesar Peschiera

Hi Stanislav

Excuse me please, but your link doesn't tell me anything about the root of
the oos problem in DRBD (assuming that the data-integrity-alg directive is
disabled).


Also, I have configured write_cache_state = 0 in the lvm.conf file (maybe
this can help you; it is another recommendation from Linbit).
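
For reference, the relevant excerpt of my /etc/lvm/lvm.conf is simply this
(only the option mentioned above, inside the devices section; the rest of
the file is untouched):

devices {
    write_cache_state = 0
}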


I think that tuning DRBD is the key to success. I did it on workstations
and on real servers and never had oos problems, always with all hardware
firmware updated and with Intel NICs of 1 Gb/s or 10 Gb/s in balance-rr
bonding dedicated exclusively to DRBD replication (NIC-to-NIC; I don't know
whether it works well with the Broadcom brand or with other brands, I never
tested it).


On real servers, with the I/OAT engine enabled in the BIOS and with Intel
NICs, you get better performance (and in my case, without getting any oos).


Best regards
Cesar

- Original Message - 
From: Stanislav German-Evtushenko

To: Cesar Peschiera
Cc: Alexandre DERUMIER ; pve-devel
Sent: Monday, April 13, 2015 12:12 PM
Subject: Re: [pve-devel] Default cache mode for VM hard drives


Hi Cesar,


Out of sync with cache=directsync happen in very specific cases. Here is the 
decription of one of them: 
http://forum.proxmox.com/threads/18259-KVM-on-top-of-DRBD-and-out-of-sync-long-term-investigation-results?p=108099#post108099



Best regards,

Stanislav




On Mon, Apr 13, 2015 at 7:02 PM, Cesar Peschiera br...@click.com.py wrote:

Hi to all

I use directsync in my VMs with DRBD 8.4.5 in four nodes (LVM on top of 
DRBD), since some months ago, never did have problems (all sunday days, a 
automated system verify all storages DRBD),


These are the version of packages of my PVE nodes:

In a pair of nodes:
Shell# pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 3.10.0-5-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-3.10.0-5-pve: 3.10.0-19
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-34-pve: 2.6.32-139
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

In other pair of nodes:
Shell# pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 3.10.0-5-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-3.10.0-5-pve: 3.10.0-19
pve-kernel-2.6.32-34-pve: 2.6.32-139
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-5 -particularly made by Alexandre
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-2 -particularly made by Alexandre
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Default cache mode for VM hard drives

2015-04-13 Thread Cesar Peschiera

Hi to all

I have been using directsync in my VMs with DRBD 8.4.5 on four nodes (LVM
on top of DRBD) for some months now, and never had problems (every Sunday
an automated system verifies all the DRBD storages).


These are the version of packages of my PVE nodes:

In a pair of nodes:
Shell# pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 3.10.0-5-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-3.10.0-5-pve: 3.10.0-19
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-34-pve: 2.6.32-139
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

In other pair of nodes:
Shell# pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 3.10.0-5-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-3.10.0-5-pve: 3.10.0-19
pve-kernel-2.6.32-34-pve: 2.6.32-139
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-5 -particularly made by Alexandre
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-2 -particularly made by Alexandre
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1




- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Stanislav German-Evtushenko ginerm...@gmail.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Monday, April 13, 2015 5:16 AM
Subject: Re: [pve-devel] Default cache mode for VM hard drives



Hi,

Another difference is that cache=none|directsync is that vm use aio=native 
instead aio=threads.



(you can try cache=none,aio=threads in you disk config to change the 
behaviour).


Maybe it doesn't work well with drbd.


- Mail original -
De: Stanislav German-Evtushenko ginerm...@gmail.com
À: dietmar diet...@proxmox.com
Cc: pve-devel pve-devel@pve.proxmox.com, aderumier 
aderum...@odiso.com

Envoyé: Lundi 13 Avril 2015 10:23:19
Objet: Re: [pve-devel] Default cache mode for VM hard drives

Hello,

I have an update on this issue.

I have found circumstances when we get out of sync with directsync mode 
and made more test. What I found out so far was that out of sync only 
appear for cache modes with O_DIRECT, i.e. for those cache modes when host 
cache is bypassed. Write-back and write-through modes do not produce out 
of sync blocks with DRBD while none and directsync do.


Best regards,
Stanislav

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] got stuck while setup new dev custer

2015-03-23 Thread Cesar Peschiera

Hi Stefan

I have tested on two brands of switches that if I set the jumbo frame
configuration to the maximum, the switches also accept smaller MTUs coming
from the servers, so my habit is to always configure the switches at the
maximum value (one less worry for me).


Also, with that configuration on the switch, even if I have a mixed
combination of MTUs on the servers, all my servers keep operating perfectly.
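
On the server side the MTU is simply set per interface, and it can be
checked with a ping that forbids fragmentation; a quick sketch (the
interface name and the peer address are only examples):

# set the MTU on the interface used for the storage/backup network
Shell# ip link set dev bond1 mtu 9000
# verify that a full jumbo frame really passes (8972 = 9000 - 28 bytes of headers)
Shell# ping -M do -s 8972 10.1.1.2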


- Original Message - 
From: Stefan Priebe s.pri...@profihost.ag

To: pve-devel@pve.proxmox.com
Sent: Monday, March 23, 2015 6:05 PM
Subject: Re: [pve-devel] got stuck while setup new dev custer



solved. Ugly switch had a special parameter for jumbo frames *gr*

Stefan

Am 23.03.2015 um 22:25 schrieb Stefan Priebe:

Also tried:
transport=udpu

But it doesn't change anything ;-( same problem. 2nd node does not join
first node already running vms.

Stefan

Am 23.03.2015 um 20:01 schrieb Stefan Priebe:

Hi,

i wanted to setup a new proxmox dev cluster of 3 nodes. I already had a
single pve machine i want to extend.

So i used that one as a base.

# pvecm create pve-dev

Restarting pve cluster filesystem: pve-cluster[dcdb] notice: wrote new
cluster config '/etc/cluster/cluster.conf'
.
Starting cluster:
Checking if cluster has been disabled at boot... [  OK  ]
Checking Network Manager... [  OK  ]
Global setup... [  OK  ]
Loading kernel modules... [  OK  ]
Mounting configfs... [  OK  ]
Starting cman... [  OK  ]
Waiting for quorum... [  OK  ]
Starting fenced... [  OK  ]
Starting dlm_controld... [  OK  ]
Tuning DLM kernel config... [  OK  ]
Unfencing self... [  OK  ]

# pvecm status; pvecm nodes
Version: 6.2.0
Config Version: 1
Cluster Name:  pve-dev
Cluster Id: 51583
Cluster Member: Yes
Cluster Generation: 236
Membership state: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Node votes: 1
Quorum: 1
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: node1
Node ID: 1
Multicast addresses: 239.192.201.73
Node addresses: 10.255.0.10
Node  Sts   Inc   Joined   Name
1   M236   2015-03-23 19:48:20  node1

I then tried to add the 2nd node which just hangs:

# pvecm add 10.255.0.10
copy corosync auth key
stopping pve-cluster service
Stopping pve cluster filesystem: pve-cluster.
backup old database
Starting pve cluster filesystem : pve-cluster.
Starting cluster:
Checking if cluster has been disabled at boot... [  OK  ]
Checking Network Manager... [  OK  ]
Global setup... [  OK  ]
Loading kernel modules... [  OK  ]
Mounting configfs... [  OK  ]
Starting cman... [  OK  ]
Waiting for quorum... [  OK  ]
Starting fenced... [  OK  ]
Starting dlm_controld... [  OK  ]
Tuning DLM kernel config... [  OK  ]
Unfencing self... [  OK  ]
waiting for quorum...

That one hangs at quorum.

And the first one shows in log:
Mar 23 19:56:41 node1 pmxcfs[7740]: [status] notice: cpg_send_message
retried 100 times
Mar 23 19:56:41 node1 pmxcfs[7740]: [status] crit: cpg_send_message
failed: 6
Mar 23 19:56:42 node1 pmxcfs[7740]: [status] notice: cpg_send_message
retry 10
Mar 23 19:56:43 node1 pmxcfs[7740]: [status] notice: cpg_send_message
retry 20
...

I already checked omping which is fine.

Whats wrong ;-(

Greets,
Stefan

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] DRBD9 test packages for jessie

2015-03-20 Thread Cesar Peschiera
I agree with Daniel, his idea is better!!! (Maybe in an initial phase this
feature can be left as something pending.)

And about the verification of the DRBD storages (if the plugin will have
it), do it with the best hash algorithm, which I believe is sha1 (the one
that always comes supported in the Linux kernels).
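
Only to illustrate, today I set the verification algorithm in the net
section of the DRBD resource and run the check by hand; roughly (the
resource name is only an example):

# in the resource file, e.g. /etc/drbd.d/r0.res
net {
        verify-alg sha1;
}

Shell# drbdadm verify r0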
  - Original Message - 
  From: Daniel Hunsaker 
  To: Cesar Peschiera ; Dietmar Maurer ; PVE Development List 
  Sent: Friday, March 20, 2015 4:34 PM
  Subject: Re: [pve-devel] DRBD9 test packages for jessie


  I'd rather it run the resync before continuing the migration, than simply 
abort.  It'll take a bit longer to migrate, but I can't think of any reason it 
would make sense *not* to run the resync at that point and simply keep going 
once that's done.



  On Fri, Mar 20, 2015, 13:30 Cesar Peschiera br...@click.com.py wrote:

It is fantastic !!!

Talking about of DRBD, for now, and if is possibe, i would like to order a
features:

1- While a VM is running with LVM on top of DRBD, keep the replicated
storage in secondary mode, i.e. have only one primary storage for each VM
that is running (that was always the recommendation of LINBIT, for security
reasons).

2- When I apply a live migration of a VM, the DRBD plugin should first check
whether the replicated storage is perfectly synchronized (in terms of DRBD
oos, which means: out of sync); if it really is perfectly synchronized, the
plugin promotes the secondary storage to primary and then applies the live
migration, and after a successful live migration it finally demotes the old
primary storage to secondary. This way we always have one primary storage
in use for each VM.

3- But if we have an oos case, the plugin should not accept the live
migration, and an error message should appear on the screen.

About the verification of replicated storages in DRBD:
4- As DRBD has a command that verifies the data and metadata of the
replicated storages, it would be fantastic to have it in the schedule of
the PVE GUI.

About doing a resynchronization if a storage is in oos:
5- As DRBD has a set of commands to do a resynchronization (applied only to
the disk blocks that are different and not to the complete storage), it
would also be good to have it in the PVE GUI.


- Original Message -
From: Dietmar Maurer diet...@proxmox.com
To: Cesar Peschiera br...@click.com.py; PVE Development List
pve-devel@pve.proxmox.com
Sent: Friday, March 20, 2015 2:00 PM
Subject: Re: [pve-devel] DRBD9 test packages for jessie


I just want to note that the new DRBD9 has some cool feature

 - support more that 2 nodes
 - support snapshots
 - as fast as DRBD 8.X

 I am now writing a storage plugin, so that we can do further tests...

 Oh, OK, sorry

  DRBD9 isn't ready for his use in a production environment, now is in
  pre-release phase...
  http://www.linbit.com/en/component/phocadownload/category/5-drbd
 
  Respectfully I mean that I believe that will be good wait to that
  DRBD9
  be
  in a stable version before of release it in the pve repositories.
 
  That is why I compiled it for jessie, and uploaded to the jessie test
  repository.
 




___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] DRBD9 test packages for jessie

2015-03-20 Thread Cesar Peschiera

I'd rather it run the resync before continuing the migration, than simply
abort.  It'll take a bit longer to migrate, but I can't think of any reason
it would make sense *not* to run the resync at that point and simply keep
going once that's done.


Thinking again, maybe it would be good for the user to know, through a
message, that DRBD has storages with oos; then we can analyze the origin of
the problem and correct it before applying the live migration.

Maybe it would be better that live migration does not do a resynchronization
beforehand, and instead shows the error message on the screen.
To be honest, I don't know which would be better; to be safe, I always
prefer to do everything manually (some check-ups, etc.).


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] vhost-user snabbswitch

2015-03-12 Thread Cesar Peschiera

200 Gbps with only a single core... Wow!
Alexandre, you're the best! This will be wonderful for my virtualized MS-SQL
Server (and obviously for other VMs also).

It will also be good for VMs with VoIP (Asterisk, etc.).

If it is stable with PVE, has no conflicts with DRBD in round-robin bonding
mode, and supports live migration and HA, I want it!!!

Note: In the past I had problems with OVS and DRBD (OVS with 2 NICs, and
DRBD in round-robin bonding on another 2 NICs using the Linux stack), so I
had to remove OVS and use the Linux stack only; moreover, I did not run
exhaustive tests to find the source of the problem, but I think the problem
was in the pve-kernel.

+1 vote if it works well with DRBD in round-robin bonding mode with the
Linux stack, always using DRBD on its own exclusive NICs, and supports live
migration and HA.


Best regards
Cesar


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, March 12, 2015 9:57 AM
Subject: [pve-devel] vhost-user  snabbswitch



Hi,

anybody interested by vhost-user  snabbswitch integration for proxmox
4.0 ?

http://snabb.co/


It's an userland switch (no tap devices, no linux bridges/ovs), which
announce
same performance than srv-io passthrough. ( 200 Gbps on a single core!)

I'm looking for this, because next year I'll have need for big videos
bandwidth virtual machines.

snabbswitch will be integrated in openstack, so it'll be supported on the
long time.

All features like like vlans,firewall,vpn are implement in snabbswitch
with lua language.

(firewall seem interesting (but currentle pretty simple), because we can
have 1conntrack by vm.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Optimizing CPU flags

2015-02-17 Thread Cesar Peschiera

Just an idea, talking about the processor type and live migration, to get
the best performance:

Maybe it would be better if PVE analyzed the CPU flags of each node in the
cluster and then created a personalized, optimal CPU type with the CPU
flags that the nodes have in common; it would be very useful when you want
to apply live migration.

The advantages:
1) We get the best performance at the processor-flag level.
2) It doesn't matter which kind of CPU you have, nor how many new flags the
newly purchased processor has; PVE will always do the comparison and enable
the common flags for the VMs.
3) Live migration will work.

But the disadvantage of this strategy appears when you add PVE nodes while
you already have VMs running (if the VMs are not running, it would also be
very useful).

Maybe this strategy can still be applied correctly in that case, but the
new CPU definition would stay as a pending change to apply later.

I think that would be the better strategy, because then it doesn't matter
which kind of CPU you have. Moreover, I have been waiting for a feature of
this kind for a long time.
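
Only as a rough sketch of the comparison I have in mind, done by hand with
the shell (the file names are only examples):

# on each node, collect its CPU flag set
Shell# grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2 | tr ' ' '\n' | sort -u | grep -v '^$' > /tmp/flags.$(hostname)
# copy the files to one node, then keep only the flags common to both lists
Shell# comm -12 /tmp/flags.node1 /tmp/flags.node2 > /tmp/flags.common

The resulting common list is what such a personalized CPU type could expose
to the VMs.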

Best regards
Cesar

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: dietmar diet...@proxmox.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Tuesday, February 17, 2015 9:31 AM
Subject: Re: [pve-devel] Optimizing CPU flags



A fast workaround/hack would be to add such setting to datacenter.cfg


I think also it could be great to be able to define default create values
in datacenter.cfg

(disk (virtio,scsi,...),nic (virtio,e1000,...), scsi controller
(lsi,virtio-scsi), ...)


- Mail original -
De: dietmar diet...@proxmox.com
À: Stefan Priebe s.pri...@profihost.ag, pve-devel
pve-devel@pve.proxmox.com
Envoyé: Mercredi 11 Février 2015 19:42:11
Objet: Re: [pve-devel] Optimizing CPU flags


 Sure but you cannot select a cpu type as a global default in a cluster.
 So you have to remember that one each time.

The suggestion was to implement that:
https://git.proxmox.com/?p=qemu-defaults.git;a=summary


A fast workaround/hack would be to add such setting to datacenter.cfg

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server : cpu hotplug rework

2015-01-22 Thread Cesar Peschiera

Hi

I think that if such a configuration does not slow down performance, it can
be enabled by default.


- Original Message - 
From: Dietmar Maurer diet...@proxmox.com

To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, January 22, 2015 3:31 PM
Subject: Re: [pve-devel] qemu-server : cpu hotplug rework



  maybe you don't have vcpus defined?

 vcpus is not set when you create a VM using the GUI.

 This leads to the situation that CPU hotplug is disabled by default.

 Any ideas how to improve that situation?

Can we simply use topology from host on create?

# qm create 100 --vcpus 3

Above command use socket/cores from host, so the resulting config is

vcpus: 3
sockets: 2
cores: 4

What do you think?


Or should we simply assume 'vcpus' is the switch to enable hotplug?
We could do the same for memory hotplug, and disable it if 'dimm_memory' 
is

not defined inside config?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 




[pve-devel] WARNING: command 'df -P -B 1 /mnt/pve/NFS-Disco2' failed: got timeout

2015-01-08 Thread Cesar Peschiera

Hi PVE developers team.

I always get this message, on all nodes, when a vzdump backup is in
progress:
WARNING: command 'df -P -B 1 /mnt/pve/NFS-Disco2' failed: got timeout
(the disk number varies on each PVE server).

The message appears in PVE versions 2.3 and 3.3 (the versions I have
installed).

In software, the NFS server has:
- PVE 3.3 installed (from its ISO file), with kernel 3.10.
- This NFS server configuration (/etc/exports format):
/mnt/disco1 10.100.100.0/255.255.255.0(rw,sync,no_subtree_check,no_root_squash)
/mnt/disco2 10.100.100.0/255.255.255.0(rw,sync,no_subtree_check,no_root_squash)
/mnt/disco3 10.100.100.0/255.255.255.0(rw,sync,no_subtree_check,no_root_squash)
/mnt/disco4 10.100.100.0/255.255.255.0(rw,sync,no_subtree_check,no_root_squash)
(each PVE server uses a different disk number).

In hardware, the NFS server has:
- An Adaptec 6805E RAID card without write-cache enabled (and without the
AFM-600 flash module)
- Several RAID 1 arrays of SATA disks; each RAID 1 exclusively stores the
backups of one PVE node.
- The NIC for backups is an Intel 10 Gb/s, 2 ports bonded with LACP.
- The backup LAN is an independent network.

On the PVE nodes (where the VMs are running):
- I have configured access to the NFS server through the PVE GUI.

My questions:
Is it possible to correct the code to fix this, or
am I doing something wrong?
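
For what it's worth, a quick way to check from a node whether the NFS mount
itself is slow to answer while a backup is running (plain shell; the mount
point is the one from the warning, and nfsstat comes with the standard
nfs-common package):

# Time the same query that produced the warning, while vzdump is active:
time df -P -B 1 /mnt/pve/NFS-Disco2

# Look at NFS client RPC retransmissions, which would point to network or
# server-side latency:
nfsstat -rc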

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] add numa options

2015-01-07 Thread Cesar Peschiera

I have changed the VM config file according to your suggestions after
applying your patches:
numa0: memory=124416,policy=bind
numa1: memory=124416,policy=bind

And always with:
...
sockets: 2
cores: 20
cpu: host
hotplug: 1
memory: 248832
tablet: 0
...

Anyway, the change was much for the better... :-)
...Many thanks for the patches!!!

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: dietmar diet...@proxmox.com; pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, January 07, 2015 4:59 AM
Subject: Re: [pve-devel] [PATCH] add numa options



Many thanks for your reply, and please, let me to do a question:
Then, why I have a great difference of performance between the before and
the after of install your patches if i never touched the kernel nor his
configuration? (the patches installed are: pve-qemu-kvm_2.2-2_amd64.deb
and
qemu-server_3.3-5_amd64.deb)


Maybe the qemu 2.2 optimisations? If you haven't changed the VM config file,
that can be the only explanation.


- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com
Cc: dietmar diet...@proxmox.com, pve-devel pve-devel@pve.proxmox.com
Envoyé: Mardi 6 Janvier 2015 23:53:28
Objet: Re: [pve-devel] [PATCH] add numa options


But that don't mean that physically, the virtual cpus|numa nodes will be
mapped to correct physical cpus.


Many thanks for your reply, and please, let me to do a question:
Then, why I have a great difference of performance between the before and
the after of install your patches if i never touched the kernel nor his
configuration? (the patches installed are: pve-qemu-kvm_2.2-2_amd64.deb and
qemu-server_3.3-5_amd64.deb)

Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] add numa options

2015-01-06 Thread Cesar Peschiera

(Note that I don't see how mssql can pin vcpus on real host cpu).

With mssql running inside the VM, the DBA showed me how mssql can see the
NUMA nodes, and mssql has its own way of managing its processes across the
NUMA nodes to get better performance. That is why I think it would be better
if the PVE GUI had an option to enable or disable CPU pinning for each VM,
and obviously I would like to run some tests to compare which of the two
options is better.


host kernel 3.10 autonuma is doing autopinning, so you can try to disable
it.

If autonuma isn't customizable per VM, I guess it would be better to leave
it as is, but I am not sure, because we would then have two systems doing
the auto-balancing: the 3.10 kernel and the mssql inside the VM???

Maybe it would be better to do one more test, disabling autonuma in the 3.10
kernel. Question:
How can I disable autonuma via the /etc/default/grub file?
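
Not an authoritative answer, but a sketch of the two usual knobs, assuming
the kernel was built with automatic NUMA balancing and honors the
numa_balancing= boot parameter (both are assumptions to verify on the 3.10
kernel in use):

# Runtime, lost on reboot (the same sysctl Alexandre mentions below):
echo 0 > /proc/sys/kernel/numa_balancing

# Persistent, via /etc/default/grub: append the parameter to the kernel
# command line, then regenerate the grub config and reboot:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet numa_balancing=disable"
update-grub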

Note:
The tests I did in the past were with and without your patches, always with
the 3.10 kernel (without changing its configuration), and with your patches
the performance was far better (two to three times faster in several tests,
never less than two, in terms of the database).


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: dietmar diet...@proxmox.com; pve-devel pve-devel@pve.proxmox.com
Sent: Tuesday, January 06, 2015 2:31 PM
Subject: Re: [pve-devel] [PATCH] add numa options



ase excuse me if i don't talk with property, i meant the cpu pinning that
will have pve-manager and QEMU in the next release. Ie, that i would like 
to

have the option of enable or disable in PVE GUI the cpu pinning that QEMU
can apply for each VM, if so, i will can to choose if i want that QEMU or
the application inside of the VM managed the cpu pinning with the numa
nodes. And the DBA says that the MS-SQL Server will manage better the cpu
pinning that QEMU, and i would like to do some tests for confirm it.


Oh, ok.

So numa:1 should do the trick; it creates NUMA nodes but doesn't pin CPUs.

(Note that I don't see how mssql can pin vCPUs to real host CPUs.)

The host kernel 3.10 autonuma is doing auto-pinning, so you can try to
disable it.



About qemu-server in pve-no-subscription, I don't know if Dietmar plans to
release it before the next Proxmox release,

because big changes are coming to this package this week.



- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com
Cc: dietmar diet...@proxmox.com, pve-devel pve-devel@pve.proxmox.com
Envoyé: Mardi 6 Janvier 2015 17:33:42
Objet: Re: [pve-devel] [PATCH] add numa options

Hi Alexandre

Please excuse me if i don't talk with property, i meant the cpu pinning that
will have pve-manager and QEMU in the next release. Ie, that i would like to
have the option of enable or disable in PVE GUI the cpu pinning that QEMU
can apply for each VM, if so, i will can to choose if i want that QEMU or
the application inside of the VM managed the cpu pinning with the numa
nodes. And the DBA says that the MS-SQL Server will manage better the cpu
pinning that QEMU, and i would like to do some tests for confirm it.

Moreover, as i have 2 servers identical in Hardware, where is running this
unique VM, i would like also to have the option of live migration enabled.


I'm interested to see results between both method

With pleasure i will report the results

Moreover, talking about of the download of qemu-server deb from git, as very
soon this server will be in production, i would like to wait that this
package is in the pve-no-subscription repository for apply a upgrade, that
being well, I will run less risks of down times, unless you tell me you have
already tested and is very stable.


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: dietmar diet...@proxmox.com; pve-devel pve-devel@pve.proxmox.com
Sent: Tuesday, January 06, 2015 5:02 AM
Subject: Re: [pve-devel] [PATCH] add numa options


Hi,


As i have running a VM with MS-SQL Server (and with 246 GB RAM exclusive
for
MS-SQL Server), the DBA of MS-SQL Server says that MS-SQL Server can
manage
his own numa-processes better than QEMU, and as i guess that also will
exist
many applications that will manage his own numa-processes better than
QEMU,
is that i would like to order that PVE GUI has a option of enable or
disable
the automatic administration of the numa-processes, also with the
possibility of do live migration.


I'm not sure to understand what do you mean by
says that MS-SQL Server can manage his own numa-processes better than
QEMU,


Numa are not process, it's an architecture to regroup cpus with memory
bank,for fast memory access.


They are 2 parts:

1)currently, qemu expose the virtual numa nodes to the guest.
(each numa node = X cores with X memory)

This can be simply enabled with numa:1 with last patches,
(I'll create 1 numa node by virtual

Re: [pve-devel] [PATCH] add numa options

2015-01-06 Thread Cesar Peschiera

But that don't mean that physically, the virtual cpus|numa nodes will be
mapped to correct physical cpus.


Many thanks for your reply, and please let me ask a question:
Then why do I see such a great performance difference between before and
after installing your patches, if I never touched the kernel or its
configuration? (The patches installed are pve-qemu-kvm_2.2-2_amd64.deb and
qemu-server_3.3-5_amd64.deb.)


Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] add numa options

2015-01-06 Thread Cesar Peschiera

Hi Alexandre

Please excuse me if I don't express myself properly; I meant the CPU pinning
that pve-manager and QEMU will have in the next release. That is, I would
like the PVE GUI to have an option to enable or disable the CPU pinning that
QEMU can apply for each VM; that way I could choose whether QEMU or the
application inside the VM manages the CPU pinning across the NUMA nodes. The
DBA says that MS-SQL Server will manage the CPU pinning better than QEMU,
and I would like to do some tests to confirm it.

Moreover, as I have 2 servers with identical hardware, on which this single
VM runs, I would also like to keep the option of live migration.



I'm interested to see results between both method

With pleasure I will report the results.

Moreover, about downloading the qemu-server deb from git: as this server
will be in production very soon, I would prefer to wait until the package is
in the pve-no-subscription repository before applying an upgrade; that way I
run less risk of downtime, unless you tell me you have already tested it and
it is very stable.


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: dietmar diet...@proxmox.com; pve-devel pve-devel@pve.proxmox.com
Sent: Tuesday, January 06, 2015 5:02 AM
Subject: Re: [pve-devel] [PATCH] add numa options


Hi,


As i have running a VM with MS-SQL Server (and with 246 GB RAM exclusive
for
MS-SQL Server), the DBA of MS-SQL Server says that MS-SQL Server can
manage
his own numa-processes better than QEMU, and as i guess that also will
exist
many applications that will manage his own numa-processes better than
QEMU,
is that i would like to order that PVE GUI has a option of enable or
disable
the automatic administration of the numa-processes, also with the
possibility of do live migration.


I'm not sure to understand what do you mean by
says that MS-SQL Server can manage his own numa-processes better than
QEMU,


NUMA is not a process; it's an architecture that groups CPUs with memory
banks for fast memory access.


There are 2 parts:

1) Currently, qemu exposes the virtual NUMA nodes to the guest
(each NUMA node = X cores with X memory).

This can simply be enabled with numa:1 with the latest patches
(it'll create 1 NUMA node per virtual socket and split the RAM amount between
the nodes).


or, if you want custom memory access, cores per node, or to map specific
virtual NUMA nodes to specific host NUMA nodes,
you can do it with
numa0: ,
numa1:
cpus=id[-id],memory=mb[[,hostnodes=id[-id]][,policy=preferred|bind|interleave]]


But it is always the application inside the guest which manages the memory
access.


2) Now, with kernel 3.10, we also have auto NUMA balancing on the host side.
It'll try to map, if possible, the virtual NUMA nodes to host NUMA nodes.

You can disable this feature with: echo 0 > /proc/sys/kernel/numa_balancing


So from my point of view, numa:1 + auto NUMA balancing should already give
you good results, and it allows live migration between hosts with different
NUMA architectures.


Maybe, with only 1 VM, you can try to manually map virtual nodes to specific
host nodes.

I'm interested in seeing the results of both methods. (Maybe you want the
latest qemu-server deb from git?)



I plan to add a GUI for part 1.




- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com, dietmar diet...@proxmox.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Mardi 6 Janvier 2015 06:35:15
Objet: Re: [pve-devel] [PATCH] add numa options

Hi Alexandre and developers team.

I would like to order a feature for the next release of pve-manager:

As i have running a VM with MS-SQL Server (and with 246 GB RAM exclusive for
MS-SQL Server), the DBA of MS-SQL Server says that MS-SQL Server can manage
his own numa-processes better than QEMU, and as i guess that also will exist
many applications that will manage his own numa-processes better than QEMU,
is that i would like to order that PVE GUI has a option of enable or disable
the automatic administration of the numa-processes, also with the
possibility of do live migration.

Moreover, if you can to add such feature, i will can to run a test with
MS-SQL Server for know which of the two options give me better results and
publish it (with the times of wait for each case)

@Alexandre:
Moreover, with your temporal patches for manage the numa-processes, in
MS-SQL Server i saw a difference of time between two to three times more
quick for get the results (that it is fantastic, a great difference), but as
i yet don't finish of do the tests (talking about of do some changes in the
Bios Hardware, HugePages managed for the Windows Server, etc), is that yet i
don't publish a resume very detailed of the tests. I guess that soon i will
do it (I depend on third parties, and the PVE host not must lose the cluster
communication).

And talking about of lose the cluster communication, from that i have I/OAT
DMA

Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMsturns off

2015-01-05 Thread Cesar Peschiera

Hi to all

Recently I verified at the company that IGMP is necessary for the VMs
(tested with tcpdump). The company runs Windows Servers as VMs and several
Windows workstations on the local network, so I can tell you that I need the
IGMP protocol enabled on some VMs so that the company's Windows systems work
perfectly.


And about your suggestion:
-A PVEFW-HOST-IN -s yournetwork/24 -p udp -m addrtype --dst-type 
MULTICAST -m udp --dport 5404:5405 -j RETURN


I would like to ask some questions:
1) Will such a rule keep the cluster communication away from the VMs?
2) Will such a rule not harm the normal use of the IGMP protocol by the
Windows systems in the VMs?
3) If both answers are yes, where should I put the rule that you suggest?



- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: dietmar diet...@proxmox.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Monday, January 05, 2015 6:18 AM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and 
VMsturns off




Following rule on your pve nodes should prevent igmp packages flooding
your bridge:
iptables -t filter -A FORWARD -i vmbr0 -p igmp -j DROP

If something happens you can remove the rule this way:
iptables -t filter -D FORWARD -i vmbr0 -p igmp -j DROP


Just be careful that it'll block all IGMP, so if you need multicast
inside your VMs,

it'll block that too.

Currently, we have a default rule for IN|OUT for host communication

-A PVEFW-HOST-IN -s yournetwork/24 -p udp -m addrtype --dst-type 
MULTICAST -m udp --dport 5404:5405 -j RETURN

to open multicast between nodes.

But indeed, currently, in the Proxmox firewall, we can't define a global
rule in FORWARD.





@Dietmar: maybe can we add a default drop rule in -A PVEFW-FORWARD, to 
drop multicast traffic from host ?


Or maybe better, allow to create rules at datacenter level, and put them 
in -A PVEFW-FORWARD  ?




- Mail original -
De: datanom.net m...@datanom.net
À: pve-devel pve-devel@pve.proxmox.com
Envoyé: Dimanche 4 Janvier 2015 03:34:57
Objet: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and 
VMsturns off


On Sat, 3 Jan 2015 21:32:54 -0300
Cesar Peschiera br...@click.com.py wrote:



Now in the switch i have igmp snooping disabled, but i want to avoid
flooding the entire VLAN and the VMs


Following rule on your pve nodes should prevent igmp packages flooding
your bridge:
iptables -t filter -A FORWARD -i vmbr0 -p igmp -j DROP

If something happens you can remove the rule this way:
iptables -t filter -D FORWARD -i vmbr0 -p igmp -j DROP

PS. Your SPF for click.com.py is configured wrong:
Received-SPF: softfail (click.com.py ... _spf.copaco.com.py: Sender is
not authorized by default to use 'br...@click.com.py' in 'mfrom'
identity, however domain is not currently prepared for false failures
(mechanism '~all' matched)) receiver=mail1.copaco.com.py;
identity=mailfrom; envelope-from=br...@click.com.py; helo=gerencia;
client-ip=190.23.61.163
Received-SPF: softfail (click.com.py ... _spf.copaco.com.py: Sender is
not authorized by default to use 'br...@click.com.py' in 'mfrom'
identity, however domain is not currently prepared for false failures
(mechanism '~all' matched)) receiver=mail1.copaco.com.py;
identity=mailfrom; envelope-from=br...@click.com.py; helo=gerencia;
client-ip=190.23.61.163
Received-SPF: softfail (click.com.py ... _spf.copaco.com.py: Sender is
not authorized by default to use 'br...@click.com.py' in 'mfrom'
identity, however domain is not currently prepared for false failures
(mechanism '~all' matched)) receiver=mail1.copaco.com.py;
identity=mailfrom; envelope-from=br...@click.com.py; helo=gerencia;
client-ip=190.23.61.163
--
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE3E80917
-- 
/usr/games/fortune -es says:

Why does a hearse horse snicker, hauling a lawyer away?
-- Carl Sandburg

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMsturns off

2015-01-05 Thread Cesar Peschiera

Many thanks, Alexandre!!! It is the rule I was searching for a long time
ago; I will add it to the rc.local file.

Moreover, if you can: as I need to permit multicast for some Windows server
VMs, for workstations on the local network, and for the PVE nodes, can you
show me the configuration of your managed switch in terms of IGMP snooping
and querier? (Dell managed switches have configurations very similar to
Cisco.) I ask because I have no practice with this exercise and I need a
model as a starting point.

I guess that with my Dell manuals and your configuration as a reference, I
will be able to do it well.

Best regards
Cesar

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: dietmar diet...@proxmox.com; pve-devel pve-devel@pve.proxmox.com
Sent: Tuesday, January 06, 2015 12:37 AM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off



And about of your suggestion:
-A PVEFW-HOST-IN -s yournetwork/24 -p udp -m addrtype --dst-type
MULTICAST -m udp --dport 5404:5405 -j RETURN


Note that this is the rule for HOST-IN;
if you want to adapt it, you can do:

-A FORWARD -s yournetwork/24 -p udp -m addrtype --dst-type MULTICAST -m
udp --dport 5404:5405 -j DROP


1) ¿Such rule will avoid the cluster communication to the VMs?
2) ¿Such rule will not prejudice the normal use of the igmp protocol own
of

the Windows systems in the VMs?

this will block multicast traffic, on udp port 5404:5405 (corosync default
port), from your source network.


3) If both answers are correct, where i should put the rule that you

suggest?

Currently it's not possible to do it with proxmox firewall,
but you can add it in rc.local for example.

iptables -A FORWARD -s yournetwork/24 -p udp -m addrtype --dst-type
MULTICAST -m udp --dport 5404:5405 -j DROP

The Proxmox firewall doesn't override custom rules.
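
As a small illustration of the rc.local approach (a sketch only; the
192.100.100.0/24 prefix is taken from the cluster network mentioned
elsewhere in this thread, so substitute your own, and iptables -C needs a
reasonably recent iptables):

# /etc/rc.local, before the final "exit 0": add the rule only if it is not
# already present, so repeated runs don't stack duplicate rules.
iptables -C FORWARD -s 192.100.100.0/24 -p udp -m addrtype --dst-type MULTICAST \
  -m udp --dport 5404:5405 -j DROP 2>/dev/null || \
iptables -A FORWARD -s 192.100.100.0/24 -p udp -m addrtype --dst-type MULTICAST \
  -m udp --dport 5404:5405 -j DROP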


- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com, dietmar diet...@proxmox.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Mardi 6 Janvier 2015 00:09:17
Objet: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off

Hi to all

Recently i have tested in the company that igmp is necessary for the VMs
(tested with tcpdump), the company has Windows Servers as VMs and several
Windows systems as workstations in the local network, so i can tell you that
i need to have the protocol igmp enabled in some VMs for that the Windows
systems in the company work perfectly.

And about of your suggestion:
-A PVEFW-HOST-IN -s yournetwork/24 -p udp -m addrtype --dst-type
MULTICAST -m udp --dport 5404:5405 -j RETURN

I would like to do some questions:
1) ¿Such rule will avoid the cluster communication to the VMs?
2) ¿Such rule will not prejudice the normal use of the igmp protocol own of
the Windows systems in the VMs?
3) If both answers are correct, where i should put the rule that you
suggest?


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: dietmar diet...@proxmox.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Monday, January 05, 2015 6:18 AM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off



Following rule on your pve nodes should prevent igmp packages flooding
your bridge:
iptables -t filter -A FORWARD -i vmbr0 -p igmp -j DROP

If something happens you can remove the rule this way:
iptables -t filter -D FORWARD -i vmbr0 -p igmp -j DROP


Just be careful that it'll block all IGMP, so if you need multicast
inside your VMs,
it'll block that too.

Currently, we have a default rule for IN|OUT for host communication

-A PVEFW-HOST-IN -s yournetwork/24 -p udp -m addrtype --dst-type
MULTICAST -m udp --dport 5404:5405 -j RETURN
to open multicast between nodes.

But indeed, currently, in the Proxmox firewall, we can't define a global
rule in FORWARD.




@Dietmar: maybe can we add a default drop rule in -A PVEFW-FORWARD, to
drop multicast traffic from host ?

Or maybe better, allow to create rules at datacenter level, and put them
in -A PVEFW-FORWARD ?



- Mail original - 
De: datanom.net m...@datanom.net

À: pve-devel pve-devel@pve.proxmox.com
Envoyé: Dimanche 4 Janvier 2015 03:34:57
Objet: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off

On Sat, 3 Jan 2015 21:32:54 -0300
Cesar Peschiera br...@click.com.py wrote:



Now in the switch i have igmp snooping disabled, but i want to avoid
flooding the entire VLAN and the VMs


Following rule on your pve nodes should prevent igmp packages flooding
your bridge:
iptables -t filter -A FORWARD -i vmbr0 -p igmp -j DROP

If something happens you can remove the rule this way:
iptables -t filter -D FORWARD -i vmbr0 -p igmp -j DROP

PS. Your SPF for click.com.py is configured wrong:
Received-SPF: softfail (click.com.py ... _spf.copaco.com.py: Sender is
not authorized by default to use 'br...@click.com.py' in 'mfrom'
identity, however domain is not currently prepared for false

Re: [pve-devel] [PATCH] add numa options

2015-01-05 Thread Cesar Peschiera

Hi Alexandre and developers team.

I would like to request a feature for the next release of pve-manager:

I have a VM running MS-SQL Server (with 246 GB of RAM reserved exclusively
for MS-SQL Server), and its DBA says that MS-SQL Server can manage its own
NUMA placement better than QEMU. As I guess there will also be many other
applications that manage their own NUMA placement better than QEMU, I would
like to request that the PVE GUI offer an option to enable or disable the
automatic management of NUMA placement, while keeping the possibility of
live migration.

Moreover, if you can add such a feature, I will be able to run a test with
MS-SQL Server to find out which of the two options gives me better results,
and publish it (with the waiting times for each case).

@Alexandre:
Moreover, with your temporary patches for managing NUMA, in MS-SQL Server I
saw queries return two to three times faster (which is fantastic, a great
difference), but as I have not yet finished the tests (regarding some
changes in the hardware BIOS, HugePages managed by the Windows Server,
etc.), I have not yet published a detailed summary of the tests. I guess I
will do it soon (I depend on third parties, and the PVE host must not lose
cluster communication).

And talking about losing cluster communication: since I enabled the I/OAT
DMA engine in the hardware BIOS, the node has never lost cluster
communication again, but I must do some extensive testing to confirm it.


Best regards
Cesar

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Dietmar Maurer diet...@proxmox.com
Cc: pve-devel@pve.proxmox.com
Sent: Tuesday, December 02, 2014 8:17 PM
Subject: Re: [pve-devel] [PATCH] add numa options



Ok,

Finally I found the last pieces of the puzzle:

to have autonuma balancing, we just need:

2sockes-2cores-2gb ram

-object memory-backend-ram,size=1024M,id=ram-node0
-numa node,nodeid=0,cpus=0-1,memdev=ram-node0
-object memory-backend-ram,size=1024M,id=ram-node1
-numa node,nodeid=1,cpus=2-3,memdev=ram-node1

Like this, the host kernel will try to balance the NUMA nodes.
This command line works even if the host doesn't support NUMA.



now if we want to bind guest numa node to specific host numa node,

-object
memory-backend-ram,size=1024M,id=ram-node0,host-nodes=0,policy=preferred
-numa node,nodeid=0,cpus=0-1,memdev=ram-node0
-object
memory-backend-ram,size=1024M,id=ram-node1,host-nodes=1,policy=bind \
-numa node,nodeid=1,cpus=2-3,memdev=ram-node1

This requires that host-nodes=X exists on the physical host,
and it also needs the qemu-kvm --enable-numa flag.
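
(To see which host NUMA nodes actually exist on a given host, one option is
the numactl tool, assuming it is installed:)

apt-get install numactl   # if not already present
numactl --hardware        # lists the node ids, their CPUs and memory sizes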



So,
I think we could add:

numa:0|1.

which generates the first config, creates 1 NUMA node per socket, and shares
the RAM across the nodes,



and also, for advanced users who need manual pinning:


numa0:cpus=X-X,memory=mb,hostnode=X-X,policy=bind|preferred|)
numa1:...



what do you think about it ?




BTW, about pc-dimm hotplug, it's possible to add the numa nodeid in
device_add pc-dimm,node=X


- Mail original - 


De: Alexandre DERUMIER aderum...@odiso.com
À: Dietmar Maurer diet...@proxmox.com
Cc: pve-devel@pve.proxmox.com
Envoyé: Mardi 2 Décembre 2014 20:25:51
Objet: Re: [pve-devel] [PATCH] add numa options


shared? That looks strange to me.

I mean split across the both nodes.


I have check a little libvirt,
and I'm not sure, but I think that memory-backend-ram is optionnal, to
have autonuma.

It's more about cpu pinning/memory pinning on selected host node

Here an example for libvirt:
http://www.redhat.com/archives/libvir-list/2014-July/msg00715.html
qemu: pass numa node binding preferences to qemu

+-object
memory-backend-ram,size=20M,id=ram-node0,host-nodes=3,policy=preferred \
+-numa node,nodeid=0,cpus=0,memdev=ram-node0 \
+-object
memory-backend-ram,size=645M,id=ram-node1,host-nodes=0-7,policy=bind \
+-numa node,nodeid=1,cpus=1-27,cpus=29,memdev=ram-node1 \
+-object memory-backend-ram,size=23440M,id=ram-node2,\
+host-nodes=1-2,host-nodes=5,host-nodes=7,policy=bind \
+-numa node,nodeid=2,cpus=28,cpus=30-31,memdev=ram-node2 \

- Mail original - 


De: Dietmar Maurer diet...@proxmox.com
À: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Envoyé: Mardi 2 Décembre 2014 19:42:45
Objet: RE: [pve-devel] [PATCH] add numa options


When do memory hotplug, if there is numa node, we should add the memory
size to the corresponding node memory size.

For now, it mainly affects the result of hmp command info numa.


So, it's seem to be done automaticaly.
Not sure on which node is assigne the pc-dimm, but maybe the free slots
are
shared at start between the numa nodes.


shared? That looks strange to me.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
___
pve-devel mailing list

Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMsturns off

2015-01-03 Thread Cesar Peschiera

Thanks Michael for your reply

And what about the Firewall tab in the PVE GUI:
- For the Datacenter.
- For each PVE node.
- For the network device of the VM.

In general, I want to have all network traffic enabled (in/out) and only cut
the traffic that I want to cut, which in this case would be IGMP for the
VMs. So I guess I need to set up the PVE GUI like this:

- Firewall tag in Datacenter:
Enable Firewall: yes
Input policy: accept
Output policy: accept

- Firewall tag in PVE nodes:
Enable Firewall: yes

Or, regardless of how this is configured (both datacenter and PVE nodes),
will the rule you suggest work well anyway?

And the rule that you suggest, where would it be better to put it?
1) In the rc.local file (I don't like putting it there)
2) In the PVE GUI (I believe that would be the best place), but I don't know
how to add it there, and I guess that afterwards I will also have to enable
the firewall on the network device of the VM (also in the PVE GUI).


- Original Message - 
From: Michael Rasmussen m...@datanom.net

To: pve-devel pve-devel@pve.proxmox.com
Sent: Saturday, January 03, 2015 11:34 PM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off



Now in the switch i have igmp snooping disabled, but i want to avoid
flooding the entire VLAN and the VMs


Following rule on your pve nodes should prevent igmp packages flooding
your bridge:
iptables -t filter -A FORWARD -i vmbr0 -p igmp -j DROP

If something happens you can remove the rule this way:
iptables -t filter -D FORWARD -i vmbr0 -p igmp -j DROP


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel





Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMsturns off

2015-01-02 Thread Cesar Peschiera

Hi Alexandre

Many thanks for your reply, which is much appreciated.

Unfortunately, your suggestion does not work for me, so I will describe the
results.

Along with some comments, I also have 7 questions for you in this message,
and I'll be very grateful if you can answer them.

Just to be clear about the versions of the programs installed on the nodes
that show strange behaviour (2 of 6 PVE nodes):
shell pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 3.10.0-5-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-3.10.0-5-pve: 3.10.0-19
pve-kernel-2.6.32-34-pve: 2.6.32-139
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-5 --especial patch created by Alexandre for me
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-2 --especial patch created by Alexandre for me
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

A minute after applying these commands on only one node (pve6), I lost
quorum on two nodes (pve5 and pve6).
The commands executed on only one node (pve6):
echo 1 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
echo 0 > /sys/class/net/vmbr0/bridge/multicast_querier

The error message on the node where I applied the commands (pve6) is this:
Message from syslogd@pve6 at Jan  2 20:58:32 ...
rgmanager[4912]: #1: Quorum Dissolved

And as a side effect, since the pve5 node is configured with HA for a VM
with a failover domain between pve5 and pve6, pve5 also lost quorum and the
VM under HA turned off brutally.

These are the error messages on the screen of the pve5 node:
[61.246002] dlm: rgmanager: send_repeat_remove dir 6 rg=pvevm:112
[119373.380111] dlm: closing connection to node 1
[119373:300150] dlm: closing connection to node 2
[119373:380182] dlm: closing connection to node 3
[119373:300205] dlm: closing connection to node 4
[119373:380229] dlm: closing connection to node 6
[119373:300268] dlm: closing connection to node 7
[119373:380319] dlm: closing connection to node 8
[119545:042242] dlm: closing connection to node 3
[119545:042264] dlm: closing connection to node 8
[119545:042281] dlm: closing connection to node 7
[119545:042300] dlm: closing connection to node 2
[119545:042316] dlm: closing connection to node 1
[119545:042331] dlm: closing connection to node 4
[119545:042347] dlm: closing connection to node 5
[119545:042891] dlm: dlm user daemon left 1 lockspaces

So I believe that PVE has a bug and a big problem, but I am not sure of
that. I do know that if the pve6 node turns off brutally for some reason,
the pve5 node will lose quorum and its HA VM will also turn off, and this
behaviour will give me several problems, because right now I don't know what
I must do to start the VM on the node that is still alive.

So my questions are:
1) Why did the pve5 node lose quorum if I didn't apply any change on this
node? (This node always had the multicast snooping filter disabled.)
2) Why did the VM that is running on the pve5 node and configured for HA
turn off brutally?
3) If it is a bug, can someone apply a patch to the code?

Moreover, talking about the firewall enabled for the VMs:
I remember that about a month ago I tried to apply a firewall rule
restricting access from the cluster-communication IP address to the VMs,
without success; i.e., with a default firewall policy of allow, each time I
enabled this single restrictive rule on the VM, the VM lost all network
communication. Maybe I am doing something wrong.

So I would like to ask you a few things:

4) Can you do a test and then tell me the results?
5) If the results are positive, can you tell me how to do it?
6) And if the results are negative, can you apply a patch to the code?

And the last question:
7) As each PVE node has its own Firewall tab in the PVE GUI, I guess that
option is for applying in/out firewall rules that affect only that node,
right? Or what is that option for?



- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, January 02, 2015 5:40 AM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off


Hi,


But as i need that the VMs and the PVE host can be accessed from any
workstation, the vlan option isn't a option useful for me.

Ok



And about of cluster communication and the VMs, as i don't want that the
multicast packages go to the VMs, i believe that i can cut it for the VMs
of
two modes:

a) Removing the option post-up echo 0 
/sys/devices/virtual/net/vmbr0/bridge/multicast_snooping  to my NIC
configuration of the PVE host

Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMsturns off

2015-01-01 Thread Cesar Peschiera

Hi Alexandre.

Thanks for your reply.

But as I need the VMs and the PVE host to be accessible from any
workstation, the VLAN option isn't useful for me.


Anyway, I am testing with the I/OAT DMA Engine enabled in the hardware BIOS;
after some days with little activity, the CMAN cluster is stable. Soon I
will test it with a lot of network activity.


And about cluster communication and the VMs: as I don't want the multicast
packets to reach the VMs, I believe I can cut them off from the VMs in two
ways:


a) Removing the option post-up echo 0 >
/sys/devices/virtual/net/vmbr0/bridge/multicast_snooping from the NIC
configuration of the PVE host, if I then get stable behaviour.


b) By firewall it would be very easy, since I know the source IP address of
the cluster communication, but unfortunately the PVE wiki doesn't clearly
show how to apply it; i.e., I see the Firewall tab on the datacenter, on the
PVE hosts and on the network configuration of the VMs, and the wiki says
nothing about this. For me, a global configuration that affects all VMs of
the cluster would be wonderful, using an IPSet or some other way that is
simple to apply.


Do you have any idea how to keep the multicast packets away from the VMs in
a stable way, and how to apply it?


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, December 31, 2014 3:33 AM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and 
VMsturns off



Hi Cesar,

I think I totally forgot that we can't add an IP on an interface that is a
slave of a bridge.


Myself I'm using a tagged vlan interface for the cluster communication

something like:

auto bond0
iface bond0 inet manual
slaves eth0 eth2
bond_miimon 100
bond_mode 802.3ad
bond_xmit_hash_policy layer2

auto bond0.100
iface bond0.100 inet static
address 192.100.100.50
netmask 255.255.255.0
gateway 192.100.100.4

auto vmbr0
iface vmbr0 inet manual
bridge_ports bond0
bridge_stp off
bridge_fd 0
post-up echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping

- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Mercredi 31 Décembre 2014 05:01:37
Objet: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and 
VMsturns off


Hi Alexandre

Today, and after a week, again a node lost the cluster communication. So i
changed the configuration of the Bios Hardware to I/OAT DMA enabled (that
work very well in others nodes Dell R320 with NICs of 1 Gb/s).

Moreover, trying to follow your advice of to put 192.100.100.51 ip address
directly to bond0 and not in vmbr0, when i reboot the node, it is totally
isolated, and i see a message that says that vmbr0 missing a IP address.
Also the node is totally isolated when i apply this ip address to vmbr0:
0.0.0.0/255.255.255.255

In practical terms, can you tell me how can i add a IP address to bond0 and
also have a bridge for these same NICs?

- Now, this is my configuration:
auto bond0
iface bond0 inet manual
slaves eth0 eth2
bond_miimon 100
bond_mode 802.3ad
bond_xmit_hash_policy layer2

auto vmbr0
iface vmbr0 inet static
address 192.100.100.50
netmask 255.255.255.0
gateway 192.100.100.4
bridge_ports bond0
bridge_stp off
bridge_fd 0
post-up echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, December 19, 2014 7:59 AM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off


maybe can you try to put 192.100.100.51 ip address directly to bond0,

to avoid corosync traffic going through to vmbr0.

(I remember some old offloading bugs with 10gbe nic and linux bridge)


- Mail original - 
De: Cesar Peschiera br...@click.com.py

À: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Vendredi 19 Décembre 2014 11:08:33
Objet: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off


can you post your /etc/network/interfaces of theses 10gb/s nodes ?


This is my configuration:
Note: The LAN use 192.100.100.0/24

#Network interfaces
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual
iface eth3 inet manual
iface eth4 inet manual
iface eth5 inet manual
iface eth6 inet manual
iface eth7 inet manual
iface eth8 inet manual
iface eth9 inet manual
iface eth10 inet manual
iface eth11 inet manual

#PVE Cluster and VMs (NICs are of 10 Gb/s):
auto bond0
iface bond0 inet manual
slaves eth0 eth2
bond_miimon 100
bond_mode 802.3ad
bond_xmit_hash_policy layer2

#PVE Cluster and VMs:
auto vmbr0
iface vmbr0 inet static
address 192.100.100.51
netmask 255.255.255.0
gateway 192.100.100.4
bridge_ports bond0
bridge_stp off
bridge_fd 0
post-up echo

Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMsturns off

2014-12-30 Thread Cesar Peschiera

Hi Alexandre

Today, after a week, a node lost cluster communication again. So I changed
the hardware BIOS configuration to enable I/OAT DMA (which works very well
on other Dell R320 nodes with 1 Gb/s NICs).

Moreover, trying to follow your advice to put the 192.100.100.51 IP address
directly on bond0 and not on vmbr0: when I reboot the node it is totally
isolated, and I see a message saying that vmbr0 is missing an IP address.
The node is also totally isolated when I apply this IP address to vmbr0:
0.0.0.0/255.255.255.255

In practical terms, can you tell me how I can add an IP address to bond0 and
also keep a bridge on these same NICs?

- Now, this is my configuration:
auto bond0
iface bond0 inet manual
   slaves eth0 eth2
   bond_miimon 100
   bond_mode 802.3ad
   bond_xmit_hash_policy layer2

auto vmbr0
iface vmbr0 inet static
   address  192.100.100.50
   netmask  255.255.255.0
   gateway  192.100.100.4
   bridge_ports bond0
   bridge_stp off
   bridge_fd 0
   post-up echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, December 19, 2014 7:59 AM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off


maybe can you try to put 192.100.100.51 ip address directly to bond0,

to avoid corosync traffic going through to vmbr0.

(I remember some old offloading bugs with 10gbe nic and linux bridge)


- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Vendredi 19 Décembre 2014 11:08:33
Objet: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off


can you post your /etc/network/interfaces of theses 10gb/s nodes ?


This is my configuration:
Note: The LAN use 192.100.100.0/24

#Network interfaces
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual
iface eth3 inet manual
iface eth4 inet manual
iface eth5 inet manual
iface eth6 inet manual
iface eth7 inet manual
iface eth8 inet manual
iface eth9 inet manual
iface eth10 inet manual
iface eth11 inet manual

#PVE Cluster and VMs (NICs are of 10 Gb/s):
auto bond0
iface bond0 inet manual
slaves eth0 eth2
bond_miimon 100
bond_mode 802.3ad
bond_xmit_hash_policy layer2

#PVE Cluster and VMs:
auto vmbr0
iface vmbr0 inet static
address 192.100.100.51
netmask 255.255.255.0
gateway 192.100.100.4
bridge_ports bond0
bridge_stp off
bridge_fd 0
post-up echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
post-up echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier

#A link for DRBD (NICs are of 10 Gb/s):
auto bond401
iface bond401 inet static
address 10.1.1.51
netmask 255.255.255.0
slaves eth1 eth3
bond_miimon 100
bond_mode balance-rr
mtu 9000

#Other link for DRBD (NICs are of 10 Gb/s):
auto bond402
iface bond402 inet static
address 10.2.2.51
netmask 255.255.255.0
slaves eth4 eth6
bond_miimon 100
bond_mode balance-rr
mtu 9000

#Other link for DRBD (NICs are of 10 Gb/s):
auto bond403
iface bond403 inet static
address 10.3.3.51
netmask 255.255.255.0
slaves eth5 eth7
bond_miimon 100
bond_mode balance-rr
mtu 9000

#A link for the NFS-Backups (NICs are of 1 Gb/s):
auto bond10
iface bond10 inet static
address 10.100.100.51
netmask 255.255.255.0
slaves eth8 eth10
bond_miimon 100
bond_mode balance-rr
#bond_mode active-backup
mtu 9000

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] need help to debug random host freeze on multiple hosts

2014-12-28 Thread Cesar Peschiera

Maybe I'm asking a silly question, but did you check the syslog and kern.log
files?

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: datanom.net m...@datanom.net
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Monday, December 29, 2014 1:49 AM
Subject: Re: [pve-devel] need help to debug random host freeze on multiple 
hosts




Bad RAM stick?
Bad PSU?
Overheating of the CPU?


No errors reported in Dell iDRAC.

(I have the problem on 6 different nodes.)

I was also thinking of an electrical problem, but the voltages don't report
any error.


Maybe the only difference is that I currently have more load on all my
nodes because of the Xmas period
(we host a lot of ecommerce websites);
I'm around 60-70% load on these quad Opteron platforms.


I'll try to implement kdump today.
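
In case it helps, a rough sketch of the usual kdump setup on a Debian-based
host (package names and defaults assumed from stock Debian; the crashkernel
size is only an example):

apt-get install kdump-tools crash

# Reserve memory for the capture kernel: in /etc/default/grub add, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet crashkernel=256M"
update-grub

# Enable the dump service in /etc/default/kdump-tools:
#   USE_KDUMP=1
reboot

# After a panic the dump should land under /var/crash, where it can be
# inspected with the crash utility.

Note that kdump only captures something if the kernel actually panics; a
hard hardware lock-up may still leave nothing behind.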



- Mail original -
De: datanom.net m...@datanom.net
À: pve-devel pve-devel@pve.proxmox.com
Envoyé: Dimanche 28 Décembre 2014 19:02:04
Objet: Re: [pve-devel] need help to debug random host freeze on multiple 
hosts


On Sun, 28 Dec 2014 17:37:50 +0100 (CET)
Alexandre DERUMIER aderum...@odiso.com wrote:



I really don't known how to debug that, because the system freeze, and I 
don't have any kernel panic output in display or serial.



Can somebody help me to add something to have debug output ?


Bad RAM stick?
Bad PSU?

--
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE3E80917
-- 
/usr/games/fortune -es says:

Bridge ahead. Pay troll.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] need help to debug random host freeze on multiple hosts

2014-12-28 Thread Cesar Peschiera

I know this isn't a solution, but I will tell you only as a comment for
future decisions:

A long time ago, when I worked with Novell NetWare, I had a cache problem
with an AMD processor, so I had to disable the cache; afterwards the server
was very slow, but stable. Since then I have never recommended servers with
AMD processors.

Moreover, maybe it would be good to disable some AMD processor flags and
test it. How to do it? Honestly I don't know, but if you know, please
comment here, along with your test results (if you can).


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: datanom.net m...@datanom.net; pve-devel pve-devel@pve.proxmox.com
Sent: Monday, December 29, 2014 3:31 AM
Subject: Re: [pve-devel] need help to debug random host freeze on multiple
hosts



Maybe i ask you a silly question, did you see the syslog and kern.log
file?


Yes, sure, I have nothing in the logs.
(That's why I thought of kdump, to try to get more info.)

I really don't know whether it's a real software kernel panic or a hardware
bug.

I just saw on a VMware forum some AMD microcode bug, and saw that Dell
provides a new BIOS update this month.
I'll try to update to see if it helps.
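
(For reference, a quick way to see which microcode revision a host is
currently running, before and after the BIOS update; on recent kernels
/proc/cpuinfo shows a microcode field, otherwise dmesg may mention the
loaded revision:)

grep -m1 microcode /proc/cpuinfo
dmesg | grep -i microcode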



- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: datanom.net m...@datanom.net
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Monday, December 29, 2014 1:49 AM
Subject: Re: [pve-devel] need help to debug random host freeze on multiple
hosts



Bad RAM stick?
Bad PSU?
Overheating of the CPU?


No errors reporting in dell Idrac.

(I have the problem on 6 differents nodes.)

I was also thinking of electrical problem, but voltages don't report any
error.

Maybe the only difference is that I have more load currently on all my
nodes because of Xmas period
(We host a lot of ecommerce websites)
I'm around 60-70% load on this quad opteron platforms.


I'll try to implement kdump today.



- Mail original - 
De: datanom.net m...@datanom.net

À: pve-devel pve-devel@pve.proxmox.com
Envoyé: Dimanche 28 Décembre 2014 19:02:04
Objet: Re: [pve-devel] need help to debug random host freeze on multiple
hosts

On Sun, 28 Dec 2014 17:37:50 +0100 (CET)
Alexandre DERUMIER aderum...@odiso.com wrote:



I really don't known how to debug that, because the system freeze, and I
don't have any kernel panic output in display or serial.


Can somebody help me to add something to have debug output ?


Bad RAM stick?
Bad PSU?

--
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE3E80917
-- 
/usr/games/fortune -es says:

Bridge ahead. Pay troll.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] need help to debug random host freeze on multiple hosts

2014-12-28 Thread Cesar Peschiera

I know this isn't a solution, but I will tell you only as a comment for
future decisions:

A long time ago, when I worked with Novell NetWare, I had a cache problem
with an AMD processor, so I had to disable the cache; afterwards the server
was very slow, but stable. Since then I have never recommended servers with
AMD processors.
(Maybe you have the same problem?)

Moreover, maybe it would be good to disable some AMD processor flags and
test it. How to do it? Honestly I don't know, but if you know, please
comment here, along with your test results (if you can).

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: datanom.net m...@datanom.net; pve-devel pve-devel@pve.proxmox.com
Sent: Monday, December 29, 2014 3:31 AM
Subject: Re: [pve-devel] need help to debug random host freeze on multiple
hosts



Maybe i ask you a silly question, did you see the syslog and kern.log
file?


Yes sure , I have nothing in logs.
(That's why I thinked of kdump to try to have more info).

I'll really don't known if it's a software real kernel panic, or a hardware
bug.

I just see on vmware forum some amd microcode bug, and see that dell provide
a new bios update this month.
I'll try to update to see if it's help.



- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: datanom.net m...@datanom.net
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Monday, December 29, 2014 1:49 AM
Subject: Re: [pve-devel] need help to debug random host freeze on multiple
hosts



Bad RAM stick?
Bad PSU?
Overheating of the CPU?


No errors reporting in dell Idrac.

(I have the problem on 6 differents nodes.)

I was also thinking of electrical problem, but voltages don't report any
error.

Maybe the only difference is that I have more load currently on all my
nodes because of Xmas period
(We host a lot of ecommerce websites)
I'm around 60-70% load on this quad opteron platforms.


I'll try to implement kdump today.



- Mail original - 
De: datanom.net m...@datanom.net

À: pve-devel pve-devel@pve.proxmox.com
Envoyé: Dimanche 28 Décembre 2014 19:02:04
Objet: Re: [pve-devel] need help to debug random host freeze on multiple
hosts

On Sun, 28 Dec 2014 17:37:50 +0100 (CET)
Alexandre DERUMIER aderum...@odiso.com wrote:



I really don't known how to debug that, because the system freeze, and I
don't have any kernel panic output in display or serial.


Can somebody help me to add something to have debug output ?


Bad RAM stick?
Bad PSU?

--
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE3E80917
-- 
/usr/games/fortune -es says:

Bridge ahead. Pay troll.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Plans to Soft Fence

2014-12-26 Thread Cesar Peschiera

Also remember that fence_ack_manual is available; it is a soft fence, it
doesn't need a hardware fence device, and it only works when we don't have
network communication with the node that has some kind of problem.

- Original Message - 
From: Dietmar Maurer diet...@proxmox.com

To: Gilberto Nunes gilberto.nune...@gmail.com
Cc: pve-devel@pve.proxmox.com
Sent: Friday, December 26, 2014 3:33 PM
Subject: Re: [pve-devel] Plans to Soft Fence



fence_pve?? I can't found out such fence here... Perhaps I miss some
packet?
Where can I found it??


That is only in newest version from git. But again, this is not suitable
for
real fencing.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel




Re: [pve-devel] Plans to Soft Fence

2014-12-26 Thread Cesar Peschiera
@Gilberto:
First of all, please post the messages to the pve-devel mailing list, so
that everyone can read them and correct anything that is said wrongly.

Moreover, I do it in this way and order:

1) I wait until a node has some kind of problem
2) I analyze the problem (for example, maybe it is a problem with some card)
3) If the problem is serious, I manually disconnect the electric power of
the node with problems
4) From a node that is alive, I manually execute:
/usr/sbin/fence_ack_manual ip address or name of the node

5) Enjoy HA
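
(Before issuing the ack in step 4, it can be reassuring to check from a
surviving node what the cluster currently sees; these are the standard
PVE 3.x / redhat-cluster tools already installed on such a setup:)

pvecm status     # quorum state and member count as seen by this node
clustat          # rgmanager view: member states and where HA services run
fence_tool ls    # fence domain state, including pending fence operations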

Notes:
A) I prefer this mode because it is possible to analyze the problem
patiently before applying the manual fence. Also, we must consider that if
we don't have a hardware fence device, any other option is dangerous and the
safety of your information will be very compromised.
B) It is also possible to run a bash script from crontab to get an automatic
mode, but I know that it is very dangerous, and I don't want to have it.


  - Original Message - 
  From: Gilberto Nunes 
  To: Cesar Peschiera 
  Sent: Friday, December 26, 2014 5:51 PM
  Subject: Re: [pve-devel] Plans to Soft Fence


  Hi Cesar


  Do you know how can I trigger fence_ack automatically??

  Same sort of shell scripts... Whatever...



  2014-12-26 18:05 GMT-02:00 Cesar Peschiera br...@click.com.py:

Also remember that fence_ack_manual is available, that it is a soft fence,
it don't need a hardware of fence device, and only works when we don't have
network communication with the node that have some kind of problem.

- Original Message - From: Dietmar Maurer diet...@proxmox.com
To: Gilberto Nunes gilberto.nune...@gmail.com
Cc: pve-devel@pve.proxmox.com
Sent: Friday, December 26, 2014 3:33 PM
Subject: Re: [pve-devel] Plans to Soft Fence



fence_pve?? I can't found out such fence here... Perhaps I miss some
packet?
Where can I found it??


  That is only in newest version from git. But again, this is not suitable
  for
  real fencing.


  ___
  pve-devel mailing list
  pve-devel@pve.proxmox.com
  http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel






  -- 

  --
  A única forma de chegar ao impossível, é acreditar que é possível.
  Lewis Carroll - Alice no País das Maravilhas

  “The only way to achieve the impossible is to believe it is possible.”

  Lewis Carroll - Alice in Wonderland



  Gilberto Ferreira
  (47) 9676-7530


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Plans to Soft Fence

2014-12-26 Thread Cesar Peschiera
I would like to correct the command that should be applied:
/usr/sbin/fence_ack_manual <ip address or name of the node that was powered
off manually>

Also note that the rgmanager service is necessary, as is the fence join, and a
failover domain can be necessary if you have more than two nodes.
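As a rough illustration only (not taken from a real cluster: the domain name, node
names and vmid are examples, and whether the pvevm entry takes a domain attribute
should be checked against the rgmanager documentation), the failover-domain part of
cluster.conf could look like this for two nodes:

<rm>
  <failoverdomains>
    <!-- restricted="1": the VM may only run on the listed nodes;
         nofailback="1": do not move it back automatically when the node returns -->
    <failoverdomain name="fd-pve12" ordered="1" restricted="1" nofailback="1">
      <failoverdomainnode name="pve1" priority="1"/>
      <failoverdomainnode name="pve2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <!-- the HA-managed VM then references the domain -->
  <pvevm autostart="1" vmid="109" domain="fd-pve12"/>
</rm>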

  - Original Message - 
  From: Cesar Peschiera 
  To: Gilberto Nunes ; pve-devel@pve.proxmox.com 
  Sent: Friday, December 26, 2014 6:39 PM
  Subject: Re: [pve-devel] Plans to Soft Fence


  @Gilberto:
  First that all, please post the messages to the pve-devel mailing list, so 
all people can read and correct anything that to be wrong said.

  Moreover, i do of this mode and order:

  1) I wait that a node has some kind of problem
  2) I analyze the problem (for example, maybe be a problem with some card)
  3) If the problem is serious,  i manually disconnect  the electric energy of 
the node with problems
  4) From some node that is alive, i execute manually:
  /usr/sbin/fence_ack_manual ip address or name of the node


  5) Enjoy of HA

  Notes:
  A) I prefer this mode because will be possible to analyze the problem 
patiently before of apply the manual fence. Also we must consider that if we 
don't have a fence device by Hardware, any other option is dangerous and the 
security of your information will be very committed.
  B) Also is possible to apply a bash-script in the crontab for get a automatic 
mode, but i know that it is very dangerous, and i don't want have it.


- Original Message - 
From: Gilberto Nunes 
To: Cesar Peschiera 
Sent: Friday, December 26, 2014 5:51 PM
Subject: Re: [pve-devel] Plans to Soft Fence


Hi Cesar


Do you know how can I trigger fence_ack automatically??

Same sort of shell scripts... Whatever...



2014-12-26 18:05 GMT-02:00 Cesar Peschiera br...@click.com.py:

  Also remember that fence_ack_manual is available, that it is a soft 
fence,
  it don't need a hardware of fence device, and only works when we don't 
have
  network communication with the node that have some kind of problem.

  - Original Message - From: Dietmar Maurer diet...@proxmox.com
  To: Gilberto Nunes gilberto.nune...@gmail.com
  Cc: pve-devel@pve.proxmox.com
  Sent: Friday, December 26, 2014 3:33 PM
  Subject: Re: [pve-devel] Plans to Soft Fence



  fence_pve?? I can't found out such fence here... Perhaps I miss some
  packet?
  Where can I found it??


That is only in newest version from git. But again, this is not suitable
for
real fencing.


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel






-- 

--
A única forma de chegar ao impossível, é acreditar que é possível.
Lewis Carroll - Alice no País das Maravilhas

“The only way to achieve the impossible is to believe it is possible.”

Lewis Carroll - Alice in Wonderland



Gilberto Ferreira
(47) 9676-7530


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Plans to Soft Fence

2014-12-26 Thread Cesar Peschiera

@ Lindsay:

- Unplug Node1
- logon to Node2
- mv /etc/pve/nodes/Node1/qemu-server/*.conf 
/etc/pve/nodes/Node2/qemu-server/

- Manually start VM's on Node2



Is there anything else it would do?


Have you tested that your strategy works with a node that is powered off and
stays off?
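For reference, the manual move quoted above boils down to something like this (a
sketch: node names and the VMID are examples, and it must only be done once Node1 is
guaranteed to be down):

# on a surviving node, move the VM definition and start it there
mv /etc/pve/nodes/node1/qemu-server/101.conf /etc/pve/nodes/node2/qemu-server/
qm start 101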


- Original Message - 
From: Lindsay Mathieson

To: pve-devel@pve.proxmox.com
Sent: Friday, December 26, 2014 9:12 PM
Subject: Re: [pve-devel] Plans to Soft Fence


On Fri, 26 Dec 2014 06:39:37 PM Cesar Peschiera wrote:

4) From some node that is alive, i execute manually:
/usr/sbin/fence_ack_manual ip address or name of the node

5) Enjoy of HA



I've been manually managing the VM's on my three node cluster for now, we 
don't really need VM's transferred within seconds of a node failure - next 
day would do :)



with HA and rgmanager - is it just auto doing what I do manually?

e.g Node1 fails

- Unplug Node1

- logon to Node2

- mv /etc/pve/nodes/Node1/qemu-server/*.conf 
/etc/pve/nodes/Node2/qemu-server/


- Manually start VM's on Node2

Is there anything else it would do?

--
Lindsay



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Plans to Soft Fence

2014-12-26 Thread Cesar Peschiera

@Lindsay:
Very interesting, thanks.
It can even operate without the PVE cluster, as long as you have a copy of
the VM's config.


Even so, I personally prefer my strategy, because with a script offering menu
options anyone can apply HA without having to remember the exact command line
that must be used; it is simply more elegant and easier for people who aren't
PVE experts (a rough sketch of such a script follows below).
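A minimal sketch, assuming the operator has already cut the power of the failed node
and that the node names in the list are adapted to the real cluster:

#!/bin/bash
# tiny interactive wrapper around the manual fence ack -- a sketch, not a tested tool
echo "Which node did you power off manually?"
select NODE in pve1 pve2 pve3 quit; do
    [ "$NODE" = "quit" ] && exit 0
    read -p "Really acknowledge manual fencing of $NODE? [y/N] " OK
    if [ "$OK" = "y" ]; then
        /usr/sbin/fence_ack_manual "$NODE"
    fi
    break
done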


Anyway, today I learned something new :-)

Best regards
Cesar


- Original Message - 
From: Lindsay Mathieson lindsay.mathie...@gmail.com

To: pve-devel@pve.proxmox.com
Sent: Saturday, December 27, 2014 2:21 AM
Subject: Re: [pve-devel] Plans to Soft Fence



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns off

2014-12-23 Thread Cesar Peschiera

Hi Alexandre

Thanks for your reply; here are my answers:


I'm interested to know what this option is ;)

Memory Mapped I/O Above 4 GB : Disable


Can you check that you can write to  /etc/pve?

Yes, I can write in /etc/pve.
And talking about the red lights:
after some hours, the problem mysteriously disappeared.

Moreover, I have doubts about these 3 BIOS hardware options:
- OS Watchdog timer (option available on all my servers)
- I/OAT DMA Engine (I am testing with two Dell R320 servers, each server
with 2 Intel 1 Gb/s NICs, 4 ports each)
- Dell turbo (I don't remember the exact text);
the Dell recommendation is to enable it only in the performance profile.
This option only appears on Dell R720 servers.


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Monday, December 22, 2014 2:58 PM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off



After several checks, I found the problem in these two servers: a
configuration in the Hardware Bios that isn't compatible with the
pve-kernel-3.10.0-5, and my NICs was getting the link to down and after
up.
(i guess that soon i will comunicate my setup of BIOS in Dell R720).
... :-)


I'm interested to known what is this option ;)




The strange behaviour is that when i run pvecm status, i get this
message:
Version: 6.2.0
Config Version: 41
Cluster Name: ptrading
Cluster Id: 28503
Cluster Member: Yes

Cluster Generation: 8360

Membership state: Cluster-Member
Nodes: 8
Expected votes: 8
Total votes: 8
Node votes: 1
Quorum: 5
Active subsystems: 6
Flags:
Ports Bound: 0 177
Node name: pve5
Node ID: 5
Multicast addresses: 239.192.111.198
Node addresses: 192.100.100.50


So, you have quorum here. All nodes are ok . I don't see any problem.



And in the PVE GUI i see the red light in all the others nodes.


That's mean that the pvestatd daemon is hanging/crashed.


Can you check that you can write to  /etc/pve.

if not, try to restart

/etc/init.d/pve-cluster restart

then

/etc/init.d/pvedaemon restart
/etc/init.d/pvestatd restart



- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com, pve-devel
pve-devel@pve.proxmox.com
Envoyé: Lundi 22 Décembre 2014 04:01:31
Objet: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off

After several checks, I found the problem in these two servers: a
configuration in the Hardware Bios that isn't compatible with the
pve-kernel-3.10.0-5, and my NICs was getting the link to down and after up.
(i guess that soon i will comunicate my setup of BIOS in Dell R720).
... :-)

But now i have other problem, with the mix of PVE-manager 3.3-5 and 2.3-13
versions in a PVE cluster of 8 nodes: I am losing quorum in several nodes
very often.

Moreover, for now i can not apply a upgrade to my old PVE nodes, so for the
moment i would like to know if is possible to make a quick configuration for
that all my nodes always has quorum.

The strange behaviour is that when i run pvecm status, i get this message:
Version: 6.2.0
Config Version: 41
Cluster Name: ptrading
Cluster Id: 28503
Cluster Member: Yes
Cluster Generation: 8360
Membership state: Cluster-Member
Nodes: 8
Expected votes: 8
Total votes: 8
Node votes: 1
Quorum: 5
Active subsystems: 6
Flags:
Ports Bound: 0 177
Node name: pve5
Node ID: 5
Multicast addresses: 239.192.111.198
Node addresses: 192.100.100.50

And in the PVE GUI i see the red light in all the others nodes.

Can apply a some kind of temporal solution as Quorum: 1 for that my nodes
can work well and not has this strange behaviour? (Only until I performed
the updates)
Or, what will be the more simple and quick temporal solution for avoid to do
a upgrade in my nodes?
(something as for example: add to the rc.local file a line that says: pvecm
expected 1)

Note about of the Quorum: I don't have any Hardware fence device enabled, so
i do not care that each node always have quorum (i always can turns off the
server manually and brutally if it is necessary).

- Original Message - 
From: Cesar Peschiera br...@click.com.py

To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Saturday, December 20, 2014 9:30 AM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off



Hi Alexandre

I put 192.100.100.51 ip address directly to bond0, and i don't have
network
enabled (as if the node is totally isolated)

This was my setup:
--- 
auto bond0

iface bond0 inet static
address 192.100.100.51
netmask 255.255.255.0
gateway 192.100.100.4
slaves eth0 eth2
bond_miimon 100
bond_mode 802.3ad
bond_xmit_hash_policy layer2

auto vmbr0
iface vmbr0 inet manual
bridge_ports bond0
bridge_stp off
bridge_fd 0
post-up echo 0  /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
post-up echo 1  /sys/class/net/vmbr0/bridge/multicast_querier

.. :-(

Some

Re: [pve-devel] qemu-server: add support for hugetlbfs

2014-12-21 Thread Cesar Peschiera

Yes, I will do the test of MS-SQL-Server 2008 STD with and without THP
(Transparent Huge Pages) in a VM with 243 GB of RAM (without KSM and ballooning
enabled), as soon as I have solved my quorum problem, and I will
communicate the results.

Moreover, I guess that the best choice is to be able to enable or disable
static hugepages for each VM; in my case with MS-SQL, I guess it will
be better to have it disabled, and I will report on that.

Moreover, in the short term, with some extra HDDs in one of these servers, I
believe I can run tests that help future versions of PVE, if you
think you will find that useful (as long as the owner of these Dell
servers also agrees; since he runs PVE, I guess he will also be
interested).
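For whoever wants to reproduce the comparison, the THP state can be checked on the
host like this (a sketch; the 'kvm -id 109' process lookup is only an example of how
to find the VM's process):

# current transparent hugepage policy ([always] / [madvise] / [never])
cat /sys/kernel/mm/transparent_hugepage/enabled
# how much anonymous memory is currently backed by huge pages, host-wide
grep AnonHugePages /proc/meminfo
# same figure for a single kvm process (sum over its mappings)
grep AnonHugePages /proc/$(pgrep -f 'kvm -id 109' | head -n1)/smaps | awk '{s+=$2} END {print s" kB"}'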


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: dietmar diet...@proxmox.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Sunday, December 21, 2014 9:23 AM
Subject: Re: [pve-devel] qemu-server:add support for hugetlbfs



1.) Is there some evidence that it is faster (under realistic workload)?


I know that transparent hugepages can really be a problem with a lot of
databases (oracle, mysql, redis, ...).
I never benchmarked it myself, but I hope that Cesar will do it ;).

Disabling transparent hugepages is a solution, but it disables them for all
the VMs.

I think that static hugepages can do the job, but it needs to be tested.



2.) Where do we free/dealloc those hugepages? Are they associated with the
KVM process somehow?


I'm not sure that hugepages use memory by default when they are only defined.

But when the kvm process is starting, the memory is allocated/reserved
for the kvm process,
and the memory is freed when the kvm process stops.


- Mail original -
De: dietmar diet...@proxmox.com
À: aderumier aderum...@odiso.com, pve-devel
pve-devel@pve.proxmox.com
Envoyé: Samedi 20 Décembre 2014 09:32:42
Objet: Re: [pve-devel] qemu-server:add support for hugetlbfs


This add support for manually defined hugepages,
which can be faster than transparent hugepages for some workload like
databases


1.) Is there some evidence that it is faster (under realistic workload)?

2.) Where do we free/dealloc those hugepages? Are they associated with the
KVM process somehow?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns off

2014-12-21 Thread Cesar Peschiera

After several checks, I found the problem on these two servers: a
configuration in the hardware BIOS that isn't compatible with
pve-kernel-3.10.0-5, and my NICs were taking the link down and then up again.
(I guess that soon I will communicate my BIOS setup for the Dell R720.)
... :-)

But now I have another problem, with the mix of PVE-manager 3.3-5 and 2.3-13
versions in a PVE cluster of 8 nodes: I am losing quorum on several nodes
very often.

Moreover, for now I cannot upgrade my old PVE nodes, so for the moment I would
like to know if it is possible to make a quick configuration so that all my
nodes always have quorum.

The strange behaviour is that when I run pvecm status, I get this message:
Version: 6.2.0
Config Version: 41
Cluster Name: ptrading
Cluster Id: 28503
Cluster Member: Yes
Cluster Generation: 8360
Membership state: Cluster-Member
Nodes: 8
Expected votes: 8
Total votes: 8
Node votes: 1
Quorum: 5
Active subsystems: 6
Flags:
Ports Bound: 0 177
Node name: pve5
Node ID: 5
Multicast addresses: 239.192.111.198
Node addresses: 192.100.100.50

And in the PVE GUI I see the red light on all the other nodes.

Can I apply some kind of temporary solution, such as Quorum: 1, so that my nodes
can work well and stop showing this strange behaviour? (Only until I perform
the updates.)
Or, what would be the simplest and quickest temporary solution to avoid
upgrading my nodes?
(something like, for example: adding to the rc.local file a line that says: pvecm
expected 1)


Note about the quorum: I don't have any hardware fence device enabled, so
I do not care whether each node always has quorum (I can always turn off the
server manually and brutally if necessary).
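If anybody wants to try the rc.local idea mentioned above, a minimal sketch would be
(dangerous: with expected votes forced to 1 a partitioned node happily keeps running,
so this is only a stop-gap until the upgrade):

#!/bin/sh -e
# /etc/rc.local (sketch) -- force expected votes down so an isolated node keeps quorum
/usr/bin/pvecm expected 1
exit 0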

- Original Message - 
From: Cesar Peschiera br...@click.com.py

To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Saturday, December 20, 2014 9:30 AM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off



Hi Alexandre

I put 192.100.100.51 ip address directly to bond0, and i don't have
network
enabled (as if the node is totally isolated)

This was my setup:
---
auto bond0
iface bond0 inet static
address  192.100.100.51
netmask  255.255.255.0
gateway  192.100.100.4
slaves eth0 eth2
bond_miimon 100
bond_mode 802.3ad
bond_xmit_hash_policy layer2

auto vmbr0
iface vmbr0 inet manual
bridge_ports bond0
bridge_stp off
bridge_fd 0
post-up echo 0  /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
post-up echo 1  /sys/class/net/vmbr0/bridge/multicast_querier

.. :-(

Some other suggestion?

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, December 19, 2014 7:59 AM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off


maybe can you try to put 192.100.100.51 ip address directly to bond0,

to avoid corosync traffic going through to vmbr0.

(I remember some old offloading bugs with 10gbe nic and linux bridge)


- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Vendredi 19 Décembre 2014 11:08:33
Objet: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off


can you post your /etc/network/interfaces of theses 10gb/s nodes ?


This is my configuration:
Note: The LAN use 192.100.100.0/24

#Network interfaces
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual
iface eth3 inet manual
iface eth4 inet manual
iface eth5 inet manual
iface eth6 inet manual
iface eth7 inet manual
iface eth8 inet manual
iface eth9 inet manual
iface eth10 inet manual
iface eth11 inet manual

#PVE Cluster and VMs (NICs are of 10 Gb/s):
auto bond0
iface bond0 inet manual
slaves eth0 eth2
bond_miimon 100
bond_mode 802.3ad
bond_xmit_hash_policy layer2

#PVE Cluster and VMs:
auto vmbr0
iface vmbr0 inet static
address 192.100.100.51
netmask 255.255.255.0
gateway 192.100.100.4
bridge_ports bond0
bridge_stp off
bridge_fd 0
post-up echo 0 
/sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
post-up echo 1  /sys/class/net/vmbr0/bridge/multicast_querier

#A link for DRBD (NICs are of 10 Gb/s):
auto bond401
iface bond401 inet static
address 10.1.1.51
netmask 255.255.255.0
slaves eth1 eth3
bond_miimon 100
bond_mode balance-rr
mtu 9000

#Other link for DRBD (NICs are of 10 Gb/s):
auto bond402
iface bond402 inet static
address 10.2.2.51
netmask 255.255.255.0
slaves eth4 eth6
bond_miimon 100
bond_mode balance-rr
mtu 9000

#Other link for DRBD (NICs are of 10 Gb/s):
auto bond403
iface bond403 inet static
address 10.3.3.51
netmask 255.255.255.0
slaves eth5 eth7
bond_miimon 100
bond_mode balance-rr
mtu 9000

#A link for the NFS-Backups (NICs are of 1 Gb/s):
auto bond10
iface bond10 inet static
address 10.100.100.51
netmask 255.255.255.0
slaves eth8 eth10
bond_miimon 100
bond_mode

Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns off

2014-12-20 Thread Cesar Peschiera

Hi Alexandre

I put the 192.100.100.51 IP address directly on bond0, and I have no network
connectivity at all (as if the node were totally isolated).

This was my setup:
---
auto bond0
iface bond0 inet static
address  192.100.100.51
netmask  255.255.255.0
gateway  192.100.100.4
slaves eth0 eth2
bond_miimon 100
bond_mode 802.3ad
bond_xmit_hash_policy layer2

auto vmbr0
iface vmbr0 inet manual
bridge_ports bond0
bridge_stp off
bridge_fd 0
post-up echo 0  /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
post-up echo 1  /sys/class/net/vmbr0/bridge/multicast_querier

.. :-(

Some other suggestion?

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, December 19, 2014 7:59 AM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and 
VMsturns off



maybe can you try to put 192.100.100.51 ip address directly to bond0,

to avoid corosync traffic going through to vmbr0.

(I remember some old offloading bugs with 10gbe nic and linux bridge)


- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Vendredi 19 Décembre 2014 11:08:33
Objet: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and 
VMsturns off



can you post your /etc/network/interfaces of theses 10gb/s nodes ?


This is my configuration:
Note: The LAN use 192.100.100.0/24

#Network interfaces
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual
iface eth3 inet manual
iface eth4 inet manual
iface eth5 inet manual
iface eth6 inet manual
iface eth7 inet manual
iface eth8 inet manual
iface eth9 inet manual
iface eth10 inet manual
iface eth11 inet manual

#PVE Cluster and VMs (NICs are of 10 Gb/s):
auto bond0
iface bond0 inet manual
slaves eth0 eth2
bond_miimon 100
bond_mode 802.3ad
bond_xmit_hash_policy layer2

#PVE Cluster and VMs:
auto vmbr0
iface vmbr0 inet static
address 192.100.100.51
netmask 255.255.255.0
gateway 192.100.100.4
bridge_ports bond0
bridge_stp off
bridge_fd 0
post-up echo 0 
/sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
post-up echo 1  /sys/class/net/vmbr0/bridge/multicast_querier

#A link for DRBD (NICs are of 10 Gb/s):
auto bond401
iface bond401 inet static
address 10.1.1.51
netmask 255.255.255.0
slaves eth1 eth3
bond_miimon 100
bond_mode balance-rr
mtu 9000

#Other link for DRBD (NICs are of 10 Gb/s):
auto bond402
iface bond402 inet static
address 10.2.2.51
netmask 255.255.255.0
slaves eth4 eth6
bond_miimon 100
bond_mode balance-rr
mtu 9000

#Other link for DRBD (NICs are of 10 Gb/s):
auto bond403
iface bond403 inet static
address 10.3.3.51
netmask 255.255.255.0
slaves eth5 eth7
bond_miimon 100
bond_mode balance-rr
mtu 9000

#A link for the NFS-Backups (NICs are of 1 Gb/s):
auto bond10
iface bond10 inet static
address 10.100.100.51
netmask 255.255.255.0
slaves eth8 eth10
bond_miimon 100
bond_mode balance-rr
#bond_mode active-backup
mtu 9000

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns off

2014-12-19 Thread Cesar Peschiera

Hi Alexandre

Maybe the problem is in PVE, because:
A) When these 2 nodes have quorum (the light is green in the PVE GUI), the VM
configured in HA does not turn on.


B) Afterwards, I try to start the VM manually, and I get this error message:
Executing HA start for VM 109
Member pve5 trying to enable pvevm:109...Could not connect to resource group
manager

TASK ERROR: command 'clusvcadm -e pvevm:109 -m pve5' failed: exit code 1

C) The rgmanager service is running (some quick checks are sketched below this list)

D) In old versions of PVE, if the cluster communication died, the VMs always
kept running; on this node the VM turns off.


F) Then, when I execute "reboot", the node starts stopping the services, but
when it reaches the text "Stopping Cluster Service Manager", the node stays
frozen and does not finish the reboot.


G) So I then connect to this node by SSH and execute "reboot" again, and
the PVE node boots up brutally, as if the physical server had just been
powered on.
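Some quick checks that usually help with the "Could not connect to resource group
manager" error in item B (a sketch; the service name pvevm:109 and the node pve5 are
the ones from this report):

# is the cluster stack really up on this node?
/etc/init.d/cman status
/etc/init.d/rgmanager status
# clustat should list the members and the pvevm:109 service with its owner node
clustat
# if rgmanager is running everywhere, retry the relocation by hand
clusvcadm -e pvevm:109 -m pve5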



- Original Message - 
From: Cesar Peschiera br...@click.com.py

To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, December 19, 2014 2:04 AM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and 
VMsturns off




Yes, Multicast works (tested with omping)

Best regards
Cesar


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, December 18, 2014 12:38 PM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off


when you loose the quorum, is multicast working or not ?

(test with omping for example)


- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Jeudi 18 Décembre 2014 15:33:47
Objet: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off

Hi Alexandre

Many thanks for your reply, and here my answer and comments:


is multicast working ?

Yes

Here a better explanation:

- I have 8 nodes in a PVE cluster.
- 5 Nodes have PVE 3.3 version with the Kernel 3.10.0-19 version
- 3 Nodes have PVE 2.3 version with the Kernel 2.6.32-19 version
- Of the 5 nodes with PVE 3.3 version, 2 of them has NICs of 10 Gb/s
- I have only a VM in HA running in a PVE node that have the NIC of 10 
Gb/s,

the other node with the NIC of 10 Gb/s is his pair of HA and not has VMs
running
- In other pair of nodes (with NICs of 1 Gb/s and kernel 3.10.0-19 
version)

also has VMs in HA, but never i had problems with these nodes.

So in my PVE cluster of eight nodes, the problem of quorum begins when the
VM that is on the node that have the NIC of 10 Gb/s has activity, and only
after some hours, and this disagreeable happening of loss of quorum occur
simultaneously in my two nodes that has the NICs of 10 Gb/s.


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, December 18, 2014 9:18 AM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off


Hi Cesar,

first question :

is multicast working ?

https://pve.proxmox.com/wiki/Multicast_notes#Testing_multicast

- Mail original - 
De: Cesar Peschiera br...@click.com.py

À: pve-devel pve-devel@pve.proxmox.com
Envoyé: Jeudi 18 Décembre 2014 01:33:34
Objet: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs 
turns

off

Hi to all.

I have a serious problem with the loss of quorum in two nodes that has 
Intel

NICs of 10 gb/s.
Also the VM of this node is turned off
I don't know if it is a bug or i missed of something.

Please see this link, and if you can help me, please do it:
http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104995#post104995

Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns off

2014-12-19 Thread Cesar Peschiera

can you post your /etc/network/interfaces of these 10gb/s nodes ?


This is my configuration:
Note: The LAN uses 192.100.100.0/24

#Network interfaces
auto lo
iface lo inet loopback

iface eth0  inet manual
iface eth1  inet manual
iface eth2  inet manual
iface eth3  inet manual
iface eth4  inet manual
iface eth5  inet manual
iface eth6  inet manual
iface eth7  inet manual
iface eth8  inet manual
iface eth9  inet manual
iface eth10 inet manual
iface eth11 inet manual

#PVE Cluster and VMs  (NICs are of 10 Gb/s):
auto bond0
iface bond0 inet manual
   slaves eth0 eth2
   bond_miimon 100
   bond_mode 802.3ad
   bond_xmit_hash_policy layer2

#PVE Cluster and VMs:
auto vmbr0
iface vmbr0 inet static
   address  192.100.100.51
   netmask  255.255.255.0
   gateway  192.100.100.4
   bridge_ports bond0
   bridge_stp off
   bridge_fd 0
   post-up echo 0  
/sys/devices/virtual/net/vmbr0/bridge/multicast_snooping

   post-up echo 1  /sys/class/net/vmbr0/bridge/multicast_querier

#A link for DRBD (NICs are of 10 Gb/s):
auto bond401
iface bond401 inet static
   address  10.1.1.51
   netmask  255.255.255.0
   slaves   eth1 eth3
   bond_miimon 100
   bond_mode balance-rr
   mtu 9000

#Other link for DRBD (NICs are of 10 Gb/s):
auto bond402
iface bond402 inet static
   address  10.2.2.51
   netmask  255.255.255.0
   slaves   eth4 eth6
   bond_miimon 100
   bond_mode balance-rr
   mtu 9000

#Other link for DRBD (NICs are of 10 Gb/s):
auto bond403
iface bond403 inet static
   address  10.3.3.51
   netmask  255.255.255.0
   slaves   eth5 eth7
   bond_miimon 100
   bond_mode balance-rr
   mtu 9000

#A link for the NFS-Backups (NICs are of 1 Gb/s):
auto bond10
iface bond10 inet static
   address  10.100.100.51
   netmask  255.255.255.0
   slaves eth8 eth10
   bond_miimon 100
   bond_mode balance-rr
   #bond_mode active-backup
   mtu 9000
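To verify that the two post-up lines on vmbr0 above really took effect after an ifup,
something like this can be used (the paths are the same sysfs files the config writes
to):

# expect 0 (snooping disabled) and 1 (querier enabled) respectively
cat /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
cat /sys/class/net/vmbr0/bridge/multicast_querier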

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

2014-12-18 Thread Cesar Peschiera
Thank you very much for the offer, but so far I don't understand how
my VM can gain speed, because as I understand it:

a) My VM (Win 2008R2) cannot use hugetlbfs
b) Hugepages of 1 GB are recommended for nodes with some terabytes of RAM,
and my VM only has 251 GB of RAM assigned.


Can you explain, in theory, why it would be better?
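For what it is worth, reserving static hugepages is purely a host-side operation,
independent of the guest OS; a rough sketch (the sizes are examples, and 1 GB pages
normally have to be reserved at boot with hugepagesz=1G hugepages=N on the kernel
command line):

# reserve 2 MB hugepages at runtime and check the result
echo 2048 > /proc/sys/vm/nr_hugepages
grep -i huge /proc/meminfo
# mount the hugetlbfs filesystem that a qemu memory-backend-file would point at
mkdir -p /dev/hugepages
mount -t hugetlbfs hugetlbfs /dev/hugepages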


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, December 18, 2014 4:23 AM
Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


Also,
hugetlbfs can use 1GB pages vs 2M pages with transparent,

I'll send a patch today, It's really easy to implement it.


- Mail original -
De: aderumier aderum...@odiso.com
À: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Jeudi 18 Décembre 2014 08:11:47
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


Many thanks for your answer, but i am not sure if it is a good idea due to
that i don't understand the advantage since that i can disable the huge
pages of this mode:


There are 2 modes for hugepages:

the transparent hugepage mode, managed by the kernel,

but also, the old way, manual hugepages (aka hugetlbfs, mounted in
/dev/hugepage..)


from:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/s-memory-transhuge.html

 However, transparent hugepage mode is not recommended for database 
workloads.



So, hugepages are always useful for big-memory VMs, because they reduce cpu
usage on memory access.
But transparent hugepages sometimes don't work well with some workloads like
databases.


I think that manually defined hugepages can give a good extra boost vs
disabling THP.




Moreover, i have a serious problem with the PVE cluster communication in 
two

of eight PVE nodes, and with a VM, if you can help me, i will be extremely
grateful, please see this link:
http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104995#post104995


Do you mix 2.6.32 and 3.10 kernel in your cluster ?
(I have had some strange problems with mixed kernel, never find the 
problem).


Also, this could be a multicast snooping problem.

What are your hardware switches ?
Myself, I disable snooping on linux vmbr, enable snooping on physical 
swiches + igmp querier on physical switches.




- Mail original - 
De: Cesar Peschiera br...@click.com.py
À: pve-devel pve-devel@pve.proxmox.com, aderumier 
aderum...@odiso.com

Envoyé: Jeudi 18 Décembre 2014 07:09:11
Objet: Fw: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

Hi Alexandre

Many thanks for your answer, but i am not sure if it is a good idea due to
that i don't understand the advantage since that i can disable the huge
pages of this mode:

shell vim /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT=...transparent_hugepage=never

shell update-grub

Moreover, i have a serious problem with the PVE cluster communication in two
of eight PVE nodes, and with a VM, if you can help me, i will be extremely
grateful, please see this link:
http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104995#post104995

Best regards
Cesar

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, December 18, 2014 2:24 AM
Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


Note that currently,

hugepages are managed with transparent hugepage mecanism.

But it's seem that we can defined manually hugepages by numa nodes

-object
memory-backend-file,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,size=1024M,id=ram-node0
-numa node,nodeid=0,cpus=0,memdev=ram-node0
-object
memory-backend-file,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,size=1024M,id=ram-node1
-numa node,nodeid=1,cpus=1,memdev=ram-node1


I'll try to make a patch if you want to test.


- Mail original - 
De: aderumier aderum...@odiso.com

À: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Jeudi 18 Décembre 2014 06:15:44
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


Moreover, i guess that i have problems of Hugapages.


I have found interesting blog:

http://developerblog.redhat.com/2014/03/10/examining-huge-pages-or-transparent-huge-pages-performance/

It's explain how to see if hugepage impact performance or not.



- Mail original - 
De: Cesar Peschiera br...@click.com.py

À: aderumier aderum...@odiso.com, pve-devel
pve-devel@pve.proxmox.com
Envoyé: Jeudi 18 Décembre 2014 03:44:33
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

Hi Alexandre

I have installed your patches and with some test of MS-SQL-Server, i see a
better behavior in terms of speed (soon i will give the comparisons).

Moreover, i guess that i have problems of Hugapages.
Please see

Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

2014-12-18 Thread Cesar Peschiera

Hi Alexandre

Until you correct the problem, and to avoid the loss of cluster
communication on these two nodes, could a temporary solution be to add this
line to the rc.local file:

/usr/bin/pvecm expected 1

As I don't have a hardware fence device, and I know that to apply HA
with manual fencing I should first disconnect the electrical power of the
node that has the bad behaviour, maybe this solution can let me have my
cake and eat it too.



- Original Message - 
From: Cesar Peschiera br...@click.com.py

To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, December 18, 2014 4:45 AM
Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM



Do you mix 2.6.32 and 3.10 kernel in your cluster ?
Yes, but the problem is only on the nodes that has NICs of 10 Gb/s, the 
other PVE nodes that has NICs of 1 Gb/s and the 3.10 kernel never lost the 
cluster communication.



What are your hardware switches ?

Dell N2024 (managed, and for the moment with igmp snooping disabled)

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, December 18, 2014 4:11 AM
Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


Many thanks for your answer, but i am not sure if it is a good idea due 
to

that i don't understand the advantage since that i can disable the huge
pages of this mode:


They are 2 modes for hugepages,

the transparent hugepage mode, managed by the kernel

But also, old way, manual hugepages (aka hugetlbfs, mounted in 
/dev/hugepage..)


from:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/s-memory-transhuge.html

 However, transparent hugepage mode is not recommended for database 
workloads.



So, hugepages are always usefull for big memory vms, because it's reduce 
cpu usage on memory access.
But transparent hugepage sometimes don't work good with some workloads 
like database.


I think than manually defined hugepages can give good an extra boost vs 
disable TBL.




Moreover, i have a serious problem with the PVE cluster communication in 
two
of eight PVE nodes, and with a VM, if you can help me, i will be 
extremely

grateful, please see this link:
http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104995#post104995


Do you mix 2.6.32 and 3.10 kernel in your cluster ?
(I have had some strange problems with mixed kernel, never find the 
problem).


Also, this could be a multicast snooping problem.

What are your hardware switches ?
Myself, I disable snooping on linux vmbr, enable snooping on physical 
swiches + igmp querier on physical switches.




- Mail original -
De: Cesar Peschiera br...@click.com.py
À: pve-devel pve-devel@pve.proxmox.com, aderumier 
aderum...@odiso.com

Envoyé: Jeudi 18 Décembre 2014 07:09:11
Objet: Fw: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

Hi Alexandre

Many thanks for your answer, but i am not sure if it is a good idea due to
that i don't understand the advantage since that i can disable the huge
pages of this mode:

shell vim /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT=...transparent_hugepage=never

shell update-grub

Moreover, i have a serious problem with the PVE cluster communication in 
two

of eight PVE nodes, and with a VM, if you can help me, i will be extremely
grateful, please see this link:
http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104995#post104995

Best regards
Cesar

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, December 18, 2014 2:24 AM
Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


Note that currently,

hugepages are managed with transparent hugepage mecanism.

But it's seem that we can defined manually hugepages by numa nodes

-object
memory-backend-file,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,size=1024M,id=ram-node0
-numa node,nodeid=0,cpus=0,memdev=ram-node0
-object
memory-backend-file,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,size=1024M,id=ram-node1
-numa node,nodeid=1,cpus=1,memdev=ram-node1


I'll try to make a patch if you want to test.


- Mail original - 
De: aderumier aderum...@odiso.com

À: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Jeudi 18 Décembre 2014 06:15:44
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


Moreover, i guess that i have problems of Hugapages.


I have found interesting blog:

http://developerblog.redhat.com/2014/03/10/examining-huge-pages-or-transparent-huge-pages-performance/

It's explain how to see if hugepage impact performance or not.



- Mail original - 
De: Cesar Peschiera br...@click.com.py

À: aderumier aderum...@odiso.com, pve-devel
pve

Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns off

2014-12-18 Thread Cesar Peschiera

Hi Alexandre

Many thanks for your reply; here are my answers and comments:


is multicast working ?

Yes.

Here is a better explanation:

- I have 8 nodes in a PVE cluster.
- 5 nodes have PVE 3.3 with kernel 3.10.0-19
- 3 nodes have PVE 2.3 with kernel 2.6.32-19
- Of the 5 nodes with PVE 3.3, 2 of them have 10 Gb/s NICs
- I have only one VM in HA, running on a PVE node that has a 10 Gb/s NIC;
the other node with a 10 Gb/s NIC is its HA pair and has no VMs
running
- Another pair of nodes (with 1 Gb/s NICs and kernel 3.10.0-19)
also has VMs in HA, but I never had problems with those nodes.


So in my PVE cluster of eight nodes, the quorum problem begins when the
VM that is on the node with the 10 Gb/s NIC has activity, and only
after some hours; and this unpleasant loss of quorum occurs
simultaneously on my two nodes that have the 10 Gb/s NICs.



- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, December 18, 2014 9:18 AM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and 
VMsturns off



Hi Cesar,

first question :

is multicast working ?

https://pve.proxmox.com/wiki/Multicast_notes#Testing_multicast

- Mail original -
De: Cesar Peschiera br...@click.com.py
À: pve-devel pve-devel@pve.proxmox.com
Envoyé: Jeudi 18 Décembre 2014 01:33:34
Objet: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns 
off


Hi to all.

I have a serious problem with the loss of quorum in two nodes that has Intel
NICs of 10 gb/s.
Also the VM of this node is turned off
I don't know if it is a bug or i missed of something.

Please see this link, and if you can help me, please do it:
http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104995#post104995

Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns off

2014-12-18 Thread Cesar Peschiera
When the loss of quorum happens again on these two nodes, I will do the test
with omping, etc.


Many thanks for your reply ... :-)
Cesar

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, December 18, 2014 12:38 PM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and 
VMsturns off



when you loose the quorum, is multicast working or not ?

(test with omping for example)


- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Jeudi 18 Décembre 2014 15:33:47
Objet: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and 
VMsturns off


Hi Alexandre

Many thanks for your reply, and here my answer and comments:


is multicast working ?

Yes

Here a better explanation:

- I have 8 nodes in a PVE cluster.
- 5 Nodes have PVE 3.3 version with the Kernel 3.10.0-19 version
- 3 Nodes have PVE 2.3 version with the Kernel 2.6.32-19 version
- Of the 5 nodes with PVE 3.3 version, 2 of them has NICs of 10 Gb/s
- I have only a VM in HA running in a PVE node that have the NIC of 10 Gb/s,
the other node with the NIC of 10 Gb/s is his pair of HA and not has VMs
running
- In other pair of nodes (with NICs of 1 Gb/s and kernel 3.10.0-19 version)
also has VMs in HA, but never i had problems with these nodes.

So in my PVE cluster of eight nodes, the problem of quorum begins when the
VM that is on the node that have the NIC of 10 Gb/s has activity, and only
after some hours, and this disagreeable happening of loss of quorum occur
simultaneously in my two nodes that has the NICs of 10 Gb/s.


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, December 18, 2014 9:18 AM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off


Hi Cesar,

first question :

is multicast working ?

https://pve.proxmox.com/wiki/Multicast_notes#Testing_multicast

- Mail original - 
De: Cesar Peschiera br...@click.com.py

À: pve-devel pve-devel@pve.proxmox.com
Envoyé: Jeudi 18 Décembre 2014 01:33:34
Objet: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns
off

Hi to all.

I have a serious problem with the loss of quorum in two nodes that has Intel
NICs of 10 gb/s.
Also the VM of this node is turned off
I don't know if it is a bug or i missed of something.

Please see this link, and if you can help me, please do it:
http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104995#post104995

Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns off

2014-12-18 Thread Cesar Peschiera

Yes, Multicast works (tested with omping)
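For reference, the test was along these lines (a sketch: run the same command on
every node at roughly the same time, and the hostnames are examples):

# each node sends/receives multicast probes to/from the others; losses show up in the summary
omping -c 600 -i 1 -q pve5 pve6 pve7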

Best regards
Cesar


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, December 18, 2014 12:38 PM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off


when you loose the quorum, is multicast working or not ?

(test with omping for example)


- Mail original -
De: Cesar Peschiera br...@click.com.py
À: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Jeudi 18 Décembre 2014 15:33:47
Objet: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off

Hi Alexandre

Many thanks for your reply, and here my answer and comments:


is multicast working ?

Yes

Here a better explanation:

- I have 8 nodes in a PVE cluster.
- 5 Nodes have PVE 3.3 version with the Kernel 3.10.0-19 version
- 3 Nodes have PVE 2.3 version with the Kernel 2.6.32-19 version
- Of the 5 nodes with PVE 3.3 version, 2 of them has NICs of 10 Gb/s
- I have only a VM in HA running in a PVE node that have the NIC of 10 Gb/s,
the other node with the NIC of 10 Gb/s is his pair of HA and not has VMs
running
- In other pair of nodes (with NICs of 1 Gb/s and kernel 3.10.0-19 version)
also has VMs in HA, but never i had problems with these nodes.

So in my PVE cluster of eight nodes, the problem of quorum begins when the
VM that is on the node that have the NIC of 10 Gb/s has activity, and only
after some hours, and this disagreeable happening of loss of quorum occur
simultaneously in my two nodes that has the NICs of 10 Gb/s.


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, December 18, 2014 9:18 AM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMsturns off


Hi Cesar,

first question :

is multicast working ?

https://pve.proxmox.com/wiki/Multicast_notes#Testing_multicast

- Mail original - 
De: Cesar Peschiera br...@click.com.py

À: pve-devel pve-devel@pve.proxmox.com
Envoyé: Jeudi 18 Décembre 2014 01:33:34
Objet: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns
off

Hi to all.

I have a serious problem with the loss of quorum in two nodes that has Intel
NICs of 10 gb/s.
Also the VM of this node is turned off
I don't know if it is a bug or i missed of something.

Please see this link, and if you can help me, please do it:
http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104995#post104995

Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns off

2014-12-17 Thread Cesar Peschiera

Hi to all.

I have a serious problem with the loss of quorum on two nodes that have Intel
10 Gb/s NICs.

Also, the VM on this node gets turned off.
I don't know if it is a bug or if I missed something.

Please see this link, and if you can help me, please do:
http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104995#post104995

Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

2014-12-17 Thread Cesar Peschiera

Hi Alexandre

I have installed your patches, and with some MS-SQL-Server tests I see
better behavior in terms of speed (soon I will post the comparisons).

Moreover, I guess that I have problems with hugepages.
Please see this link, and answer me if you can:
http://forum.proxmox.com/threads/20449-Win2008R2-exaggeratedly-slow-with-256GB-RAM-and-strange-behaviours-in-PVE?p=104996#post104996
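A rough way to double-check afterwards that the numa options really reached the kvm
process (the VM id 109 is only an example; numactl comes from the numactl package):

# host NUMA topology and free memory per node
numactl --hardware
# the running kvm command line should now contain "-numa node,..." entries
ps -ef | grep -v grep | grep "kvm -id 109"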


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Sent: Tuesday, December 02, 2014 9:50 AM
Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


Hi,
can you test this:

http://odisoweb1.odiso.net/pve-qemu-kvm_2.2-2_amd64.deb
http://odisoweb1.odiso.net/qemu-server_3.3-5_amd64.deb


then edit your vm config file:


sockets: 2
cores: 4
memory: 262144
numa0: memory=131072,policy=bind
numa1: memory=131072,policy=bind


(you need 1 numa by socket, total numa memory must be equal to vm memory).

you can change cores number if you want.


and start the vm ?


- Mail original - 


De: Alexandre DERUMIER aderum...@odiso.com
À: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Envoyé: Mardi 2 Décembre 2014 12:40:29
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

Hi,

some news.

It's seem that current proxmox qemu build don't have numa support enable.

So, previous command line don't work.


I'll send a patch for pve-qemu-kvm and also to add numa options to vm config
file.



- Mail original - 


De: Alexandre DERUMIER aderum...@odiso.com
À: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Envoyé: Mardi 2 Décembre 2014 07:05:47
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


at i would like to ask you if you can give me your suggestions in
practical terms, besides the brief theoretical explanation, this is due to
that i am not a developer and i don't understand as apply it in my PVE.


About the command line, each vm is a kvm process.

So start your vm with current config, do a ps -aux , copy the big kvm -id
...  command line for your vm,

stop the vm.

then,

add my specials lines about numa,

and paste the command line to start the vm !


(kvm is so simple ;)


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Fw: Error in PVE with win2008r2 and 256GB RAM

2014-12-17 Thread Cesar Peschiera

Hi Alexandre

Many thanks for your answer, but I am not sure if it is a good idea, because
I don't understand the advantage, since I can already disable huge
pages in this way:


shell vim /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT=...transparent_hugepage=never

shell update-grub
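For a quick test without rebooting, the same thing can be toggled at runtime (a
sketch; unlike the grub change, it is lost at the next reboot):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag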

Moreover, I have a serious problem with the PVE cluster communication on two
of my eight PVE nodes, and with a VM; if you can help me, I will be extremely
grateful. Please see this link:

http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104995#post104995

Best regards
Cesar

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, December 18, 2014 2:24 AM
Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


Note that currently,

hugepages are managed with transparent hugepage mecanism.

But it's seem that we can defined manually hugepages by numa nodes

-object
memory-backend-file,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,size=1024M,id=ram-node0
-numa node,nodeid=0,cpus=0,memdev=ram-node0
-object
memory-backend-file,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,size=1024M,id=ram-node1
-numa node,nodeid=1,cpus=1,memdev=ram-node1


I'll try to make a patch if you want to test.


- Mail original -
De: aderumier aderum...@odiso.com
À: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Jeudi 18 Décembre 2014 06:15:44
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


Moreover, i guess that i have problems of Hugapages.


I have found interesting blog:

http://developerblog.redhat.com/2014/03/10/examining-huge-pages-or-transparent-huge-pages-performance/

It's explain how to see if hugepage impact performance or not.



- Mail original - 
De: Cesar Peschiera br...@click.com.py

À: aderumier aderum...@odiso.com, pve-devel
pve-devel@pve.proxmox.com
Envoyé: Jeudi 18 Décembre 2014 03:44:33
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

Hi Alexandre

I have installed your patches and with some test of MS-SQL-Server, i see a
better behavior in terms of speed (soon i will give the comparisons).

Moreover, i guess that i have problems of Hugapages.
Please see this link, and answer me if you can:
http://forum.proxmox.com/threads/20449-Win2008R2-exaggeratedly-slow-with-256GB-RAM-and-strange-behaviours-in-PVE?p=104996#post104996


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Sent: Tuesday, December 02, 2014 9:50 AM
Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


Hi,
can you test this:

http://odisoweb1.odiso.net/pve-qemu-kvm_2.2-2_amd64.deb
http://odisoweb1.odiso.net/qemu-server_3.3-5_amd64.deb


then edit your vm config file:


sockets: 2
cores: 4
memory: 262144
numa0: memory=131072,policy=bind
numa1: memory=131072,policy=bind


(you need 1 numa by socket, total numa memory must be equal to vm memory).

you can change cores number if you want.


and start the vm ?


- Mail original - 


De: Alexandre DERUMIER aderum...@odiso.com
À: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Envoyé: Mardi 2 Décembre 2014 12:40:29
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

Hi,

some news.

It's seem that current proxmox qemu build don't have numa support enable.

So, previous command line don't work.


I'll send a patch for pve-qemu-kvm and also to add numa options to vm 
config

file.



- Mail original - 


De: Alexandre DERUMIER aderum...@odiso.com
À: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Envoyé: Mardi 2 Décembre 2014 07:05:47
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


at i would like to ask you if you can give me your suggestions in
practical terms, besides the brief theoretical explanation, this is due 
to

that i am not a developer and i don't understand as apply it in my PVE.


About the command line, each vm is a kvm process.

So start your vm with current config, do a ps -aux , copy the big 
kvm -id

...  command line for your vm,

stop the vm.

then,

add my specials lines about numa,

and paste the command line to start the vm !


(kvm is so simple ;)


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

2014-12-17 Thread Cesar Peschiera

Do you mix 2.6.32 and 3.10 kernel in your cluster ?
Yes, but the problem only occurs on the nodes that have 10 Gb/s NICs; the
other PVE nodes, which have 1 Gb/s NICs and the 3.10 kernel, never lost the
cluster communication.



What are your hardware switches ?

Dell N2024 (managed, and for the moment with igmp snooping disabled)

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, December 18, 2014 4:11 AM
Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM



Many thanks for your answer, but i am not sure if it is a good idea due to
that i don't understand the advantage since that i can disable the huge
pages of this mode:


They are 2 modes for hugepages,

the transparent hugepage mode, managed by the kernel

But also, old way, manual hugepages (aka hugetlbfs, mounted in 
/dev/hugepage..)


from:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/s-memory-transhuge.html

 However, transparent hugepage mode is not recommended for database 
workloads.



So, hugepages are always useful for big-memory VMs, because they reduce CPU 
usage on memory access.
But transparent hugepages sometimes don't work well with some workloads, like 
databases.


I think that manually defined hugepages can give a good extra boost vs. 
disabling THP.
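As a rough illustration of the manual (hugetlbfs) mode, a minimal host-side sketch; the page count is only an example (65536 x 2 MB = 128 GB) and would have to match the VM's memory:

# reserve 2 MB hugepages
echo 65536 > /proc/sys/vm/nr_hugepages

# mount hugetlbfs so a qemu memory-backend-file can use it
mkdir -p /dev/hugepages
mount -t hugetlbfs hugetlbfs /dev/hugepages

# verify the reservation
grep Huge /proc/meminfo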




Moreover, I have a serious problem with the PVE cluster communication on two
of my eight PVE nodes, and with a VM; if you can help me, I will be extremely
grateful. Please see this link:
http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104995#post104995


Do you mix 2.6.32 and 3.10 kernels in your cluster ?
(I have had some strange problems with mixed kernels, and never found the 
problem).


Also, this could be a multicast snooping problem.

What are your hardware switches ?
Myself, I disable snooping on the Linux vmbr, and enable snooping plus an IGMP 
querier on the physical switches.
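For reference, disabling multicast snooping on a Proxmox bridge is usually done through sysfs (vmbr0 below is just the typical bridge name, adjust as needed):

# runtime change
echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping

# to make it persistent, a post-up line can be added to the vmbr0 stanza in /etc/network/interfaces:
#   post-up echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping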




- Mail original -
De: Cesar Peschiera br...@click.com.py
À: pve-devel pve-devel@pve.proxmox.com, aderumier 
aderum...@odiso.com

Envoyé: Jeudi 18 Décembre 2014 07:09:11
Objet: Fw: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

Hi Alexandre

Many thanks for your answer, but I am not sure if it is a good idea, because
I don't understand the advantage, since I can disable the huge
pages this way:

shell vim /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT=...transparent_hugepage=never

shell update-grub
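After the reboot, the THP state can be verified like this (the bracketed value is the active setting, so "[never]" means it is really off):

cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag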

Moreover, I have a serious problem with the PVE cluster communication on two
of my eight PVE nodes, and with a VM; if you can help me, I will be extremely
grateful. Please see this link:
http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104995#post104995

Best regards
Cesar

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, December 18, 2014 2:24 AM
Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


Note that currently,

hugepages are managed with the transparent hugepage mechanism.

But it seems that we can define hugepages manually per NUMA node:

-object
memory-backend-file,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,size=1024M,id=ram-node0
-numa node,nodeid=0,cpus=0,memdev=ram-node0
-object
memory-backend-file,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,size=1024M,id=ram-node1
-numa node,nodeid=1,cpus=1,memdev=ram-node1


I'll try to make a patch if you want to test.


- Mail original - 
De: aderumier aderum...@odiso.com

À: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Envoyé: Jeudi 18 Décembre 2014 06:15:44
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


Moreover, I guess that I have problems with hugepages.


I have found an interesting blog:

http://developerblog.redhat.com/2014/03/10/examining-huge-pages-or-transparent-huge-pages-performance/

It explains how to see whether hugepages impact performance or not.
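The linked article measures TLB misses with perf; a sketch of that kind of check on the host (the PID placeholder is the VM's kvm process, perf must be installed, and event availability depends on the CPU):

perf stat -e dTLB-loads,dTLB-load-misses,iTLB-loads,iTLB-load-misses -p <kvm-pid> sleep 60

A high dTLB miss rate on a big-memory guest is the usual hint that hugepages would help.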



- Mail original - 
De: Cesar Peschiera br...@click.com.py

À: aderumier aderum...@odiso.com, pve-devel
pve-devel@pve.proxmox.com
Envoyé: Jeudi 18 Décembre 2014 03:44:33
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

Hi Alexandre

I have installed your patches, and with some MS SQL Server tests I see
better behavior in terms of speed (soon I will give the comparisons).

Moreover, I guess that I have problems with hugepages.
Please see this link, and answer me if you can:
http://forum.proxmox.com/threads/20449-Win2008R2-exaggeratedly-slow-with-256GB-RAM-and-strange-behaviours-in-PVE?p=104996#post104996


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Sent: Tuesday, December 02, 2014 9:50 AM
Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

Re: [pve-devel] New virtio-win driver 0.1-94.iso version in fedora proyect

2014-12-09 Thread Cesar Peschiera
are you sure that virtio-net driver is correctly upgraded to last version 
?

Yes, I saw the creation date in the NIC configuration window.
Tested on a Dell PowerEdge 9500 and a Dell R720 (each one with 2 sockets).

And if you want more network speed, please see this link about the I/OAT 
DMA Engine, and if you can, also answer my questions.

http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104678#post104678

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Sent: Saturday, December 06, 2014 2:36 AM
Subject: Re: [pve-devel] New virtio-win driver 0.1-94.iso version in fedora 
proyect



mine was a Dell R620, 2x Intel 8-core, so almost the same as yours.

but win2012r2.

I'll do tests with win2008r2 next week.


(are you sure that virtio-net driver is correctly upgraded to last version 
?)




- Mail original - 


De: Cesar Peschiera br...@click.com.py
À: Alexandre DERUMIER aderum...@odiso.com, pve-devel@pve.proxmox.com
Envoyé: Samedi 6 Décembre 2014 04:22:34
Objet: Re: [pve-devel] New virtio-win driver 0.1-94.iso version in fedora 
proyect


I have tested with:
Server: Dell r720 (New)
Latest PVE version with the 3.10 kernel
RAM total in Hardware: 256 GB.
RAM for the Win2008R2 SP1 VM: 248 GB.
Processors: 2 sockets of 10 cores each one (20 threads with Hyperthreading)

Maybe NUMA (as configured in the BIOS of the Dell R720) has something to do
with the BSOD discussed in this topic (which appears sometimes).


- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Sent: Friday, December 05, 2014 8:30 AM
Subject: Re: [pve-devel] New virtio-win driver 0.1-94.iso version in fedora
proyect


I have tested with win2012R2,

I don't have a BSOD, but the network doesn't work when multiqueue is enabled.

I'll try to dig a little more.


- Mail original - 


De: Alexandre DERUMIER aderum...@odiso.com
À: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Envoyé: Vendredi 5 Décembre 2014 08:18:25
Objet: Re: [pve-devel] New virtio-win driver 0.1-94.iso version in fedora
proyect


I have tested this:

Win2008R2 SP1 VM +
net0: ..,queues=numberofqueue
=> BSOD

With both 8 and 4 queues, it isn't stable.


Thanks for the report, I'll try today with win2008R2 and win2012r2



- Mail original - 


De: Cesar Peschiera br...@click.com.py
À: Cesar Peschiera br...@click.com.py, Alexandre DERUMIER
aderum...@odiso.com, pve-devel@pve.proxmox.com
Envoyé: Vendredi 5 Décembre 2014 08:10:24
Objet: Re: [pve-devel] New virtio-win driver 0.1-94.iso version in fedora
proyect

Hi

I have tested this:

Win2008R2 SP1 VM +
net0: ..,queues=numberofqueue
=> BSOD

With both 8 and 4 queues, it isn't stable.

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Sent: Tuesday, December 02, 2014 10:25 AM
Subject: Re: [pve-devel] New virtio-win driver 0.1-94.iso version in
fedora proyect



Will network multiqueue support work?
Does QEMU support network multiqueue?

yes, and proxmox already supports it (not in the gui).

just edit your vm config file:

net0: ..,queues=numberofqueue

(and use new drivers, if not I think you'll have a bsod)

it should improve rx or tx, I don't remember exactly.
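Purely as an illustration (the MAC address, bridge, and queue count below are invented, not from this thread), a complete net0 line with multiqueue could look like:

net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,queues=4

For Linux guests the extra queues usually also have to be enabled inside the guest, for example with ethtool -L eth0 combined 4; for Windows the virtio-net driver is expected to handle this itself.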




- Mail original - 


De: Cesar Peschiera br...@click.com.py
À: Alexandre DERUMIER aderum...@odiso.com, pve-devel@pve.proxmox.com
Envoyé: Mardi 2 Décembre 2014 14:21:20
Objet: Re: [pve-devel] New virtio-win driver 0.1-94.iso version in fedora
proyect

Hi Alexandre

Will network multiqueue support work?
Does QEMU support network multiqueue?

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Sent: Tuesday, December 02, 2014 2:57 AM
Subject: Re: [pve-devel] New virtio-win driver 0.1-94.iso version in
fedora
proyect


Oh Great !

I know that the net driver has been greatly improved with multiqueue
support,

and also some critical bugfixes in virtio-blk flush (FUA) support.

I'll test it this week :)

- Mail original - 


De: Cesar Peschiera br...@click.com.py
À: pve-devel@pve.proxmox.com
Envoyé: Mardi 2 Décembre 2014 03:49:11
Objet: [pve-devel] New virtio-win driver 0.1-94.iso version in fedora
proyect

Yesterday the Fedora project released a new version of the virtio-win drivers,
version 0.1-94, which can be downloaded here:
http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/virtio-win-0.1-94.iso

Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

[pve-devel] Problems when i applied a change in the ID number of the nodes in the cluster.conf file

2014-12-05 Thread Cesar Peschiera

Hi

A little help, please...

I have a mix of PVEs in my PVE cluster.

- Some servers have PVE 2.3
- Others: PVE 3.1
- Others: PVE 3.2-4 (2 nodes in HA with DRBD)
- Others: PVE 3.3-5 (2 nodes in HA with DRBD)

When I added the latest 2 servers (with PVE 3.3-5), these PVE nodes lost the PVE
cluster communication, so these are my actions:
1) I removed the HA configuration of all my PVE nodes in the cluster.conf
file (at this point, I didn't see problems).
2) Afterwards, I changed the IDs of my PVE nodes in the cluster.conf file (I only
wanted the ID numbers in the cluster.conf file to match the numbers of the
PVE hostnames that I have installed).


From this point, all my PVE nodes lost the cluster communication.


My question:
How can I fix the cluster communication? (with a live CD or anything that
helps)

Note: if necessary, I can power off my VMs and PVE hosts; I only want
to know how to solve this problem without needing a reinstall.
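For readers in the same spot, a minimal diagnostic sketch, assuming a PVE 3.x cman/corosync cluster (the exact recovery depends on what ended up in cluster.conf):

# membership and quorum as each node sees it
pvecm status
pvecm nodes

# without quorum /etc/pve is read-only; to inspect or repair the config locally:
service pve-cluster stop
pmxcfs -l      # mounts /etc/pve in local mode
# ...review /etc/pve/cluster.conf, then stop pmxcfs and restart pve-cluster and cman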

Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

2014-12-04 Thread Cesar Peschiera

Hi Alexandre

Now, with the PVE 3.10 kernel, when I uncomment the line:
#push @$cpuFlags , 'hv_vapic if !$nokvm;  #fixme, my win2008R2 hang at boot
with this

And after I do a restart:
shell /etc/init.d/pvedaemon restart

I see these error messages:
---
Restarting PVE Daemon: pvedaemonBareword found where operator expected at
/usr/share/perl5/PVE/QemuServer.pm line 2639, near push @$cpuFlags ,
'hv_spinlocks
 (Might be a runaway multi-line '' string starting on line 2638)
   (Do you need to predeclare push?)
String found where operator expected at /usr/share/perl5/PVE/QemuServer.pm
line 2642, near if ($ost eq '
 (Might be a runaway multi-line '' string starting on line 2639)
   (Missing semicolon on previous line?)
Bad name after win7' at /usr/share/perl5/PVE/QemuServer.pm line 2642.
Compilation failed in require at /usr/share/perl5/PVE/VZDump/QemuServer.pm
line 14.
BEGIN failed--compilation aborted at
/usr/share/perl5/PVE/VZDump/QemuServer.pm line 14.
Compilation failed in require at /usr/share/perl5/PVE/VZDump.pm line 32.
Attempt to reload PVE/QemuServer.pm aborted.
Compilation failed in require at /usr/share/perl5/PVE/API2/Nodes.pm line 25.
BEGIN failed--compilation aborted at /usr/share/perl5/PVE/API2/Nodes.pm line
25.
Compilation failed in require at /usr/share/perl5/PVE/API2.pm line 14.
BEGIN failed--compilation aborted at /usr/share/perl5/PVE/API2.pm line 14.
Compilation failed in require at /usr/bin/pvedaemon line 14.
BEGIN failed--compilation aborted at /usr/bin/pvedaemon line 14.
(warning).
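Judging only from the Perl errors above (a runaway string starting on the hv_vapic line), the closing quote appears to have been lost when uncommenting; presumably the line should read something like:

push @$cpuFlags , 'hv_vapic' if !$nokvm;

(an assumption based on the error output, not confirmed in the thread)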

Best regards
Cesar



- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Sent: Monday, December 01, 2014 3:55 AM
Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


also, can you post your vm config file ?

Proxmox use some hyper-v features to help for some case, including high
memory.

But currently, 2 features are missing.

in /usr/share/perl5/PVE/QemuServer


   if ($ost eq 'win7' || $ost eq 'win8' || $ost eq 'w2k8' ||
   $ost eq 'wvista') {
   push @$globalFlags, 'kvm-pit.lost_tick_policy=discard';
   push @$cmd, '-no-hpet';
   #push @$cpuFlags , 'hv_vapic if !$nokvm;  #fixme, my win2008R2
hang at boot with this
   push @$cpuFlags , 'hv_spinlocks=0x' if !$nokvm;
   }

   if ($ost eq 'win7' || $ost eq 'win8') {
   push @$cpuFlags , 'hv_relaxed' if !$nokvm;
   }


maybe you can try to uncomment
#push @$cpuFlags , 'hv_vapic if !$nokvm;  #fixme, my win2008R2 hang at
boot with this

and restart
/etc/init.d/pvedaemon restart

and start your vm again.

(I think there was a bug in a previous kernel, but maybe it's fixed now).


Another missing feature is hv_time, the paravirtualized clock, but AFAIK it
only works with the 3.10 kernel.

so, you can try

   if ($ost eq 'win7' || $ost eq 'win8') {
   push @$cpuFlags , 'hv_relaxed' if !$nokvm;
   push @$cpuFlags , 'hv_time' if !$nokvm;
   }


- Mail original - 


De: Alexandre DERUMIER aderum...@odiso.com
À: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Envoyé: Lundi 1 Décembre 2014 06:47:35
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


3) In parallel to this strange behavior, htop on PVE is showing that the
process that consume a lot of processor is: /usr/bin/kvm, this behavior
is
repetitive in all threads of processor that the VM has access.


The kvm process is your vm ;) (each guestvm is a kvm process)



4) In parallel to this strange behavior, while that the VM is configured
with 62GB RAM, htop on PVE is showing that the use of the memory is
growing
in each second that elapses, and when his memory bar say that have used
63964/257912MB, the consumption of threads of processors of this VM
returns to normal state. While that the VM has more RAM, the behavior is
the
same, but the VM takes longer time to reach to a normal state.


At boot, Windows fills the memory with zeros, and that uses CPU.

BTW, do you use the ballooning/dynamic memory feature of Proxmox ?
Can you try to add balloon: 0 to your config? (to disable the balloon device)



5) As a second test, after of see all these behaviours, I log to Windows
Server, and htop show me a high consumption of many threads of processors
(+/- 50%), but after that the session was initiated, the consumption of
processors returns to normal state.



6) In htop, i see the same behaviour of consumption of processor while
that
a session of windows is closing, i guess that any thing that i do in this
VM
will consume processor resources extra needlessly.


I really don't know; this would require some CPU profiling inside Windows.



- Mail original - 


De: Cesar Peschiera br...@click.com.py
À: pve-devel@pve.proxmox.com
Envoyé: Lundi 1 Décembre 2014 06:35:14
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

Hi to PVE team developers

Re: [pve-devel] New virtio-win driver 0.1-94.iso version in fedora proyect

2014-12-04 Thread Cesar Peschiera

Hi

I have tested this:

Win2008R2 SP1 VM +
net0: ..,queues=numberofqueue
=> BSOD

With both 8 and 4 queues, it isn't stable.

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Sent: Tuesday, December 02, 2014 10:25 AM
Subject: Re: [pve-devel] New virtio-win driver 0.1-94.iso version in 
fedora proyect




Will network multiqueue support work?
Does QEMU support network multiqueue?

yes, and proxmox already supports it (not in the gui).

just edit your vm config file:

net0: ..,queues=numberofqueue

(and use new drivers, if not I think you'll have a bsod)

it should improve rx or tx, I don't remember exactly.




- Mail original - 


De: Cesar Peschiera br...@click.com.py
À: Alexandre DERUMIER aderum...@odiso.com, pve-devel@pve.proxmox.com
Envoyé: Mardi 2 Décembre 2014 14:21:20
Objet: Re: [pve-devel] New virtio-win driver 0.1-94.iso version in fedora 
proyect


Hi Alexandre

Will network multiqueue support work?
Does QEMU support network multiqueue?

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Sent: Tuesday, December 02, 2014 2:57 AM
Subject: Re: [pve-devel] New virtio-win driver 0.1-94.iso version in 
fedora

proyect


Oh Great !

I known that the net driver has been greatly improved with multiqueue
support.

and also some critical bugfix in virtio-blk flush (fua) support.

I'll test it this week :)

- Mail original - 


De: Cesar Peschiera br...@click.com.py
À: pve-devel@pve.proxmox.com
Envoyé: Mardi 2 Décembre 2014 03:49:11
Objet: [pve-devel] New virtio-win driver 0.1-94.iso version in fedora
proyect

Yesterday the Fedora project released a new version of the virtio-win drivers,
version 0.1-94, which can be downloaded here:
http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/virtio-win-0.1-94.iso

Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] ballon: 0 don't work with kernel 3.10

2014-12-03 Thread Cesar Peschiera

Hi PVE team.

The PVE VM configuration option ballon: 0 doesn't work with the
3.10.0-5-pve kernel.
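For what it's worth, the option is spelled balloon: 0 in the VM config file (/etc/pve/qemu-server/<vmid>.conf). A quick way to check whether the balloon device was really dropped; VMID 101 is just a placeholder, and this assumes qemu-server adds a virtio-balloon device by default:

ps aux | grep "[k]vm -id 101"
# with balloon: 0 working, the command line should contain no virtio-balloon device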


Best regards
Cesar




___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

2014-12-02 Thread Cesar Peschiera

Hi Alexandre

Thanks for your reply

I am not sure that will be necessary; please read the PDF document in the
web link below.
It says many things, for example that NUMA and huge pages work in automatic
mode and the optimization happens on the fly, so I guess that I don't need to
worry about it anymore.
(My server has 256 GB RAM, and the VM will have 248 GB RAM.)

Please especially see these titles:
(The support for these features is in the kernel)
- Automatic NUMA Balancing
- Configuring Transparent Huge Pages (automatic)

What do you think? ...
Will I need to worry about it (NUMA and huge pages)?
I would like to hear a second opinion.

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/pdf/Virtualization_Tuning_and_Optimization_Guide/Red_Hat_Enterprise_Linux-7-Virtualization_Tuning_and_Optimization_Guide-en-US.pdf
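A quick way to see whether those two automatic features are actually active on the host (the first file only exists on kernels that support automatic NUMA balancing):

cat /proc/sys/kernel/numa_balancing                    # 1 = automatic NUMA balancing on
cat /sys/kernel/mm/transparent_hugepage/enabled        # the bracketed value is the active THP mode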

Moreover, if I need to add your lines, how will I do it, given that the VM
will be in HA?

Best regards
Cesar



- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Sent: Tuesday, December 02, 2014 3:05 AM
Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM



at i would like to ask you if you can give me your suggestions in
practical terms, besides the brief theoretical explanation, this is due
to
that i am not a developer and i don't understand as apply it in my PVE.


About the command line, each vm is a kvm process.

So start your vm with current config, do a ps -aux ,  copy the  big
kvm -id ...  command line for your vm,

stop the vm.

then,

add my specials lines about numa,

and paste the command line to start the vm !


(kvm is so simple ;)


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] New virtio-win driver 0.1-94.iso version in fedora proyect

2014-12-02 Thread Cesar Peschiera

Hi Alexandre

Will network multiqueue support work?
Does QEMU support network multiqueue?

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Sent: Tuesday, December 02, 2014 2:57 AM
Subject: Re: [pve-devel] New virtio-win driver 0.1-94.iso version in fedora
proyect


Oh Great !

I known that the net driver has been greatly improved with multiqueue
support.

and also some critical bugfix in virtio-blk flush (fua) support.

I'll test it this week :)

- Mail original - 


De: Cesar Peschiera br...@click.com.py
À: pve-devel@pve.proxmox.com
Envoyé: Mardi 2 Décembre 2014 03:49:11
Objet: [pve-devel] New virtio-win driver 0.1-94.iso version in fedora
proyect

Yesterday the Fedora project released a new version of the virtio-win drivers,
version 0.1-94, which can be downloaded here:
http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/virtio-win-0.1-94.iso

Best regards
Cesar

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

2014-12-02 Thread Cesar Peschiera

OOOooo... that will be wonderful !!!

I don't want to have to wait for such patches.

The servers (which will be in HA) will be in production in a few days.

Many thanks, and I will wait.

Important question:
For huge pages: will PVE need a patch?

- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Sent: Tuesday, December 02, 2014 8:40 AM
Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


Hi,

some news.

It's seem that current proxmox qemu build don't have numa support enable.

So, previous command line don't work.


I'll send a patch for pve-qemu-kvm and also to add numa options to vm config 
file.




- Mail original - 


De: Alexandre DERUMIER aderum...@odiso.com
À: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Envoyé: Mardi 2 Décembre 2014 07:05:47
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


at i would like to ask you if you can give me your suggestions in
practical terms, besides the brief theoretical explanation, this is due to
that i am not a developer and i don't understand as apply it in my PVE.


About the command line, each vm is a kvm process.

So start your vm with current config, do a ps -aux , copy the big kvm -id 
...  command line for your vm,


stop the vm.

then,

add my specials lines about numa,

and paste the command line to start the vm !


(kvm is so simple ;)


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

2014-12-02 Thread Cesar Peschiera

Hi Alexandre

OOOooo ...many thanks for doing the patches,
...I will be installing them.

Moreover, I would like to tell you two things:
1) I have these packages installed on my two servers (configured in HA, and
both are equal in software and hardware):

pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 3.10.0-5-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-3.10.0-5-pve: 3.10.0-19
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

The question is: will your patches work with these packages installed?

2) When I do the tests, what commands do you want me to run? (That way I can
send you the results.)
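For reference, the host-side NUMA commands typically used for this kind of check might be (numactl and numastat come from the numactl package, and the pidfile path is the usual qemu-server location, with VMID 101 as a placeholder):

numactl --hardware                                   # NUMA topology and free memory per node
numastat -p $(cat /var/run/qemu-server/101.pid)      # where the VM's memory actually sits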





- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com

To: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Sent: Tuesday, December 02, 2014 9:50 AM
Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


Hi,
can you test this:

http://odisoweb1.odiso.net/pve-qemu-kvm_2.2-2_amd64.deb
http://odisoweb1.odiso.net/qemu-server_3.3-5_amd64.deb


then edit your vm config file:


sockets: 2
cores: 4
memory: 262144
numa0: memory=131072,policy=bind
numa1: memory=131072,policy=bind


(you need 1 numa by socket, total numa memory must be equal to vm memory).

you can change cores number if you want.


and start the vm ?


- Mail original - 


De: Alexandre DERUMIER aderum...@odiso.com
À: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Envoyé: Mardi 2 Décembre 2014 12:40:29
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

Hi,

some news.

It's seem that current proxmox qemu build don't have numa support enable.

So, previous command line don't work.


I'll send a patch for pve-qemu-kvm and also to add numa options to vm config 
file.




- Mail original - 


De: Alexandre DERUMIER aderum...@odiso.com
À: Cesar Peschiera br...@click.com.py
Cc: pve-devel@pve.proxmox.com
Envoyé: Mardi 2 Décembre 2014 07:05:47
Objet: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM


at i would like to ask you if you can give me your suggestions in
practical terms, besides the brief theoretical explanation, this is due to
that i am not a developer and i don't understand as apply it in my PVE.


About the command line, each vm is a kvm process.

So start your vm with current config, do a ps -aux , copy the big kvm -id 
...  command line for your vm,


stop the vm.

then,

add my specials lines about numa,

and paste the command line to start the vm !


(kvm is so simple ;)


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


  1   2   3   >