[ovirt-users] network performance issues

2019-07-02 Thread Guy Brodny
Hello.
We have installed oVirt on an HPE ProLiant XL270d Gen10.
The server has 2 * Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz,
1.5T RAM, 23T local SSD, 4 Nvidia V100 32GB GPUs and 2 * Mellanox
Technologies MT27800 Family [ConnectX-5] cards.


The Mellanox cards are two 100Gbit dual-port adapters, each connected with a
single 100Gbit port to a Mellanox 100Gbit switch, which has access to a
16GByte storage server.
MTU is set to 9000 for jumbo frames.
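
A quick way we double-check that jumbo frames actually work end to end from
the host (and from inside the VM) towards the storage network (a minimal
sketch; the interface name and storage IP below are placeholders):

# 8972 = 9000 bytes MTU minus 28 bytes of IPv4/ICMP headers; -M do forbids fragmentation
ip link show dev ens1f0 | grep mtu
ping -M do -s 8972 -c 3 <storage-server-ip>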

Running our application on a VM created on top of this server gives lower
network performance than on the host.
I tried the normal oVirt network driver, an SR-IOV Virtual Function, and
PCI-passthrough of one of the Mellanox cards directly to the VM, but I
never get the same result as on the host.
I also tried allocating the NUMA node directly to the VM, but it doesn't
improve the results.
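
For reference, this is roughly how I have been checking the host topology and
the VM's pinning (a sketch using standard tools; the interface and VM name
below are placeholders):

# host NUMA layout and which node the Mellanox NIC sits on
numactl --hardware
cat /sys/class/net/ens1f0/device/numa_node
# current vCPU pinning and NUMA tuning of the guest
virsh vcpupin <vm-name>
virsh numatune <vm-name>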

I'm testing with FIO, performing sequential reads from the storage; this is
the command we run both on the host and in the VM.

sqream@host-3-171 /media/StorONE1/tpch10t_for_4.0/logs/192.168.3.171_5000 $
fio --randrepeat=1 --ioengine=sync --direct=1 --gtod_reduce=1 --name=test
--filename=/media/StorONE3/t1e1st221www22.file --bs=2m --iodepth=24
--size=25G  --numjobs=14  --readwrite=read --rwmixread=100
Run status group 0 (all jobs):
   READ: bw=3919MiB/s (4109MB/s), 280MiB/s-314MiB/s (294MB/s-329MB/s),
io=350GiB (376GB), run=81630-91459msec

Before deploying oVirt I measured 7.5GB/sec on the host running the same FIO
command, and running the test on another host that is not running oVirt I can
still get 7.9GB/sec.
Are there any optimizations I should perform to make sure the guest is
getting the best possible network performance? I'm mostly concerned about
GPFS performance, as this is our main filesystem.

I appreciate any comments or suggestions.




Kind Regards,


*Guy Brodny*
Cloud Architecture & DevOps Manager | SQream
M: +972-54-2279528
sqream.com  | Linkedin
| Twitter

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3Q5RQFICK3POPK3ADZ3A4WYFD5UR6QY3/


[ovirt-users] Network Performance Issues

2019-02-07 Thread Bryan Sockel
Hi,
 
I currently have a 4-node oVirt cluster running. Each node is configured with
an active-passive network setup, with each link being 10 GB. After looking
over my performance metrics collected via Observium, I am noticing that the
network traffic rarely exceeds 100 MB. I am seeing this across all four of my
servers and my 2 storage arrays, which are also connected via the same 10 GB
links.
 
What is the best way to troubleshoot this problem?
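
For reference, the kind of raw throughput check I was planning to run between
two of the nodes (a sketch; the IP is a placeholder and assumes iperf3 is
installed on both ends):

# on one node
iperf3 -s
# on another node (or a storage array), single stream and then 4 parallel streams
iperf3 -c <node-ip> -t 30
iperf3 -c <node-ip> -t 30 -P 4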
 
Thanks
 
 ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P4KJR6F4QPQRN3XFULCHV7ZDU77LJLEE/


Re: [ovirt-users] Network Performance

2017-05-10 Thread Bryan Sockel
About what I would expect to see:

[  3] local 10.20.101.207 port 43688 connected with 10.20.101.181 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec   979 MBytes   820 Mbits/sec


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network Performance

2017-05-10 Thread Juan Pablo
OK, so those numbers are not bad. Just to check, can you please verify the same
test from another server to the VM? (a server other than the VM's host, please).

regards,
JP

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network Performance

2017-05-10 Thread Bryan Sockel
Hi Juan,

Currently we are seeing the lag/delay in the VMs. The slowness appears when
we access applications over the network rather than locally. For instance,
PuTTY opens quickly when run locally, but when run from the network it may
take a minute or so to launch.

I am watching the nload graph on the physical server and am seeing no (or
minimal) traffic go out on the VLAN the VM is running on.

Currently my bonding options are set up as follows:

BONDING_OPTS='mode=4 miimon=1'

I will be changing the options to:
mode=4 miimon=100 xmit_hash_policy=2
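
For reference, the active bond settings can be verified after the change like
this (a sketch; bond0 is a placeholder for the bond device name). Note that
with mode=4 (802.3ad/LACP) a single TCP stream is hashed onto one slave, so a
single iperf flow will still top out at about 1 Gbit/s even on a 4 x 1 Gb bond.

cat /proc/net/bonding/bond0
grep -E 'Bonding Mode|Transmit Hash|MII Status' /proc/net/bonding/bond0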



Iperf Host to host before Bonding Options change
[  4] local 10.20.101.181 port 5001 connected with 10.20.101.183 port 54892
[ ID] Interval   Transfer Bandwidth
[  4]  0.0-10.0 sec  1.09 GBytes   935 Mbits/sec

Host to vm Before Bonding Options Change

[  3] local 10.20.101.207 port 33142 connected with 10.20.101.181 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec  23.6 GBytes  20.2 Gbits/sec

Host to Gluster Servers

[  5] local 10.20.101.181 port 5001 connected with 10.20.101.185 port 51588
[  5]  0.0-10.0 sec  1.07 GBytes   915 Mbits/sec
[  4] local 10.20.101.181 port 5001 connected with 10.20.101.187 port 45548
[  4]  0.0-10.0 sec   946 MBytes   790 Mbits/sec


After Bonding Changes:

Host to Host
[  3] local 10.20.101.183 port 57656 connected with 10.20.101.181 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec  1.04 GBytes   897 Mbits/sec

Host to VM

[  3] local 10.20.101.207 port 43686 connected with 10.20.101.181 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec  26.2 GBytes  22.5 Gbits/sec

Host to Storage
[  3] local 10.20.101.185 port 51590 connected with 10.20.101.181 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec  1.02 GBytes   876 Mbits/sec


The VM was running on the same host I was testing the performance against.


After further testing and investigation I have noticed that Kaspersky AV
may be the main factor.

Thanks


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network Performance

2017-05-10 Thread Juan Pablo
Bryan, could you please elaborate on your setup? Do you see lag on your
virt host or in your VMs? What have you tried so far to test? Can you
please run "iperf -s" on your server (or wherever you see the lag), run
"iperf -c $serveripaddress" from the other end (replacing $serveripaddress
with your server's interface IP), and paste the output?
Can you also describe whether this is a layer 2 or layer 3 network?
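
For example (a sketch; the IP is a placeholder):

# on the machine where you see the lag
iperf -s
# on another machine, pointing at that server's interface IP
iperf -c <server-ip> -t 10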

regards,
JP

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Network Performance

2017-05-10 Thread Bryan Sockel
I am doing some testing with our current oVirt setup, and I am seeing some
lag when I attempt to launch or access files from a network share, or even
run Windows updates.

My current setup is a 4 x 1 Gb NIC bond with multiple VLANs attached. Server
usage is currently low. I have also not set up any additional network QoS,
and everything else is set to default.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Network performance drop when compared to other hypervisor with vhost_net on for UDP

2015-01-14 Thread mad Engineer
I am running RHEL 6.5 as both host and guest on an HP server.
The server has 128G RAM and 48 cores [with HT enabled].

3 VMs are running, 2 of them pinned to the first 24 pCPUs with proper NUMA pinning:

Guests:

VM1:
6 vCPUs pinned to 6 pCPUs on NUMA node 1, with 16G RAM

VM2:
6 vCPUs pinned to 6 pCPUs on NUMA node 0, with 16G RAM

VM3:
2 vCPUs, no pinning, 4G RAM

HOST
The host has 10 free cores + 24 HT threads which are not allocated and are available.
The host also runs a small single-threaded application that uses ~4G RAM.

Total resources left to the host are 10 cores + 24 HT threads = 34, and 92G of
unallocated RAM [the VMs don't even use 70% of their allocated RAM]; KSM is not running.

Networking:
The VMs use a Linux bridge connected to a 1Gbps eth0, with an IP assigned on eth0
[this IP is used for accessing the application running on the host].
All VMs use virtio and vhost is on.

Traffic on the virtual machines is ~3MBps and combined traffic on the host is ~14MBps.

The vhost-<pid-of-qemu-process> kernel thread sometimes uses ~35% CPU.
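
For reference, this is how I confirm vhost-net is actually in use for the
guests (a sketch; qemu-kvm as the process name is an assumption for RHEL 6):

lsmod | grep vhost_net
# the qemu command line should carry vhost=on on its tap netdev options
ps -ef | grep '[v]host=on'
# the corresponding vhost kernel threads are named vhost-<qemu pid>
ps -ef | grep '[v]host-'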


There is no packet loss, drop or latency, but the issue is that with the same
setup on VMware, with the same sizing of virtual machines (the only
difference being that the application running on the host has moved to a
fourth VM, so on VMware there are 4 VMs), the application gives a better
number: on KVM that number is 310 and on VMware it is 570. The application
uses UDP to communicate.

I tried turning vhost off; the value is still the same. (I hope the
vhost-net UDP issue is solved.)

Thanks for any help
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network performance drop when compared to other hypervisor with vhost_net on for UDP

2015-01-14 Thread Martin Pavlík
Hi,

Off the top of my head, you could try to play with tuned on both the guest
and the host.

###Install###
 yum install tuned
 /etc/init.d/tuned start
 chkconfig tuned on

###Usage###
List the profiles:
 tuned-adm list

Change your profile:
 tuned-adm profile throughput-performance

Maybe try experimenting with other profiles.

HTH

Martin Pavlik
RHEV QE


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network performance drop when compared to other hypervisor with vhost_net on for UDP

2015-01-14 Thread mad Engineer
Thanks Martin,
How can we see the changes made by tuned?
For the virtual guest I see it changes the I/O scheduler to deadline. Is
there any way to see what parameters each profile is going to change?

Thanks


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network performance drop when compared to other hypervisor with vhost_net on for UDP

2015-01-14 Thread Martin Pavlík
Check the files under /etc/tune-profiles/.
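
For example (a sketch; the profile names below are assumptions, and the exact
file layout can vary between tuned versions on RHEL 6):

ls /etc/tune-profiles/
ls /etc/tune-profiles/throughput-performance/
# e.g. where the deadline elevator gets set for the virtual-guest profile
grep -ri elevator /etc/tune-profiles/virtual-guest/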

HTH

Martin Pavlik

RHEV QE


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users