Re: [ovirt-users] Huge pages in guest with newer oVirt versions

2017-09-28 Thread Gianluca Cecchi
Sorry for the late reply, I didn't have time to give feedback until now.

On Mon, Sep 18, 2017 at 10:33 PM, Arik Hadas  wrote:

>
>
> On Mon, Sep 18, 2017 at 10:50 PM, Martin Polednik 
> wrote:
>
>> The hugepages are no longer a hook, but part of the code base. They
>> can be configured via the engine property `hugepages`, where the value of
>> the property is the size of the pages in KiB (1048576 = 1G, 2048 = 2M).
>>
>
> Note that the question is about 4.1 and it doesn't seem like this change
> was backported to the 4.1 branch, right?
>

And in fact it seems I don't have this in the 4.1.5 engine:

# engine-config -l | grep -i huge
#

Assuming it is OK for the upcoming 4.2/master, how am I supposed to use it? I
would like to set hugepages at the VM level, not at the engine level.
Or do you mean that in 4.2, if I set the engine parameter named "hugepages"
to 2M, a custom property will automatically appear inside the VM config
section? Or somewhere else?
Any screenshot of this?
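
In other words, is the intended workflow something like this? Only my guess,
assuming the new property goes through the usual engine custom-property
mechanism (the key name and regex below are my assumption, not confirmed for
4.2):

# engine-config -s "UserDefinedVMProperties=hugepages=^[0-9]+$"
# systemctl restart ovirt-engine

and then Admin Portal -> Edit VM -> Custom Properties -> hugepages = 2048 for
2M pages?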

In the meantime I'm using the "old" style with the hooks I found here:
http://resources.ovirt.org/pub/ovirt-4.1/rpm/el7/noarch/vdsm-hook-qemucmdline-4.19.31-1.el7.centos.noarch.rpm
and
vdsm-hook-hugepages-4.19.31-1.el7.centos.noarch.rpm

It works, but it seems not to be correctly integrated with what the host
sees... An example:
On the hypervisor I set 90000 huge pages.
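
For reference, the pages come from a sysctl drop-in on the host (the file
content below is my reconstruction from that value; the path is the one I
reload later on):

# cat /etc/sysctl.d/10-huge-pages.conf
vm.nr_hugepages = 90000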

In 3 VMs I want to configure 34Gb of huge pages and a total memory of 64Gb,
so I set 17408 in their hugepages custom property (17408 pages x 2048 KiB =
34Gb).
Before starting any VM, on the hypervisor I see:

# cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   90000
HugePages_Free:    90000
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

When I start the first VM, the first anomaly appears. The output becomes:
# cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
HugePages_Total:  107408
HugePages_Free:    74640
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

So apparently it allocates 17408 additional huge pages (90000 + 17408 =
107408), without using any of the 90000 it already had free.
But perhaps this is actually a bug in what /proc shows rather than real usage
(see below)?
Also, it seems to have allocated 64Gb (107408 - 74640 = 32768 pages in use),
the entire memory size of the VM, and not only the 34Gb part...
I don't know if this is correct and expected... because eventually I may
want to increase the number of huge pages of the VM.
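
A way to cross-check what the qemu process really maps, instead of trusting
/proc/meminfo alone (a sketch; the pgrep pattern depends on the VM name):

# grep -i huge /proc/$(pgrep -f 'qemu.*vm1')/numa_maps

hugetlb-backed ranges should show up there flagged as huge.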

Inside the VM vm1 itself, the view seems correct:
[root@vm1 ~]# cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   17408
HugePages_Free:    17408
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Note that if I run again on the host:
# sysctl -p /etc/sysctl.d/10-huge-pages.conf

it seems to adjust itself, decreasing the total number of huge pages, which
in theory should not be possible...?
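
(If I understand the kernel behaviour correctly, this is actually expected:
rewriting vm.nr_hugepages, which boils down to

# echo 90000 > /proc/sys/vm/nr_hugepages

only releases pages that are currently free, so the pool shrinks back to
90000 while the 32768 pages mapped by the running VM stay allocated.)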

# cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   90000
HugePages_Free:    57232
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Again, 32768 huge pages remain allocated (90000 - 57232 = 32768), i.e. 64Gb,
the total memory of the VM.
I now start the second VM, vm2:

At the hypervisor level I now have:

# cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
HugePages_Total:  107408
HugePages_Free:    41872
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

So again an increment of 17408 huge pages in the Total line, and a new
allocation of 64Gb of huge pages (32768 + 32768 pages allocated in total).

BTW, the free output on the host now shows:
# free
              total        used        free      shared  buff/cache   available
Mem:      264016436   233105820    29194036      190460     1716580    29747272
Swap:       4194300           0     4194300

with "only" 29Gb free and if I try to run the third VM vm3 I get in fact
the error message:

"
Error while executing action:

vm3:

   - Cannot run VM. There is no host that satisfies current scheduling
   constraints. See below for details:
   - The host ovirt1 did not satisfy internal filter Memory because its
   available memory is too low (33948 MB) to run the VM.

"
Again I run on the host:
# sysctl -p /etc/sysctl.d/10-huge-pages.conf

The memory situation on the host becomes:

# cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   90000
HugePages_Free:    24464
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

# free
              total        used        free      shared  buff/cache   available
Mem:      264016436   197454740    64844616      190460     1717080    65398696
Swap:       4194300           0     4194300

And now I can boot the third VM, vm3, with the memory output on the host
becoming:

# cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
HugePages_Total:  107408
HugePages_Free:     9104
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
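
Checking the arithmetic: 107408 - 9104 = 98304 pages in use = 3 x 32768 =
3 x 64Gb, so once more each VM pinned its full memory size rather than the
17408 pages set in its custom property.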

# free
[output truncated in the archive]

Re: [ovirt-users] Huge pages in guest with newer oVirt versions

2017-09-18 Thread Arik Hadas
On Mon, Sep 18, 2017 at 10:50 PM, Martin Polednik 
wrote:

> The hugepages are no longer a hook, but part of the code base. They
> can be configured via the engine property `hugepages`, where the value of
> the property is the size of the pages in KiB (1048576 = 1G, 2048 = 2M).
>

Note that the question is about 4.1 and it doesn't seem like this change
was backported to the 4.1 branch, right?


> Specifying any other size not supported by the architecture (or just
> "1") will use the platform's default hugepage size.
>

I wonder if the end-to-end scenario of this, i.e., going through the engine,
ever worked.
This feature is broken on the master branch because of the switch to engine
XML and is supposed to be fixed by [1]. However, while testing that fix I
noticed that a check done by the scheduler [2] actually prevents us from
setting a value that is not supported by the host.
Is this part really important?
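
(For context, the hugepage sizes a host actually supports can be seen under
sysfs, which is effectively what that filter checks against:

# ls /sys/kernel/mm/hugepages/
hugepages-1048576kB  hugepages-2048kB

(typical output on x86_64).)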

[1] https://gerrit.ovirt.org/#/c/81860/
[2]
https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/scheduling/policyunits/HugePagesFilterPolicyUnit.java#L56


>
> On Mon, Sep 18, 2017 at 11:39 AM, Gianluca Cecchi
>  wrote:
> > Hello,
> > I would like to dig deeper into what was preliminarily tested and already
> > discussed here:
> >
> > http://lists.ovirt.org/pipermail/users/2017-April/081320.html
> >
> > I'm testing an oVirt node in 4.1.6-pre
> >
> > I don't find the vdsm hook for huge pages; doing a search I get these:
> >
> > vdsm-hook-ethtool-options.noarch : Allow setting custom ethtool options
> for
> > vdsm controlled nics
> > vdsm-hook-fcoe.noarch : Hook to enable FCoE support
> > vdsm-hook-openstacknet.noarch : OpenStack Network vNICs support for VDSM
> > vdsm-hook-vfio-mdev.noarch : Hook to enable mdev-capable devices.
> > vdsm-hook-vhostmd.noarch : VDSM hook set for interaction with vhostmd
> > vdsm-hook-vmfex-dev.noarch : VM-FEX vNIC support for VDSM
> >
> > Did anything change between 4.1.1 and 4.1.5/4.1.6?
> >
> > I'm running preliminary tests with an Oracle RDBMS and HammerDB on both a
> > physical server and a "big" VM hosted on another server with the same
> > hardware, configured with oVirt.
> >
> > Results are not bad, but I would like to see how having huge pages inside
> > the guest could change them.
> >
> > Just for reference:
> >
> > The 2 physical servers are blades, each one with:
> > 2 sockets, each with 14 cores and HT enabled, so 56 computational threads
> > in total
> > 256Gb RAM
> > huge pages enabled
> >
> > VM configured with this virtual hw on one of them:
> > 2 sockets, each with 6 cores and HT, so 24 computational threads in total
> > 64Gb RAM
> > no huge pages at the moment
> >
> > The Oracle SGA is 32Gb on both the physical RDBMS and the virtual one.
> >
> > Thanks for any insight on testing huge pages in the guest.
> >
> > Gianluca
> >


Re: [ovirt-users] Huge pages in guest with newer oVirt versions

2017-09-18 Thread Martin Polednik
The hugepages are no longer a hook, but part of the code base. They
can be configured via the engine property `hugepages`, where the value of
the property is the size of the pages in KiB (1048576 = 1G, 2048 = 2M).
Specifying any other size not supported by the architecture (or just
"1") will use the platform's default hugepage size.

On Mon, Sep 18, 2017 at 11:39 AM, Gianluca Cecchi
 wrote:
> Hello,
> I would like to dig deeper into what was preliminarily tested and already
> discussed here:
>
> http://lists.ovirt.org/pipermail/users/2017-April/081320.html
>
> I'm testing an oVirt node in 4.1.6-pre
>
> I don't find the vdsm hook for huge pages; doing a search I get these:
>
> vdsm-hook-ethtool-options.noarch : Allow setting custom ethtool options for
> vdsm controlled nics
> vdsm-hook-fcoe.noarch : Hook to enable FCoE support
> vdsm-hook-openstacknet.noarch : OpenStack Network vNICs support for VDSM
> vdsm-hook-vfio-mdev.noarch : Hook to enable mdev-capable devices.
> vdsm-hook-vhostmd.noarch : VDSM hook set for interaction with vhostmd
> vdsm-hook-vmfex-dev.noarch : VM-FEX vNIC support for VDSM
>
> Did anything change between 4.1.1 and 4.1.5/4.1.6?
>
> I'm running preliminary tests with an Oracle RDBMS and HammerDB on both a
> physical server and a "big" VM hosted on another server with the same
> hardware, configured with oVirt.
>
> Results are not bad, but I would like to see how having huge pages inside
> the guest could change them.
>
> Just for reference:
>
> The 2 physical servers are blades, each one with:
> 2 sockets, each with 14 cores and HT enabled, so 56 computational threads
> in total
> 256Gb RAM
> huge pages enabled
>
> VM configured with this virtual hw on one of them:
> 2 sockets, each with 6 cores and HT, so 24 computational threads in total
> 64Gb RAM
> no huge pages at the moment
>
> The Oracle SGA is 32Gb on both the physical RDBMS and the virtual one.
>
> Thanks for any insight on testing huge pages in the guest.
>
> Gianluca
>


[ovirt-users] Huge pages in guest with newer oVirt versions

2017-09-18 Thread Gianluca Cecchi
Hello,
I would like to dig deeper into what was preliminarily tested and already
discussed here:

http://lists.ovirt.org/pipermail/users/2017-April/081320.html

I'm testing an oVirt node in 4.1.6-pre

I don't find the vdsm hook for huge pages; doing a search I get these:

vdsm-hook-ethtool-options.noarch : Allow setting custom ethtool options for
vdsm controlled nics
vdsm-hook-fcoe.noarch : Hook to enable FCoE support
vdsm-hook-openstacknet.noarch : OpenStack Network vNICs support for VDSM
vdsm-hook-vfio-mdev.noarch : Hook to enable mdev-capable devices.
vdsm-hook-vhostmd.noarch : VDSM hook set for interaction with vhostmd
vdsm-hook-vmfex-dev.noarch : VM-FEX vNIC support for VDSM

Did anything change between 4.1.1 and 4.1.5/4.1.6?

I'm running preliminary tests with an Oracle RDBMS and HammerDB on both a
physical server and a "big" VM hosted on another server with the same
hardware, configured with oVirt.

Results are not bad, but I would like to see how having huge pages inside
the guest could change them.

Just for reference:

The 2 physical servers are blades, each one with:
2 sockets, each with 14 cores and HT enabled, so 56 computational threads
in total
256Gb RAM
huge pages enabled

VM configured with this virtual hw on one of them:
2 sockets, each with 6 cores and HT, so 24 computational threads in total
64Gb RAM
no huge pages at the moment

The Oracle SGA is 32Gb on both the physical RDBMS and the virtual one.
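
Inside the guest, the plan would be the usual Oracle hugepages setup; a
sketch assuming 2M pages and some headroom over the 32Gb SGA (the values
here are mine):

# cat /etc/sysctl.d/10-huge-pages.conf
vm.nr_hugepages = 17408

i.e. 17408 x 2M = 34Gb, plus the matching memlock limits for the oracle user.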

Thanks for any insight on testing huge pages in the guest.

Gianluca