[ovirt-users] ovirt 4.2 and hugepages

2018-06-16 Thread Gianluca Cecchi
Apart from configuring a high performance VM, which gives me no high
availability, can I allocate huge pages using
vdsm-hook-hugepages-4.19.31-1.el7.centos.noarch?

It seems that after updating an environment to 4.2 I'm not able to start a
VM configured with the custom property.

I get

The host x did not satisfy internal filter HugePages because there are
not enough free huge pages to run the VM

But on host I have

# cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
HugePages_Total:       6
HugePages_Free:        6
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
#

while the custom property is configured to allocate 8200 huge pages to the
guest.

The group configured in sysctl.d/10-hugepages.conf is

vm.hugetlb_shm_group = 36

Do I have to set anything in /etc/security/limits.d/ for the qemu user?
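
In case it helps, this is roughly what I was planning to try on the host
(just a sketch; I am assuming the pages have to be pre-allocated via
vm.nr_hugepages rather than by the hook itself, and the limits.d file name
and memlock entries are only an example, I am not sure they are needed):

# cat /etc/sysctl.d/10-hugepages.conf
vm.nr_hugepages = 8200
vm.hugetlb_shm_group = 36
# sysctl -p /etc/sysctl.d/10-hugepages.conf
# cat /etc/security/limits.d/90-qemu.conf
qemu hard memlock unlimited
qemu soft memlock unlimited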

Thanks in advance,

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A5HNSDH6GEAXQ4X4QFIY3VVWCYWSWCKX/


[ovirt-users] Gluster not synicng changes between nodes for engine

2018-06-16 Thread Hanson Turner

Hi Guys,

I've got 60-some-odd files for each of the nodes in the cluster that 
don't seem to be syncing.


Running a volume heal engine full reports success. Running volume heal 
engine info reports the same files, and they don't seem to be syncing.


Running a volume heal engine info split-brain shows nothing listed in 
split-brain.


Peers show as connected. Gluster volumes are started/up.
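
For reference, these are (roughly) the commands I have been running, with
the volume named engine:

# gluster volume heal engine full
# gluster volume heal engine info
# gluster volume heal engine info split-brain
# gluster peer status
# gluster volume status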

Hosted-engine --vm-status reports:
The hosted engine configuration has not been retrieved from shared 
storage. Please ensure that ovirt-ha-agent is running and the storage 
server is reachable.


This is leaving the cluster in a state where the engine is down and all
VMs are down...

Thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YPNWM222K2U7NX32CIME7KINWPCLBSCR/


[ovirt-users] ovirt upgrade 4.1 -> 4.2: host bricks down

2018-06-16 Thread Alex K
Hi all,

I have a 2-node oVirt cluster for testing, with a self-hosted engine on top
of Gluster.

The cluster was running 4.1. After the upgrade to 4.2, which generally
went smoothly, I am seeing that the bricks of one of the hosts (v1) are
detected as down, while Gluster is fine when checked from the command line
and all volumes are mounted.
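
For reference, this is roughly what I checked from the command line on both
hosts (the volume names vms and engine are the ones that appear in the log
below):

# gluster peer status
# gluster volume status vms
# gluster volume heal engine info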

Below are the errors that the engine logs:

2018-06-17 00:21:26,309+03 ERROR
[org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler2)
[98d7e79] Error while refreshing brick statuses for volume 'vms' of cluster
'test': null
2018-06-17 00:21:26,318+03 ERROR
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
(DefaultQuartzScheduler2) [98d7e79] Command
'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = v0.test-group.com,
VdsIdVDSCommandParametersBase:{hostId='d5a96118-ca49-411f-86cb-280c7f9c421f'})'
execution failed: null
2018-06-17 00:21:26,323+03 ERROR
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
(DefaultQuartzScheduler2) [98d7e79] Command
'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = v1.test-group.com,
VdsIdVDSCommandParametersBase:{hostId='12dfea4a-8142-484e-b912-0cbd5f281aba'})'
execution failed: null
2018-06-17 00:21:27,015+03 INFO
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
(DefaultQuartzScheduler9)
[426e7c3d] Failed to acquire lock and wait lock
'EngineLock:{exclusiveLocks='[0002-0002-0002-0002-017a=GLUSTER]',
sharedLocks=''}'
2018-06-17 00:21:27,926+03 ERROR
[org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler2)
[98d7e79] Error while refreshing brick statuses for volume 'engine' of
cluster 'test': null

Apart from this everything else is operating normally and VMs are running
on both hosts.

Any ideas on how to isolate this issue?

Thanx,
Alex
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/J3MQD5KRVIRHFCD3I54P5PHCQCCZ3ETG/


[ovirt-users] Re: RHEL 7.5 as an oVirt host - missing packages

2018-06-16 Thread jasonmicron
Sorry, I hate to be "that guy", but does no one here care that both the 
repositories and the documentation are wrong for installing oVirt on RHEL 7.5?  
Either the documentation is wrong, or the RHEL repos require updating.  

One or the other is true, and I don't know enough about the oVirt support 
structure to know where to submit the appropriate bug report.  I'm happy to do 
so, but I need input from the community here.

I went ahead and enabled the Satellite repo, but there is also an install 
problem with the gluster-server package - it never gets installed. I had to 
install it manually and then manually enable and start the daemon (glusterd).

To see this yourself:

1) Have an existing functional oVirt environment
2) Install RHEL 7.5 onto a new host from an ISO and subscribe it to Red Hat's 
repos
3) Configure the rhel-7-server-rpms, rhel-7-server-optional-rpms and 
rhel-7-server-extras-rpms repos
4) Try to add the new host to the oVirt cluster
5) See the failure
6) Configure the rhel-7-server-satellite-tools-6.3-rpms repo on the RHEL host
7) Try to add the host again
8) See it fail again
9) Install the gluster-server package onto the RHEL 7.5 host
10) Enable the glusterd service and start it (commands below)
11) Try YET AGAIN to add the RHEL 7.5 host to the cluster
12) Success
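
For reference, the manual workaround in steps 9 and 10 boils down to something 
like this on the RHEL 7.5 host (going from memory, the package there may 
actually be named glusterfs-server rather than gluster-server):

# yum install glusterfs-server
# systemctl enable glusterd
# systemctl start glusterd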

So I say again - either the documentation is wrong, or Red Hat's repos require 
an update (specifically the Extras repo).
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IH55UNV3YVFCT6YHSRTC6A4FAP2VAW6T/


[ovirt-users] Re: Moving from thin to preallocated storage domains

2018-06-16 Thread Idan Shaby
Hi Oliver,

If you change your array's LUNs to preallocated, this will probably
disable their discard support.
Therefore, after you switch the relevant domains to maintenance, you will
indeed need to disable their Discard After Delete flag.
Also, you will need to disable the Enable Discard flag for all of the disks
that reside on the relevant storage domains before you activate those
domains.
If you forget to disable one of these two options, you will get a warning
in the engine's log telling you that the LUNs' new properties were updated
in the DB but caused the storage domain to stop supporting the specific
discard property that you forgot to take care of.
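
By the way, if you want to verify the array-side change from a host, a quick
check is to look at the LUN's discard parameters (the device path here is
just an example placeholder):

# lsblk --discard /dev/mapper/<lun_wwid>

Non-zero DISC-GRAN and DISC-MAX values mean the LUN still reports discard
support.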

If you have any further questions, please don't hesitate to ask.


Regards,
Idan

On Thu, Jun 14, 2018 at 4:32 PM, Benny Zlotnik  wrote:

> Adding Idan
>
> On Wed, Jun 13, 2018 at 6:57 PM, Bruckner, Simone <
> simone.bruck...@fabasoft.com> wrote:
>
>> Hi,
>>
>>
>>
>>   I have defined thin LUNs on the array and presented them to the oVirt
>> hosts. I will change the LUN from thin to preallocated on the array (which
>> is transparent to the oVirt host).
>>
>>
>>
>> Besides removing “discard after delete” from the storage domain flags, is
>> there anything else I need to take care of on the oVirt side?
>>
>>
>>
>> All the best,
>>
>> Oliver
>>
>>
>>
>> *From:* Benny Zlotnik 
>> *Sent:* Wednesday, June 13, 2018 17:32
>> *To:* Albl, Oliver 
>> *Cc:* users@ovirt.org
>> *Subject:* [ovirt-users] Re: Moving from thin to preallocated storage
>> domains
>>
>>
>>
>> Hi,
>>
>>
>>
>> What do you mean by converting the LUN from thin to preallocated?
>>
>> oVirt creates LVs on top of the LUNs you provide
>>
>>
>>
>> On Wed, Jun 13, 2018 at 2:05 PM, Albl, Oliver 
>> wrote:
>>
>> Hi all,
>>
>>
>>
>>   I have to move some FC storage domains from thin to preallocated. I
>> would set the storage domain to maintenance, convert the LUN from thin to
>> preallocated on the array, remove “Discard After Delete” from the advanced
>> settings of the storage domain and activate it again. Is there anything else
>> I need to take care of?
>>
>>
>>
>> All the best,
>>
>> Oliver
>>
>>
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/VUEQY5DHUC633US5HZQO3N2IQ2TVCZPX/
>>
>>
>>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A42AAER6BIN7Z23YDH6EGXKENYLB653P/