[ovirt-users] Re: Problems after 4.3.8 update

2019-12-14 Thread hunter86_bg
 I don't know. I had the same issues when I migrated my gluster from v6.5 to 
6.6 (currently running v7.0).
Just get the newest file and rsync it to the rest of the bricks. It will solve 
the '?? ??' problem.
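
A rough sketch of that approach, with placeholder hostnames and brick paths
(adjust to your own volume layout, and keep a backup of anything you move):

stat /gluster_bricks/<vol>/<brick>/<path>/<file>                  # run on every host, pick the newest copy
mv /gluster_bricks/<vol>/<brick>/<path>/<file> /root/<file>.bak   # on the hosts holding the stale copy
rsync -av <good-host>:/gluster_bricks/<vol>/<brick>/<path>/<file> \
    /gluster_bricks/<vol>/<brick>/<path>/<file>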

Best Regards,
Strahil Nikolov
On Sunday, 15 December 2019 at 3:49:27 GMT+2, Jayme wrote:

 on that page it says to check open bugs and the migration bug you mention does 
not appear to be on the list.  Has it been resolved or is it just missing from 
this page?
On Sat, Dec 14, 2019 at 7:53 PM Strahil Nikolov  wrote:

Nah... this is not gonna fix your issue and is unnecessary.
Just compare the data from all bricks ... most probably the 'Last Updated' is
different and the gfid of the file is different.
Find the brick that has the most fresh data, and replace (move away as a backup
and rsync) the file from last good copy to the other bricks.
You can also run a 'full heal'.

Best Regards,
Strahil Nikolov
On Saturday, 14 December 2019 at 21:18:44 GMT+2, Jayme wrote:

 *Update* 
Situation has improved.  All VMs and engine are running.  I'm left right now 
with about 2 heal entries in each glusterfs storage volume that will not heal. 
In all cases each heal entry is related to an OVF_STORE image and the problem 
appears to be an issue with the gluster metadata for those ovf_store images.  
When I look at the files shown in gluster volume heal info output I'm seeing 
question marks on the meta files which indicates an attribute/gluster problem 
(even though there is no split-brain).  And I get input/output error when 
attempting to do anything with the files.
If I look at the files on each host in /gluster_bricks they all look fine.  I 
only see question marks on the meta files when I look at the files in /rhev mounts.
Does anyone know how I can correct the attributes on these OVF_STORE files?  
I've tried putting each host in maintenance and re-activating to re-mount 
gluster volumes.  I've also stopped and started all gluster volumes.  
I'm thinking I might be able to solve this by shutting down all VMs and placing 
all hosts in maintenance and safely restarting the entire cluster.. but that 
may not be necessary?  
On Fri, Dec 13, 2019 at 12:59 AM Jayme  wrote:

I believe I was able to get past this by stopping the engine volume then 
unmounting the glusterfs engine mount on all hosts and re-starting the volume. 
I was able to start hostedengine on host0.
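
A rough sketch of that sequence (volume and mount names are examples; the
cluster should be in global maintenance first):

hosted-engine --set-maintenance --mode=global
gluster volume stop engine
umount /rhev/data-center/mnt/glusterSD/<host>:_engine    # on every host
gluster volume start engine
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-start
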
I'm still facing a few problems:
1. I'm still seeing this issue in each host's logs:
Dec 13 00:57:54 orchard0 journal: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Failed 
scanning for OVF_STORE due to Command Volume.getInfo with args 
{'storagepoolID': '----', 'storagedomainID': 
'd70b171e-7488-4d52-8cad-bbc581dbf16e', 'volumeID': 
u'2632f423-ed89-43d9-93a9-36738420b866', 'imageID': 
u'd909dc74-5bbd-4e39-b9b5-755c167a6ee8'} failed:#012(code=201, message=Volume 
does not exist: (u'2632f423-ed89-43d9-93a9-36738420b866',))
Dec 13 00:57:54 orchard0 journal: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Unable 
to identify the OVF_STORE volume, falling back to initial vm.conf. Please 
ensure you already added your first data domain for regular VMs
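
To check what vdsm itself sees for that volume, something along these lines can
be run on a host (IDs copied from the log above except the storage pool ID,
which is a placeholder; verify the vdsm-client syntax on your version):

vdsm-client Volume getInfo storagepoolID=<storage-pool-ID> \
    storagedomainID=d70b171e-7488-4d52-8cad-bbc581dbf16e \
    imageID=d909dc74-5bbd-4e39-b9b5-755c167a6ee8 \
    volumeID=2632f423-ed89-43d9-93a9-36738420b866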


2. Most of my gluster volumes still have un-healed entries which I can't seem 
to heal.  I'm not sure what the answer is here.
On Fri, Dec 13, 2019 at 12:33 AM Jayme  wrote:

I was able to get the hosted engine started manually via Virsh after 
re-creating a missing symlink in /var/run/vdsm/storage -- I later shut it down 
and am still having the same problem with ha broker starting.  It appears that 
the problem *might* be with a corrupt ha metadata file, although gluster is not 
stating there is split brain on the engine volume
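
For reference, re-creating such a missing symlink can look roughly like this (a
sketch only; the storage domain and image UUIDs must match your own setup, e.g.
as seen in the vdsm logs or on the storage domain mount):

mkdir -p /var/run/vdsm/storage/<storage-domain-UUID>
ln -s /rhev/data-center/mnt/glusterSD/<host>:_engine/<storage-domain-UUID>/images/<image-UUID> \
    /var/run/vdsm/storage/<storage-domain-UUID>/<image-UUID>
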
I'm seeing this:
ls -al 
/rhev/data-center/mnt/glusterSD/orchard0\:_engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/
ls: cannot access 
/rhev/data-center/mnt/glusterSD/orchard0:_engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/hosted-engine.metadata:
 Input/output error
total 0
drwxr-xr-x. 2 vdsm kvm  67 Dec 13 00:30 .
drwxr-xr-x. 6 vdsm kvm  64 Aug  6  2018 ..
lrwxrwxrwx. 1 vdsm kvm 132 Dec 13 00:30 hosted-engine.lockspace -> 
/var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/03a8ee8e-91f5-4e06-904b-9ed92a9706eb/db2699ce-6349-4020-b52d-8ab11d01e26d
l?? ? ?    ?     ?            ? hosted-engine.metadata

Clearly showing some sort of issue with hosted-engine.metadata on the client 
mount.  
on each node in /gluster_bricks I see this:
# ls -al 
/gluster_bricks/engine/engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/
total 0
drwxr-xr-x. 2 vdsm kvm  67 Dec 13 00:31 .
drwxr-xr-x. 6 vdsm kvm  64 Aug  6  2018 ..
lrwxrwxrwx. 2 vdsm kvm 132 Dec 13 00:31 hosted-engine.lockspace -> 
/var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/03a8ee8e-91f5-4e06-904b-9ed92a9706eb/db2699ce-6349-4020-b52d-8ab11d01e26d

[ovirt-users] Re: Problems after 4.3.8 update

2019-12-14 Thread Jayme
on that page it says to check open bugs and the migration bug you mention
does not appear to be on the list.  Has it been resolved or is it just
missing from this page?

On Sat, Dec 14, 2019 at 7:53 PM Strahil Nikolov 
wrote:

> Nah... this is not gonna fix your issue and is unnecessary.
> Just compare the data from all bricks ... most probably the 'Last Updated'
> is different and the gfid of the file is different.
> Find the brick that has the most fresh data, and replace (move away as a
> backup and rsync) the file from last good copy to the other bricks.
> You can also run a 'full heal'.
>
> Best Regards,
> Strahil Nikolov
>
> On Saturday, 14 December 2019 at 21:18:44 GMT+2, Jayme <
> jay...@gmail.com> wrote:
>
>
> *Update*
>
> Situation has improved.  All VMs and engine are running.  I'm left right
> now with about 2 heal entries in each glusterfs storage volume that will
> not heal.
>
> In all cases each heal entry is related to an OVF_STORE image and the
> problem appears to be an issue with the gluster metadata for those
> ovf_store images.  When I look at the files shown in gluster volume heal
> info output I'm seeing question marks on the meta files which indicates an
> attribute/gluster problem (even though there is no split-brain).  And I get
> input/output error when attempting to do anything with the files.
>
> If I look at the files on each host in /gluster_bricks they all look
> fine.  I only see question marks on the meta files when I look at the files
> in /rhev mounts.
>
> Does anyone know how I can correct the attributes on these OVF_STORE
> files?  I've tried putting each host in maintenance and re-activating to
> re-mount gluster volumes.  I've also stopped and started all gluster
> volumes.
>
> I'm thinking I might be able to solve this by shutting down all VMs and
> placing all hosts in maintenance and safely restarting the entire cluster..
> but that may not be necessary?
>
> On Fri, Dec 13, 2019 at 12:59 AM Jayme  wrote:
>
> I believe I was able to get past this by stopping the engine volume then
> unmounting the glusterfs engine mount on all hosts and re-starting the
> volume. I was able to start hostedengine on host0.
>
> I'm still facing a few problems:
>
> 1. I'm still seeing this issue in each host's logs:
>
> Dec 13 00:57:54 orchard0 journal: ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
> Failed scanning for OVF_STORE due to Command Volume.getInfo with args
> {'storagepoolID': '----',
> 'storagedomainID': 'd70b171e-7488-4d52-8cad-bbc581dbf16e', 'volumeID':
> u'2632f423-ed89-43d9-93a9-36738420b866', 'imageID':
> u'd909dc74-5bbd-4e39-b9b5-755c167a6ee8'} failed:#012(code=201,
> message=Volume does not exist: (u'2632f423-ed89-43d9-93a9-36738420b866',))
> Dec 13 00:57:54 orchard0 journal: ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
> Unable to identify the OVF_STORE volume, falling back to initial vm.conf.
> Please ensure you already added your first data domain for regular VMs
>
>
> 2. Most of my gluster volumes still have un-healed entries which I can't
> seem to heal.  I'm not sure what the answer is here.
>
> On Fri, Dec 13, 2019 at 12:33 AM Jayme  wrote:
>
> I was able to get the hosted engine started manually via Virsh after
> re-creating a missing symlink in /var/run/vdsm/storage -- I later shut it
> down and am still having the same problem with ha broker starting.  It
> appears that the problem *might* be with a corrupt ha metadata file,
> although gluster is not stating there is split brain on the engine volume
>
> I'm seeing this:
>
> ls -al
> /rhev/data-center/mnt/glusterSD/orchard0\:_engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/
> ls: cannot access
> /rhev/data-center/mnt/glusterSD/orchard0:_engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/hosted-engine.metadata:
> Input/output error
> total 0
> drwxr-xr-x. 2 vdsm kvm  67 Dec 13 00:30 .
> drwxr-xr-x. 6 vdsm kvm  64 Aug  6  2018 ..
> lrwxrwxrwx. 1 vdsm kvm 132 Dec 13 00:30 hosted-engine.lockspace ->
> /var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/03a8ee8e-91f5-4e06-904b-9ed92a9706eb/db2699ce-6349-4020-b52d-8ab11d01e26d
> l?? ? ?? ?? hosted-engine.metadata
>
> Clearly showing some sort of issue with hosted-engine.metadata on the
> client mount.
>
> on each node in /gluster_bricks I see this:
>
> # ls -al
> /gluster_bricks/engine/engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/
> total 0
> drwxr-xr-x. 2 vdsm kvm  67 Dec 13 00:31 .
> drwxr-xr-x. 6 vdsm kvm  64 Aug  6  2018 ..
> lrwxrwxrwx. 2 vdsm kvm 132 Dec 13 00:31 hosted-engine.lockspace ->
> /var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/03a8ee8e-91f5-4e06-904b-9ed92a9706eb/db2699ce-6349-4020-b52d-8ab11d01e26d
> lrwxrwxrwx. 2 vdsm kvm 132 Dec 12 16:30 hosted-engine.metadata ->
> 

[ovirt-users] Re: Problems after 4.3.8 update

2019-12-14 Thread Strahil Nikolov
Nah... this is not gonna fix your issue and is unnecessary.
Just compare the data from all bricks ... most probably the 'Last Updated' is
different and the gfid of the file is different.
Find the brick that has the most fresh data, and replace (move away as a backup
and rsync) the file from last good copy to the other bricks.
You can also run a 'full heal'.
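
A minimal sketch of that comparison (volume name and file path are
placeholders):

getfattr -n trusted.gfid -e hex /gluster_bricks/<vol>/<brick>/<path>/<file>   # run on every host and compare the gfid
stat /gluster_bricks/<vol>/<brick>/<path>/<file>                              # compare the modification times
gluster volume heal <vol> full                                                # trigger a full heal afterwards
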
Best Regards,
Strahil Nikolov
On Saturday, 14 December 2019 at 21:18:44 GMT+2, Jayme wrote:

 *Update* 
Situation has improved.  All VMs and engine are running.  I'm left right now 
with about 2 heal entries in each glusterfs storage volume that will not heal. 
In all cases each heal entry is related to an OVF_STORE image and the problem 
appears to be an issue with the gluster metadata for those ovf_store images.  
When I look at the files shown in gluster volume heal info output I'm seeing 
question marks on the meta files which indicates an attribute/gluster problem 
(even though there is no split-brain).  And I get input/output error when 
attempting to do anything with the files.
If I look at the files on each host in /gluster_bricks they all look fine.  I 
only see question marks on the meta files when I look at the files in /rhev mounts.
Does anyone know how I can correct the attributes on these OVF_STORE files?  
I've tried putting each host in maintenance and re-activating to re-mount 
gluster volumes.  I've also stopped and started all gluster volumes.  
I'm thinking I might be able to solve this by shutting down all VMs and placing 
all hosts in maintenance and safely restarting the entire cluster.. but that 
may not be necessary?  
On Fri, Dec 13, 2019 at 12:59 AM Jayme  wrote:

I believe I was able to get past this by stopping the engine volume then 
unmounting the glusterfs engine mount on all hosts and re-starting the volume. 
I was able to start hostedengine on host0.
I'm still facing a few problems:
1. I'm still seeing this issue in each host's logs:
Dec 13 00:57:54 orchard0 journal: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Failed 
scanning for OVF_STORE due to Command Volume.getInfo with args 
{'storagepoolID': '----', 'storagedomainID': 
'd70b171e-7488-4d52-8cad-bbc581dbf16e', 'volumeID': 
u'2632f423-ed89-43d9-93a9-36738420b866', 'imageID': 
u'd909dc74-5bbd-4e39-b9b5-755c167a6ee8'} failed:#012(code=201, message=Volume 
does not exist: (u'2632f423-ed89-43d9-93a9-36738420b866',))
Dec 13 00:57:54 orchard0 journal: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Unable 
to identify the OVF_STORE volume, falling back to initial vm.conf. Please 
ensure you already added your first data domain for regular VMs


2. Most of my gluster volumes still have un-healed entries which I can't seem 
to heal.  I'm not sure what the answer is here.
On Fri, Dec 13, 2019 at 12:33 AM Jayme  wrote:

I was able to get the hosted engine started manually via Virsh after 
re-creating a missing symlink in /var/run/vdsm/storage -- I later shut it down 
and am still having the same problem with ha broker starting.  It appears that 
the problem *might* be with a corrupt ha metadata file, although gluster is not 
stating there is split brain on the engine volume
I'm seeing this:
ls -al 
/rhev/data-center/mnt/glusterSD/orchard0\:_engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/
ls: cannot access 
/rhev/data-center/mnt/glusterSD/orchard0:_engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/hosted-engine.metadata:
 Input/output error
total 0
drwxr-xr-x. 2 vdsm kvm  67 Dec 13 00:30 .
drwxr-xr-x. 6 vdsm kvm  64 Aug  6  2018 ..
lrwxrwxrwx. 1 vdsm kvm 132 Dec 13 00:30 hosted-engine.lockspace -> 
/var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/03a8ee8e-91f5-4e06-904b-9ed92a9706eb/db2699ce-6349-4020-b52d-8ab11d01e26d
l?? ? ?    ?     ?            ? hosted-engine.metadata

Clearly showing some sort of issue with hosted-engine.metadata on the client 
mount.  
on each node in /gluster_bricks I see this:
# ls -al 
/gluster_bricks/engine/engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/
total 0
drwxr-xr-x. 2 vdsm kvm  67 Dec 13 00:31 .
drwxr-xr-x. 6 vdsm kvm  64 Aug  6  2018 ..
lrwxrwxrwx. 2 vdsm kvm 132 Dec 13 00:31 hosted-engine.lockspace -> 
/var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/03a8ee8e-91f5-4e06-904b-9ed92a9706eb/db2699ce-6349-4020-b52d-8ab11d01e26d
lrwxrwxrwx. 2 vdsm kvm 132 Dec 12 16:30 hosted-engine.metadata -> 
/var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/66bf05fa-bf50-45ec-98d8-d2040317/a2250415-5ff0-42ab-8071-cd9d67c3048c

 ls -al 
/var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/66bf05fa-bf50-45ec-98d8-d2040317/a2250415-5ff0-42ab-8071-cd9d67c3048c
-rw-rw. 1 vdsm kvm 1073741824 Dec 12 16:48 
/var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/66bf05fa-bf50-45ec-98d8-d2040317/a2250415-5ff0-42ab-8071-cd9d67c3048c


I'm not sure how to proceed at 

[ovirt-users] Re: Libgfapi considerations

2019-12-14 Thread Strahil Nikolov
According to the 'GlusterFS Storage Domain' feature page, libgfapi is not the
default because it is incompatible with Live Storage Migration.
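
For reference, a minimal sketch of how the flag is usually inspected and
enabled on the engine host (option name as on the feature page; verify it for
your version before changing anything):

engine-config -g LibgfApiSupported
engine-config -s LibgfApiSupported=true --cver=4.3
systemctl restart ovirt-engine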

Best Regards,
Strahil Nikolov

On Saturday, 14 December 2019 at 17:06:32 GMT+2, Jayme wrote:

 Are there currently any known issues with using libgfapi in the latest stable 
version of ovirt in hci deployments?  I have recently enabled it and have 
noticed a significant (over 4x) increase in io performance on my vms. I’m 
concerned however since it does not seem to be an ovirt default setting.  Is 
libgfapi considered safe and stable to use in ovirt 4.3 hci?


[ovirt-users] NUMA nodes

2019-12-14 Thread suporte
Hi, 

My host only shows one NUMA node. Does that mean I cannot set up a high 
performance VM? 
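
To confirm what the host exposes, something like this can be checked on the
host itself:

numactl --hardware     # lists the NUMA nodes and their memory
lscpu | grep -i numa   # NUMA node count and CPU-to-node mapping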

Thanks 

-- 

Jose Ferradeira 
http://www.logicworks.pt 


[ovirt-users] Re: Problems after 4.3.8 update

2019-12-14 Thread Jayme
*Update*

Situation has improved.  All VMs and engine are running.  I'm left right
now with about 2 heal entries in each glusterfs storage volume that will
not heal.

In all cases each heal entry is related to an OVF_STORE image and the
problem appears to be an issue with the gluster metadata for those
ovf_store images.  When I look at the files shown in gluster volume heal
info output I'm seeing question marks on the meta files which indicates an
attribute/gluster problem (even though there is no split-brain).  And I get
input/output error when attempting to do anything with the files.

If I look at the files on each host in /gluster_bricks they all look fine.
I only see question marks on the meta files when I look at the files in /rhev
mounts.

Does anyone know how I can correct the attributes on these OVF_STORE
files?  I've tried putting each host in maintenance and re-activating to
re-mount gluster volumes.  I've also stopped and started all gluster
volumes.
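
One way to narrow this down is to compare the gluster metadata of those files
directly on the bricks (a sketch only; volume name and paths are placeholders):

gluster volume heal <vol> info                # list the entries that will not heal
gluster volume heal <vol> info split-brain    # confirm gluster itself sees no split-brain
getfattr -d -m . -e hex /gluster_bricks/<vol>/<brick>/<path-to-OVF_STORE-file>   # compare trusted.gfid and trusted.afr.* on every host
gluster volume heal <vol> full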

I'm thinking I might be able to solve this by shutting down all VMs and
placing all hosts in maintenance and safely restarting the entire cluster..
but that may not be necessary?

On Fri, Dec 13, 2019 at 12:59 AM Jayme  wrote:

> I believe I was able to get past this by stopping the engine volume then
> unmounting the glusterfs engine mount on all hosts and re-starting the
> volume. I was able to start hostedengine on host0.
>
> I'm still facing a few problems:
>
> 1. I'm still seeing this issue in each host's logs:
>
> Dec 13 00:57:54 orchard0 journal: ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
> Failed scanning for OVF_STORE due to Command Volume.getInfo with args
> {'storagepoolID': '----',
> 'storagedomainID': 'd70b171e-7488-4d52-8cad-bbc581dbf16e', 'volumeID':
> u'2632f423-ed89-43d9-93a9-36738420b866', 'imageID':
> u'd909dc74-5bbd-4e39-b9b5-755c167a6ee8'} failed:#012(code=201,
> message=Volume does not exist: (u'2632f423-ed89-43d9-93a9-36738420b866',))
> Dec 13 00:57:54 orchard0 journal: ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
> Unable to identify the OVF_STORE volume, falling back to initial vm.conf.
> Please ensure you already added your first data domain for regular VMs
>
>
> 2. Most of my gluster volumes still have un-healed entries which I can't
> seem to heal.  I'm not sure what the answer is here.
>
> On Fri, Dec 13, 2019 at 12:33 AM Jayme  wrote:
>
>> I was able to get the hosted engine started manually via Virsh after
>> re-creating a missing symlink in /var/run/vdsm/storage -- I later shut it
>> down and am still having the same problem with ha broker starting.  It
>> appears that the problem *might* be with a corrupt ha metadata file,
>> although gluster is not stating there is split brain on the engine volume
>>
>> I'm seeing this:
>>
>> ls -al
>> /rhev/data-center/mnt/glusterSD/orchard0\:_engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/
>> ls: cannot access
>> /rhev/data-center/mnt/glusterSD/orchard0:_engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/hosted-engine.metadata:
>> Input/output error
>> total 0
>> drwxr-xr-x. 2 vdsm kvm  67 Dec 13 00:30 .
>> drwxr-xr-x. 6 vdsm kvm  64 Aug  6  2018 ..
>> lrwxrwxrwx. 1 vdsm kvm 132 Dec 13 00:30 hosted-engine.lockspace ->
>> /var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/03a8ee8e-91f5-4e06-904b-9ed92a9706eb/db2699ce-6349-4020-b52d-8ab11d01e26d
>> l?? ? ?? ?? hosted-engine.metadata
>>
>> Clearly showing some sort of issue with hosted-engine.metadata on the
>> client mount.
>>
>> on each node in /gluster_bricks I see this:
>>
>> # ls -al
>> /gluster_bricks/engine/engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/
>> total 0
>> drwxr-xr-x. 2 vdsm kvm  67 Dec 13 00:31 .
>> drwxr-xr-x. 6 vdsm kvm  64 Aug  6  2018 ..
>> lrwxrwxrwx. 2 vdsm kvm 132 Dec 13 00:31 hosted-engine.lockspace ->
>> /var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/03a8ee8e-91f5-4e06-904b-9ed92a9706eb/db2699ce-6349-4020-b52d-8ab11d01e26d
>> lrwxrwxrwx. 2 vdsm kvm 132 Dec 12 16:30 hosted-engine.metadata ->
>> /var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/66bf05fa-bf50-45ec-98d8-d2040317/a2250415-5ff0-42ab-8071-cd9d67c3048c
>>
>>  ls -al
>> /var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/66bf05fa-bf50-45ec-98d8-d2040317/a2250415-5ff0-42ab-8071-cd9d67c3048c
>> -rw-rw. 1 vdsm kvm 1073741824 Dec 12 16:48
>> /var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/66bf05fa-bf50-45ec-98d8-d2040317/a2250415-5ff0-42ab-8071-cd9d67c3048c
>>
>>
>> I'm not sure how to proceed at this point.  Do I have data corruption, a
>> gluster split-brain issue or something else?  Maybe I just need to
>> re-generate metadata for the hosted engine?
>>
>> On Thu, Dec 12, 2019 at 6:36 PM Jayme  wrote:
>>
>>> I'm running a three server HCI.  Up and running on 4.3.7 with no
>>> problems.  Today I updated to 4.3.8.  Engine 

[ovirt-users] Re: Still having NFS issues. (Permissions)

2019-12-14 Thread Robert Webb
So I did some testing and removed the "all_squash,anonuid=36,anongid=36", 
set all the image directories to 0755, added libvirt to the kvm group, then 
rebooted.

After doing so, sanlock had no access to the directories and neither did 
libvirt. Leaving everything else alone, I changed the perms to 0760, 
sanlock no longer complained, but libvirtd still complained about file 
permissions.

Next test was to change the file perms to 770 and I got the same error with 
libvirtd.
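
For comparison, the commonly documented oVirt NFS layout looks roughly like
this (a sketch, assuming the vdsm uid/gid is 36:36 and /exports/data is just an
example path):

# /etc/exports on the NFS server
/exports/data *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

chown -R 36:36 /exports/data
chmod 0755 /exports/data
exportfs -ra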

I have not done any linux work for quite a while so please correct me, but if I 
do a "ps aux | grep libvirt" I see the libvirtd process running as root. Does 
the libvirt user get invoked only when a script is running? If the daemon is 
only running as root, then would it not be trying to access storage as root at 
this point?

This is my ps list:

root  2898  0.1  0.0 1553860 28580 ?   Ssl  14:45   0:01 
/usr/sbin/libvirtd -listen


Here is what I see in the audit log:

type=VIRT_CONTROL msg=audit(1576336098.295:451): pid=2898 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm 
op=start reason=booted vm="HostedEngine" 
uuid=70679ece-fbe9-4402-b9b0-34bbee9b6e69 vm-pid=-1 exe="/usr/sbin/libvirtd" 
hostname=? addr=? terminal=? res=failed




[ovirt-users] Re: Still having NFS issues. (Permissions)

2019-12-14 Thread Robert Webb
It also appears that sanlock needs AT LEAST rw permissions on the group as rx 
breaks it per logs.


[ovirt-users] Libgfapi considerations

2019-12-14 Thread Jayme
Are there currently any known issues with using libgfapi in the latest
stable version of ovirt in hci deployments?  I have recently enabled it and
have noticed a significant (over 4x) increase in io performance on my vms.
I’m concerned however since it does not seem to be an ovirt default
setting.  Is libgfapi considered safe and stable to use in ovirt 4.3 hci?


[ovirt-users] Re: Ovirt OVN help needed

2019-12-14 Thread Strahil Nikolov
 Hi Dominik,
yes I was looking for those settings.
I have added the external provider again, but I guess the mess is even bigger
as I made some stupid decisions (like removing 2 port groups :) without knowing
what I'm doing). Sadly I can't remove all packages on the engine and hosts and
reinstall them from scratch.
Pip fails to install the openstacksdk (CentOS 7 is not great for such tasks) on
the engine and my lack of knowledge in OVN makes it even more difficult.
So the symptoms are that 2 machines can communicate with each other only if
they are on the same host, while on separate hosts no communication is happening.
How I created the network via UI:
1. Networks - new
2. Fill in the name
3. Create on external provider
4. Network Port security -> disabled (even undefined does not work)
5. Connect to physical network -> ovirtmgmt

I would be happy to learn more about OVN and thus I would like to make it work.
Here is some info from the engine:
[root@engine ~]# ovn-nbctl show
switch 1288ed26-471c-4bc2-8a7d-4531f306f44c (ovirt-pxelan-2a88b2e0-d04b-4196-ad50-074501e4ed08)
    port c1eba112-5eed-4c04-b25c-d3dcfb934546
        addresses: ["56:6f:5a:65:00:06"]
    port 8b52ab60-f474-4d51-b258-cb2e0a53c34a
        type: localnet
        addresses: ["unknown"]
    port b2753040-881b-487a-92a1-9721da749be4
        addresses: ["56:6f:5a:65:00:09"]
[root@engine ~]# ovn-sbctl show
Chassis "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3"
    hostname: "ovirt3.localdomain"
    Encap geneve
        ip: "192.168.1.41"
        options: {csum="true"}
Chassis "baa0199e-d1a4-484c-af13-a41bcad19dbc"
    hostname: "ovirt1.localdomain"
    Encap geneve
        ip: "192.168.1.90"
        options: {csum="true"}
Chassis "25cc77b3-046f-45c5-af0c-ffb2f77d73f1"
    hostname: "ovirt2.localdomain"
    Encap geneve
        ip: "192.168.1.64"
        options: {csum="true"}
    Port_Binding "b2753040-881b-487a-92a1-9721da749be4"
    Port_Binding "c1eba112-5eed-4c04-b25c-d3dcfb934546"

Is it possible to remove the vNICs and the Virtual Network, and recreate the OVN
DB to start over? I guess the other option is to create a VM that can be used to
install the python openstacksdk and make the changes via the python script from
your previous e-mail.

Best Regards,
Strahil Nikolov

On Friday, 13 December 2019 at 10:11:51 GMT+2, Dominik Holler wrote:

On Fri, Dec 13, 2019 at 5:51 AM Strahil  wrote:


Hi Dominik, All,

I've checked 
'https://lists.ovirt.org/archives/list/users@ovirt.org/thread/W6U4XJHNMYMD3WIXDCPGOXLW6DFMCYIM/'
 and the user managed to clear up and start over.

I have removed the ovn-external-provider  from UI, but I forgot to copy the 
data from the fields.

Do you know any refference guide (or any tips & tricks) for adding OVN ?


The ovirt-provider-ovn entity can be added to oVirt Engine as a new provider with:
Type: External Network Provider
Network Plugin: oVirt Network Provider for OVN
Provider URL: https://YOUR_ENGINE_FQDN:9696
Username: admin@internal
Password: the admin@internal password
Host Name: YOUR_ENGINE_FQDN
API Port: 35357
API Version: v2.0

Is this the information you need? 

Thanks in advance.


Best Regards,
Strahil Nikolov
On Dec 12, 2019 20:49, Strahil  wrote:


Hi Dominik,

Thanks for the reply.

Sadly the openstack module is missing on the engine and I have to figure it out.

Can't I just undeploy the ovn and then redeploy it back ?

Best Regards,
Strahil Nikolov
On Dec 12, 2019 09:32, Dominik Holler  wrote:

The cleanest way to clean up is to remove all entities on the OpenStack Network 
API on ovirt-provider-ovn, e.g. by something like
https://gist.github.com/dominikholler/19bcdc5f14f42ab5f069086fd2ff5e37#file-list_security_groups-py-L25
This should work, if not, please report a bug.
To bypass the ovirt-provider-ovn, which is not recommended and might end in an 
inconsistent state, you could use ovn-nbctl.
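
A minimal sketch of that approach (run where ovn-nbctl can reach the OVN
northbound DB; the UUIDs are placeholders taken from 'ovn-nbctl show'):

ovn-nbctl show                    # list logical switches and their ports
ovn-nbctl lsp-del <port-uuid>     # remove a single logical switch port
ovn-nbctl ls-del <switch-uuid>    # remove a whole logical switch including its ports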


On Thu, Dec 12, 2019 at 3:33 AM Strahil Nikolov  wrote:

Hi Community,
can someone hint me how to get rid of some ports? I just want to 'reset' my ovn 
setup.
Here is what I have so far:
[root@ovirt1 openvswitch]# ovs-vsctl list interface  
_uuid   : be89c214-10e4-4a97-a9eb-1b82bc433a24
admin_state : up
bfd : {}
bfd_status  : {}
cfm_fault   : []
cfm_fault_status    : []
cfm_flap_count  : []
cfm_health  : []
cfm_mpid    : []
cfm_remote_mpids    : []
cfm_remote_opstate  : []
duplex  : []
error   : []
external_ids    : {}
ifindex : 35
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current    : []
link_resets : 0
link_speed  : []
link_state  : up
lldp    : {}
mac : []
mac_in_use  : "7a:7d:1d:a7:43:1d"
mtu : []
mtu_request : []
name    : "ovn-25cc77-0"
ofport  : 6
ofport_request  : []
options : {csum="true", key=flow, remote_ip="192.168.1.64"}
other_config    : {}