Re: [ovirt-users] [Gluster-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements

2017-07-25 Thread yayo (j)
2017-07-25 11:31 GMT+02:00 Sahina Bose :

>
>> Other errors on unsync gluster elements still remain... This is a
>> production env, so, there is any chance to subscribe to RH support?
>>
>
> The unsynced entries - did you check for disconnect messages in the mount
> log as suggested by Ravi?
>
>
Hi, I have provided this (check past mails): *tail -f
/var/log/glusterfs/rhev-data-center-mnt-glusterSD-dvirtgluster\:engine.log*


Is that enough?
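For reference, the disconnect check Ravi suggested can be sketched like this (the log path is the one from the earlier mail; adjust it to your setup, and the grep patterns are only common examples of client-side disconnect messages):

```shell
# Scan the fuse mount log for disconnect-related messages.
LOG='/var/log/glusterfs/rhev-data-center-mnt-glusterSD-dvirtgluster:engine.log'
# Guard against a missing/unreadable log so the pipeline fails gracefully.
[ -r "$LOG" ] && grep -iE 'disconnect|connection refused|ping timer' "$LOG" | tail -n 20
```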

Thank you
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements

2017-07-25 Thread Sahina Bose
On Tue, Jul 25, 2017 at 1:45 PM, yayo (j)  wrote:

> 2017-07-25 7:42 GMT+02:00 Kasturi Narra :
>
>> These errors are because glusternw is not assigned to the correct
>> interface. Once you attach it, these errors should go away.  This has
>> nothing to do with the problem you are seeing.
>>
>
> Hi,
>
> Are you talking about errors like these?
>
> 2017-07-24 15:54:02,209+02 WARN  [org.ovirt.engine.core.vdsbro
> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
> Could not associate brick 'gdnode01:/gluster/engine/brick' of volume
> 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster
> network found in cluster '0002-0002-0002-0002-017a'
>
>
> How do I assign "glusternw" to the correct interface?
>

https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
"Storage network" section explains this. Please make sure that gdnode01 is
resolvable from engine.
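A quick way to verify that resolvability (a sketch; `gdnode01` is the brick host name from the warning above):

```shell
# Check that the brick host name resolves from the engine machine.
getent hosts gdnode01 || echo "gdnode01 does not resolve from this host"
```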



>
> Other errors on unsync gluster elements still remain... This is a
> production env, so, there is any chance to subscribe to RH support?
>

The unsynced entries - did you check for disconnect messages in the mount
log as suggested by Ravi?
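For completeness, the unsynced entries can also be listed straight from the gluster CLI (a sketch; the volume name `engine` is assumed from this thread):

```shell
# List files pending heal on the engine volume, and a per-brick count.
gluster volume heal engine info
gluster volume heal engine statistics heal-count
```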

For Red Hat support, the best option is to contact your local Red Hat
representative.


> Thank you
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>


Re: [ovirt-users] [Gluster-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements

2017-07-25 Thread Sahina Bose
On Tue, Jul 25, 2017 at 11:12 AM, Kasturi Narra  wrote:

> These errors are because glusternw is not assigned to the correct
> interface. Once you attach it, these errors should go away.  This has
> nothing to do with the problem you are seeing.
>
> Sahina, any idea about the engine not showing the correct volume info?
>

Please provide the vdsm.log (containing the gluster volume info) and
engine.log
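The usual default locations for those logs, assuming a standard install (paths may differ on your deployment):

```shell
# vdsm.log lives on each hypervisor host; engine.log on the engine VM.
ls -l /var/log/vdsm/vdsm.log            # on the host
ls -l /var/log/ovirt-engine/engine.log  # on the engine
```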


> On Mon, Jul 24, 2017 at 7:30 PM, yayo (j)  wrote:
>
>> Hi,
>>
>> UI refreshed, but the problem still remains ...
>>
>> No specific error; I have only these errors, but I've read that there is
>> no problem if I see this kind of error:
>>
>>
>> 2017-07-24 15:53:59,823+02 INFO  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler2)
>> [b7590c4] START, GlusterServersListVDSCommand(HostName =
>> node01.localdomain.local, VdsIdVDSCommandParametersBase:{runAsync='true',
>> hostId='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 29a62417
>> 2017-07-24 15:54:01,066+02 INFO  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler2)
>> [b7590c4] FINISH, GlusterServersListVDSCommand, return: 
>> [10.10.20.80/24:CONNECTED,
>> node02.localdomain.local:CONNECTED, gdnode04:CONNECTED], log id: 29a62417
>> 2017-07-24 15:54:01,076+02 INFO  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler2)
>> [b7590c4] START, GlusterVolumesListVDSCommand(HostName =
>> node01.localdomain.local, GlusterVolumesListVDSParameters:{runAsync=
>> 'true', hostId='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 7fce25d3
>> 2017-07-24 15:54:02,209+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode01:/gluster/engine/brick' of
>> volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,212+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode02:/gluster/engine/brick' of
>> volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,215+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode04:/gluster/engine/brick' of
>> volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,218+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode01:/gluster/data/brick' of
>> volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,221+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode02:/gluster/data/brick' of
>> volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,224+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode04:/gluster/data/brick' of
>> volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,224+02 INFO  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler2)
>> [b7590c4] FINISH, GlusterVolumesListVDSCommand, return: {d19c19e3-910d
>> -437b-8ba7-4f2a23d17515=org.ovirt.engine.core.
>> common.businessentities.gluster.GlusterVolumeEntity@fdc91062, c7a5dfc9
>> -3e72-4ea1-843e-c8275d4a7c2d=org.ovirt.engine.core.c
>> ommon.businessentities.gluster.GlusterVolumeEntity@999a6f23}, log id: 7
>> fce25d3
>>
>>
>> Thank you
>>
>>
>> 2017-07-24 8:12 GMT+02:00 Kasturi Narra :
>>
>>> Hi,
>>>
>>>    Regarding the UI showing incorrect information about the engine and
>>> data volumes, can you please refresh the UI and see if the issue persists?
>>> Also, are there any errors in the engine.log files?
>>>
>>> Thanks
>>> kasturi
>>>
>>> On Sat, Jul 22, 2017 at 11:43 AM, Ravishankar N 
>>> wrote:
>>>

 On 07/21/2017 11:41 PM, yayo (j) wrote:

 Hi,

 Sorry for following up again, but, checking the oVirt interface, I've found
 that oVirt reports the "engine" volume as an "arbiter" configuration and the
 "data" volume as a fully replicated volume.