That warning doesn't affect the monitoring of brick state.
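It only means that no logical network in cluster
'59c10db3-0324-0320-0120-000000000339' has the gluster role assigned. If you
want to double-check, one way (the engine FQDN and password below are
placeholders) is to list the cluster's networks over the REST API and look for
"gluster" under the usages of each network:

curl -sk -u 'admin@internal:PASSWORD' \
  'https://engine.example.com/ovirt-engine/api/clusters/59c10db3-0324-0320-0120-000000000339/networks'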
Are there any errors in vdsm.log, or errors in engine.log of the form "Error
while refreshing brick statuses for volume"?
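For example, assuming the default log locations, something like this on the
engine host and on the gluster hosts, respectively, should surface them:

grep 'Error while refreshing brick statuses' /var/log/ovirt-engine/engine.log
grep -i error /var/log/vdsm/vdsm.log | tail -n 50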

On Thu, May 10, 2018 at 2:33 PM, Demeter Tibor <[email protected]> wrote:

> Hi,
>
> Thank you for your fast reply :)
>
>
> 2018-05-10 11:01:51,574+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 39adbbb8
> 2018-05-10 11:01:51,768+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 39adbbb8
> 2018-05-10 11:01:51,788+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 738a7261
> 2018-05-10 11:01:51,892+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
> 2018-05-10 11:01:51,898+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
> 2018-05-10 11:01:51,905+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
> 2018-05-10 11:01:51,911+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
> 2018-05-10 11:01:51,917+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
> 2018-05-10 11:01:51,924+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
> 2018-05-10 11:01:51,925+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 738a7261
>
>
> This is happening continuously.
>
> Thanks!
> Tibor
>
>
>
> ----- On May 10, 2018, at 10:56, Sahina Bose <[email protected]> wrote:
>
> Could you check engine.log for errors related to getting
> GlusterVolumeAdvancedDetails?
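> For example, assuming the default log location on the engine host:
>
> grep 'GlusterVolumeAdvancedDetails' /var/log/ovirt-engine/engine.log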
>
> On Thu, May 10, 2018 at 2:02 PM, Demeter Tibor <[email protected]>
> wrote:
>
>> Dear oVirt Users,
>> I've followed the self-hosted-engine upgrade documentation and upgraded my
>> 4.1 system to 4.2.3.
>> I upgraded the first node with yum upgrade, and it seems to be working fine
>> now. But since the upgrade, the gluster information is displayed
>> incorrectly on the admin panel: the volume is yellow, and there are red
>> bricks from that node.
>> I've checked on the console, and I don't think my gluster is degraded:
>>
>> [root@n1 ~]# gluster volume list
>> volume1
>> volume2
>> [root@n1 ~]# gluster volume info
>>
>> Volume Name: volume1
>> Type: Distributed-Replicate
>> Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 3 x 3 = 9
>> Transport-type: tcp
>> Bricks:
>> Brick1: 10.104.0.1:/gluster/brick/brick1
>> Brick2: 10.104.0.2:/gluster/brick/brick1
>> Brick3: 10.104.0.3:/gluster/brick/brick1
>> Brick4: 10.104.0.1:/gluster/brick/brick2
>> Brick5: 10.104.0.2:/gluster/brick/brick2
>> Brick6: 10.104.0.3:/gluster/brick/brick2
>> Brick7: 10.104.0.1:/gluster/brick/brick3
>> Brick8: 10.104.0.2:/gluster/brick/brick3
>> Brick9: 10.104.0.3:/gluster/brick/brick3
>> Options Reconfigured:
>> transport.address-family: inet
>> performance.readdir-ahead: on
>> nfs.disable: on
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> performance.low-prio-threads: 32
>> network.remote-dio: enable
>> cluster.eager-lock: enable
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> cluster.data-self-heal-algorithm: full
>> cluster.locking-scheme: granular
>> cluster.shd-max-threads: 8
>> cluster.shd-wait-qlength: 10000
>> features.shard: on
>> user.cifs: off
>> server.allow-insecure: on
>>
>> Volume Name: volume2
>> Type: Distributed-Replicate
>> Volume ID: 68cfb061-1320-4042-abcd-9228da23c0c8
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 3 x 3 = 9
>> Transport-type: tcp
>> Bricks:
>> Brick1: 10.104.0.1:/gluster2/brick/brick1
>> Brick2: 10.104.0.2:/gluster2/brick/brick1
>> Brick3: 10.104.0.3:/gluster2/brick/brick1
>> Brick4: 10.104.0.1:/gluster2/brick/brick2
>> Brick5: 10.104.0.2:/gluster2/brick/brick2
>> Brick6: 10.104.0.3:/gluster2/brick/brick2
>> Brick7: 10.104.0.1:/gluster2/brick/brick3
>> Brick8: 10.104.0.2:/gluster2/brick/brick3
>> Brick9: 10.104.0.3:/gluster2/brick/brick3
>> Options Reconfigured:
>> nfs.disable: on
>> performance.readdir-ahead: on
>> transport.address-family: inet
>> cluster.quorum-type: auto
>> network.ping-timeout: 10
>> auth.allow: *
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> performance.low-prio-threads: 32
>> network.remote-dio: enable
>> cluster.eager-lock: enable
>> cluster.server-quorum-type: server
>> cluster.data-self-heal-algorithm: full
>> cluster.locking-scheme: granular
>> cluster.shd-max-threads: 8
>> cluster.shd-wait-qlength: 10000
>> features.shard: on
>> user.cifs: off
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> server.allow-insecure: on
>> [root@n1 ~]# gluster volume status
>> Status of volume: volume1
>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick 10.104.0.1:/gluster/brick/brick1      49152     0          Y       3464
>> Brick 10.104.0.2:/gluster/brick/brick1      49152     0          Y       68937
>> Brick 10.104.0.3:/gluster/brick/brick1      49161     0          Y       94506
>> Brick 10.104.0.1:/gluster/brick/brick2      49153     0          Y       3457
>> Brick 10.104.0.2:/gluster/brick/brick2      49153     0          Y       68943
>> Brick 10.104.0.3:/gluster/brick/brick2      49162     0          Y       94514
>> Brick 10.104.0.1:/gluster/brick/brick3      49154     0          Y       3465
>> Brick 10.104.0.2:/gluster/brick/brick3      49154     0          Y       68949
>> Brick 10.104.0.3:/gluster/brick/brick3      49163     0          Y       94520
>> Self-heal Daemon on localhost               N/A       N/A        Y       54356
>> Self-heal Daemon on 10.104.0.2              N/A       N/A        Y       962
>> Self-heal Daemon on 10.104.0.3              N/A       N/A        Y       108977
>> Self-heal Daemon on 10.104.0.4              N/A       N/A        Y       61603
>>
>> Task Status of Volume volume1
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> Status of volume: volume2
>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick 10.104.0.1:/gluster2/brick/brick1     49155     0          Y       3852
>> Brick 10.104.0.2:/gluster2/brick/brick1     49158     0          Y       68955
>> Brick 10.104.0.3:/gluster2/brick/brick1     49164     0          Y       94527
>> Brick 10.104.0.1:/gluster2/brick/brick2     49156     0          Y       3851
>> Brick 10.104.0.2:/gluster2/brick/brick2     49159     0          Y       68961
>> Brick 10.104.0.3:/gluster2/brick/brick2     49165     0          Y       94533
>> Brick 10.104.0.1:/gluster2/brick/brick3     49157     0          Y       3883
>> Brick 10.104.0.2:/gluster2/brick/brick3     49160     0          Y       68968
>> Brick 10.104.0.3:/gluster2/brick/brick3     49166     0          Y       94541
>> Self-heal Daemon on localhost               N/A       N/A        Y       54356
>> Self-heal Daemon on 10.104.0.2              N/A       N/A        Y       962
>> Self-heal Daemon on 10.104.0.3              N/A       N/A        Y       108977
>> Self-heal Daemon on 10.104.0.4              N/A       N/A        Y       61603
>>
>> Task Status of Volume volume2
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
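>>
>> For completeness, per-volume heal state can also be checked with, e.g.:
>>
>> gluster volume heal volume1 info
>> gluster volume heal volume2 info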
>>
>> I think oVirt can't read valid information about gluster.
>> I can't continue upgrading the other hosts while this problem exists.
>>
>> Please help me :)
>>
>>
>> Thanks
>>
>> Regards,
>>
>> Tibor
>>
>>