Dear oVirt Users,
I've followed the self-hosted-engine upgrade documentation and upgraded my
4.1 system to 4.2.3.
I upgraded the first node with yum upgrade, and it now seems to be working fine. But since
the upgrade, the Gluster information is displayed incorrectly in the admin
panel: the volume is yellow, and there are red bricks from that node.
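
For reference, roughly the steps I used on the node (I assume the standard
procedure here; the exact commands may have differed slightly):

# after moving the host to Maintenance in the admin panel
[root@n1 ~]# yum upgrade
[root@n1 ~]# reboot
# then activated the host again from the panel
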
I've checked on the console, and I don't think my Gluster is degraded:

[root@n1 ~]# gluster volume list
volume1 
volume2 

[root@n1 ~]# gluster volume info
Volume Name: volume1 
Type: Distributed-Replicate 
Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 3 x 3 = 9 
Transport-type: tcp 
Bricks: 
Brick1: 10.104.0.1:/gluster/brick/brick1 
Brick2: 10.104.0.2:/gluster/brick/brick1 
Brick3: 10.104.0.3:/gluster/brick/brick1 
Brick4: 10.104.0.1:/gluster/brick/brick2 
Brick5: 10.104.0.2:/gluster/brick/brick2 
Brick6: 10.104.0.3:/gluster/brick/brick2 
Brick7: 10.104.0.1:/gluster/brick/brick3 
Brick8: 10.104.0.2:/gluster/brick/brick3 
Brick9: 10.104.0.3:/gluster/brick/brick3 
Options Reconfigured: 
transport.address-family: inet 
performance.readdir-ahead: on 
nfs.disable: on 
storage.owner-uid: 36 
storage.owner-gid: 36 
performance.quick-read: off 
performance.read-ahead: off 
performance.io-cache: off 
performance.stat-prefetch: off 
performance.low-prio-threads: 32 
network.remote-dio: enable 
cluster.eager-lock: enable 
cluster.quorum-type: auto 
cluster.server-quorum-type: server 
cluster.data-self-heal-algorithm: full 
cluster.locking-scheme: granular 
cluster.shd-max-threads: 8 
cluster.shd-wait-qlength: 10000 
features.shard: on 
user.cifs: off 
server.allow-insecure: on 

Volume Name: volume2
Type: Distributed-Replicate 
Volume ID: 68cfb061-1320-4042-abcd-9228da23c0c8 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 3 x 3 = 9 
Transport-type: tcp 
Bricks: 
Brick1: 10.104.0.1:/gluster2/brick/brick1 
Brick2: 10.104.0.2:/gluster2/brick/brick1 
Brick3: 10.104.0.3:/gluster2/brick/brick1 
Brick4: 10.104.0.1:/gluster2/brick/brick2 
Brick5: 10.104.0.2:/gluster2/brick/brick2 
Brick6: 10.104.0.3:/gluster2/brick/brick2 
Brick7: 10.104.0.1:/gluster2/brick/brick3 
Brick8: 10.104.0.2:/gluster2/brick/brick3 
Brick9: 10.104.0.3:/gluster2/brick/brick3 
Options Reconfigured: 
nfs.disable: on 
performance.readdir-ahead: on 
transport.address-family: inet 
cluster.quorum-type: auto 
network.ping-timeout: 10 
auth.allow: * 
performance.quick-read: off 
performance.read-ahead: off 
performance.io-cache: off 
performance.stat-prefetch: off 
performance.low-prio-threads: 32 
network.remote-dio: enable 
cluster.eager-lock: enable 
cluster.server-quorum-type: server 
cluster.data-self-heal-algorithm: full 
cluster.locking-scheme: granular 
cluster.shd-max-threads: 8 
cluster.shd-wait-qlength: 10000 
features.shard: on 
user.cifs: off 
storage.owner-uid: 36 
storage.owner-gid: 36 
server.allow-insecure: on 

[root@n1 ~]# gluster volume status
Status of volume: volume1 
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.104.0.1:/gluster/brick/brick1      49152     0          Y       3464
Brick 10.104.0.2:/gluster/brick/brick1      49152     0          Y       68937
Brick 10.104.0.3:/gluster/brick/brick1      49161     0          Y       94506
Brick 10.104.0.1:/gluster/brick/brick2      49153     0          Y       3457
Brick 10.104.0.2:/gluster/brick/brick2      49153     0          Y       68943
Brick 10.104.0.3:/gluster/brick/brick2      49162     0          Y       94514
Brick 10.104.0.1:/gluster/brick/brick3      49154     0          Y       3465
Brick 10.104.0.2:/gluster/brick/brick3      49154     0          Y       68949
Brick 10.104.0.3:/gluster/brick/brick3      49163     0          Y       94520
Self-heal Daemon on localhost               N/A       N/A        Y       54356
Self-heal Daemon on 10.104.0.2              N/A       N/A        Y       962
Self-heal Daemon on 10.104.0.3              N/A       N/A        Y       108977
Self-heal Daemon on 10.104.0.4              N/A       N/A        Y       61603

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: volume2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.104.0.1:/gluster2/brick/brick1     49155     0          Y       3852
Brick 10.104.0.2:/gluster2/brick/brick1     49158     0          Y       68955
Brick 10.104.0.3:/gluster2/brick/brick1     49164     0          Y       94527
Brick 10.104.0.1:/gluster2/brick/brick2     49156     0          Y       3851
Brick 10.104.0.2:/gluster2/brick/brick2     49159     0          Y       68961
Brick 10.104.0.3:/gluster2/brick/brick2     49165     0          Y       94533
Brick 10.104.0.1:/gluster2/brick/brick3     49157     0          Y       3883
Brick 10.104.0.2:/gluster2/brick/brick3     49160     0          Y       68968
Brick 10.104.0.3:/gluster2/brick/brick3     49166     0          Y       94541
Self-heal Daemon on localhost               N/A       N/A        Y       54356
Self-heal Daemon on 10.104.0.2              N/A       N/A        Y       962
Self-heal Daemon on 10.104.0.3              N/A       N/A        Y       108977
Self-heal Daemon on 10.104.0.4              N/A       N/A        Y       61603

Task Status of Volume volume2
------------------------------------------------------------------------------ 
There are no active volume tasks 
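
If more output is needed, I can also send the results of these checks
(standard Gluster CLI commands):

[root@n1 ~]# gluster peer status
[root@n1 ~]# gluster volume heal volume1 info
[root@n1 ~]# gluster volume heal volume2 info
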
I think oVirt can't read valid information about Gluster.
I can't continue upgrading the other hosts while this problem exists.
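
In case the problem is on the monitoring side, these are the logs I can
check and attach next (paths are the default oVirt locations; "engine" is
just a placeholder for my hosted-engine VM):

# on the upgraded node
[root@n1 ~]# grep -i gluster /var/log/vdsm/vdsm.log | tail -n 50
# on the hosted-engine VM
[root@engine ~]# grep -i gluster /var/log/ovirt-engine/engine.log | tail -n 50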

Please help me:) 


Thanks 

Regards, 

Tibor