Denis, thanks for your answer.
[root@node-gluster203 ~]# gluster volume heal engine info
Brick node-gluster205:/opt/gluster/engine
Status: Connected
Number of entries: 0
Brick node-gluster203:/opt/gluster/engine
Status: Connected
Number of entries: 0
Brick node-gluster201:/opt/gluster/engine
Status: Connected
Number of entries: 0
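As a quick sanity check on the heal output above, the per-brick "Number of entries" counts can be summed mechanically. This is just my own sketch: the output is pasted in as a here-doc so it runs anywhere; on a live host you would pipe `gluster volume heal engine info` straight into the awk.

```shell
# Sum the per-brick "Number of entries" counts from the captured
# 'gluster volume heal engine info' output above.
heal_output=$(cat <<'EOF'
Brick node-gluster205:/opt/gluster/engine
Status: Connected
Number of entries: 0
Brick node-gluster203:/opt/gluster/engine
Status: Connected
Number of entries: 0
Brick node-gluster201:/opt/gluster/engine
Status: Connected
Number of entries: 0
EOF
)
total=$(printf '%s\n' "$heal_output" | awk -F': ' '/^Number of entries/ {sum += $2} END {print sum+0}')
echo "total pending heal entries: $total"
```

A total of 0 across all three bricks is what a healthy replica 3 arbiter 1 volume should show.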
The gluster volume looks OK. Adding additional info about the volume options.
[root@node-gluster203 ~]# gluster volume info engine
Volume Name: engine
Type: Replicate
Volume ID: aece5318-4126-41f9-977c-9b39300bd0c8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Brick1: node-gluster205:/opt/gluster/engine
Brick2: node-gluster203:/opt/gluster/engine
Brick3: node-gluster201:/opt/gluster/engine (arbiter)
Options Reconfigured:
auth.allow: * 10
server.allow-insecure: on
nfs.disable: on
transport.address-family: inet
storage.owner-uid: 36
storage.owner-gid: 36
performance.quick-read: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
cluster.enable-shared-storage: enable
11.01.2018, 12:34, "Denis Chaplygin" <>:
On Thu, Jan 11, 2018 at 9:56 AM, Николаев Алексей <> wrote:
We have a self-hosted engine test infra with gluster storage replica 3 arbiter 1 (oVirt 4.1).
Why don't you try 4.2? :-) There are a lot of good changes in that area.
RuntimeError: Volume does not exist: (u'13b5a4d0-dd26-491c-b5c0-5628b56bc3a5',)
I assume you may have an issue with your gluster volume. Could you please share output of the command 'gluster volume heal engine info'?
Thanks in advance.
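One extra check of my own on that RuntimeError: whether the uuid in the traceback matches any gluster volume id at all. The lookup logic below is shown against a captured fragment of the `gluster volume info` output above, so the sketch is self-contained; on the host you could feed it from the live command instead.

```shell
# Look up a uuid among gluster volume ids; the volume-info text is a
# captured fragment from the output above, so this runs without gluster.
wanted='13b5a4d0-dd26-491c-b5c0-5628b56bc3a5'   # uuid from the traceback
volume_info=$(cat <<'EOF'
Volume Name: engine
Volume ID: aece5318-4126-41f9-977c-9b39300bd0c8
EOF
)
match=$(printf '%s\n' "$volume_info" | awk -v id="$wanted" '
    /^Volume Name:/ {name=$3}
    /^Volume ID:/   {if ($3 == id) print name}')
echo "gluster volume with id $wanted: ${match:-none}"
```

Here the result is "none": the uuid in the traceback does not match the engine volume's id, which (my reading, not confirmed) would point toward an oVirt/vdsm-side image or volume id rather than the gluster volume itself.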
Users mailing list
