Hi Alastair,
This could be a mismatch between the hostname identified in ovirt and the one in gluster.
You could check for any exceptions from GlusterSyncJob in engine.log.
Also, what version of ovirt are you using, and what is the compatibility
version of your cluster?
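A hypothetical way to run that engine.log check from the engine host (the log path assumes a default oVirt engine install; the sample log line below is fabricated purely to demonstrate the grep):

```shell
# On the engine host (not executed here), something like:
#   grep -icE 'GlusterSyncJob.*(error|exception)' /var/log/ovirt-engine/engine.log
# would count log lines where GlusterSyncJob reported an error or exception.
# Demonstrated against a fabricated sample line:
printf '%s\n' \
  '2014-05-28 10:19:02 ERROR [GlusterSyncJob] Error while refreshing brick statuses' \
  > /tmp/engine.log.sample
grep -icE 'GlusterSyncJob.*(error|exception)' /tmp/engine.log.sample
```

A count of zero would mean the sync job is not logging failures, which is itself useful to know.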
On 05/28/2014 12:40 AM, Alastair Neil wrote:
ovirt version is 3.4. I did have a slightly older version of vdsm on
gluster0 but I have updated it and the issue persists. The compatibility
version on the storage cluster is 3.3.
I checked the logs for GlusterSyncJob notifications and there are none.
On 28 May 2014 10:19, Sahina Bose wrote:
I just noticed this in the console and I don't know if it is relevant.
When I look at the General tab on the hosts under GlusterFS Version it
shows N/A.
On 28 May 2014 11:03, Alastair Neil ajneil.t...@gmail.com wrote:
On 05/28/2014 08:36 PM, Alastair Neil wrote:
I just noticed this in the console and I don't know if it is relevant.
When I look at the General tab on the hosts under GlusterFS
Version it shows N/A.
That's not related. The GlusterFS version in the UI is populated from the
getVdsCaps output from vdsm.
Hi, thanks for the reply. Here is an extract from a grep I ran on the vdsm
log for the volume name vm-store. It seems to indicate the bricks
are ONLINE.
I am uncertain how to extract meaningful information from the engine.log;
can you provide some guidance?
Thanks,
Alastair
I just did a rolling upgrade of my gluster storage cluster to the latest
3.5 bits. This all seems to have gone smoothly and all the volumes are
online. All volumes are replicated 1x2.
The ovirt console now insists that two of my volumes, including the
vm-store volume with my vm's, are down, even though the vm's are running happily.
Could you share the engine.log and vdsm.log?
This can mostly happen due to one of the following reasons:
- gluster volume status vm-store is not consistently returning the
right output
- ovirt-engine is not able to identify the bricks properly
Anyway, engine.log will give better clarity.
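For what it's worth, the first point can be checked by hand from any storage node. A sketch (the gluster commands are shown as comments since they need a live cluster; the two-brick sample output below is fabricated to resemble a 1x2 replica like Alastair's):

```shell
# On a storage node (not executed here):
#   gluster volume status vm-store          # human-readable brick status
#   gluster volume status vm-store --xml    # machine-readable form
# Running the plain form a few times in a row should consistently show
# every brick with "Y" in the Online column. A quick count of online
# bricks, demonstrated against a fabricated two-brick sample:
cat > /tmp/vm-store.status <<'EOF'
Brick gluster0:/export/brick/vm-store    49152  Y  1234
Brick gluster1:/export/brick/vm-store    49152  Y  5678
EOF
awk '/^Brick/ && $4 == "Y"' /tmp/vm-store.status | wc -l
```

If that count fluctuates between runs on a healthy volume, that would point to the inconsistent-status cause above rather than an engine-side problem.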
On 05/22/2014 02:24 AM, Alastair Neil wrote: