On 10/12/2015 06:43 PM, Nico wrote:

On 2015-10-12 14:04, Nir Soffer wrote:


Yes, the engine will let you use such a volume in 3.5 - this is a bug. In 3.6 you will
not be able to use such a setup.

Replica 2 fails in a very bad way when one brick is down; the application may get
stale data, and this breaks sanlock. You will get stuck with an SPM that cannot be
stopped, and other fun stuff.

You don't want to go in this direction, and we will not be able to support that.

Here are the last entries of vdsm.log:

We need the whole file.

I suggest you file an oVirt bug and attach the full vdsm log file showing the
timeframe of the error - probably from the time you created the glusterfs domain.

Nir

Please find the full logs there:

https://94.23.2.63/log_vdsm/vdsm.log

https://94.23.2.63/log_vdsm/

https://94.23.2.63/log_engine/


The engine log entries looping with "Volume contains apparently corrupt bricks" appear when the engine tries to get information about the volumes from the gluster CLI and update its database. These errors do not affect the functioning of the storage domain or the running virtual machines, but they do affect the monitoring/management of the gluster volume from oVirt.
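If you want to see what the engine is getting from gluster, you can run a similar query manually on one of the gluster nodes (this is just the standard gluster CLI; it may not be the exact call the engine makes):

  gluster volume info --xml

If that returns sane output for the volume but the engine keeps logging the error, the uuid mismatch described below is the more likely cause.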

Now, to identify the cause of the error: the logs indicate that the gluster server uuid has either not been updated or is different in the engine. It could be one of these scenarios:

1. Did you create the cluster with only the virt service enabled and later enable the gluster service? In this case, the gluster server uuid may not have been updated. You will need to put the host into maintenance and then activate it to resolve this.

2. Did you re-install the gluster server nodes after adding them to oVirt? If this is the case, we need to investigate further how there's a mismatch - you can compare the uuids as sketched below.
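To check for a mismatch, compare the uuid glusterd reports on each node with what the engine has recorded. On the gluster node (these are standard gluster commands):

  cat /var/lib/glusterd/glusterd.info    # shows UUID=<gluster server uuid>
  gluster peer status                    # shows the uuids of the other peers

On the engine side, I believe the mapping is kept in the gluster_server table of the engine database, so something like the following should show it (table/column names from memory, please adjust if they differ in your version):

  su - postgres -c "psql engine -c 'select server_id, gluster_server_uuid from gluster_server;'"

If the uuid in glusterd.info does not match the one the engine has for that host, that confirms the mismatch.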




_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
