#1 How is it hung? Please provide more details.
#2 The oVirt engine can be moved by creating a backup. See chapter 12 of the
Admin Guide:
https://www.ovirt.org/documentation/admin-guide/chap-Backups_and_Migration.html
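For reference, a minimal backup/restore sketch with the engine-backup tool (the
file names are placeholders; check the guide for the restore options that match
your setup):

    # On the old engine machine: create a full backup
    engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log

    # On the new engine machine: restore from that backup
    # (--provision-db assumes a freshly installed engine; adjust to your environment)
    engine-backup --mode=restore --file=engine-backup.tar.gz --log=engine-restore.log \
                  --provision-db --restore-permissions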
1. How should ovirt-engine be repaired if it hangs?
2. How do I export the ovirt-engine configuration file?
Hello guys,
I am unable to put a host from a two-node cluster into maintenance mode
in order to remove it from the cluster afterwards.
This is what I see in engine.log:
2019-09-27 16:20:58,364 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.threa
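A note for anyone hitting the same thing: the maintenance action can also be
triggered through the REST API, which sometimes gives a clearer error than the
UI; a minimal sketch with curl (the engine hostname, credentials, and HOST_ID
are placeholders):

    # Ask the engine to move the host into maintenance (deactivate)
    curl -k -u 'admin@internal:PASSWORD' \
         -H 'Content-Type: application/xml' \
         -d '<action/>' \
         https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/deactivate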
Yes, I can take the downtime. Actually, I don't have any choice at the moment
because it is a single node setup. :) I think this is a distributed volume
from the research I have performed. I posted the lvchange command in my last
post; this was the result. I ran the command lvchange -an
/de
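For context, lvchange -an deactivates a logical volume. A sketch of the usual
check-and-deactivate sequence (the volume group and LV names are placeholders):

    # List logical volumes and their activation state
    lvs -o lv_name,vg_name,lv_attr

    # Deactivate a specific logical volume (fails if it is still in use)
    lvchange -an /dev/VG_NAME/LV_NAME

    # Reactivate it later
    lvchange -ay /dev/VG_NAME/LV_NAME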
Can you suffer downtime?
You can try something like this (I'm improvising; a sketch of the sequence
follows below):
Set to global maintenance (either via UI or hosted-engine --set-maintenance
--mode=global)
Stop the engine.
Stop ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd sanlock glusterd.
Stop all gluster processes via the scri
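A minimal sketch of that sequence on a hosted-engine node (the service names
assume a standard hosted-engine + Gluster deployment; adjust them to yours):

    # Put the cluster into global maintenance so the HA agents leave the engine alone
    hosted-engine --set-maintenance --mode=global

    # Shut down the engine VM cleanly
    hosted-engine --vm-shutdown

    # Stop the HA and virtualization services, then Gluster
    systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd sanlock glusterd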
Hello,
We use engine-iso-uploader with ssh (and not NFS). It was working but
now we have an error.
We tried two commands:
engine-iso-uploader -v --iso-domain=lp-ducharmoy-iso upload
eole-2.6.2.1-alternate-amd64.iso
and
engine-iso-uploader --iso-domain=lp-ducharmoy-iso list
and the error
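A note for anyone debugging this: engine-iso-uploader can be run with more
verbosity and pointed at an explicit SSH user; a sketch (the --ssh-user flag is
my assumption, please verify against `man engine-iso-uploader` on your version):

    # List the ISO domain contents with verbose output to see where SSH fails
    engine-iso-uploader -v --iso-domain=lp-ducharmoy-iso list

    # Upload as an explicit SSH user (assumed flag; check the man page)
    engine-iso-uploader -v --ssh-user=root --iso-domain=lp-ducharmoy-iso \
        upload eole-2.6.2.1-alternate-amd64.iso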
Hi All,
I'm getting the below error when I try to integrate OpenStack Glance (image
service) into the oVirt 4.3.5 Manager. Please help me troubleshoot.
Error: Failed with error PROVIDER_FAILURE and code 5050
Regards
Lakshmanan J
___
Users mailing list -- users@ovirt.or
Dear oVirt users,
Sorry for having bothered you; it appeared the transaction in the database
somehow wasn't committed correctly.
After making sure it was committed, the mountpoints updated.
Best,
Olaf
Thank you for the reply. Please pardon my ignorance, I'm not very good with
GlusterFS. I don't think this is a replicated volume (though I could be wrong).
I built a single-node hyperconverged hypervisor. I was reviewing my gdeploy
file from when I originally built the system. I have the fol
If it's a replicated volume, then you can safely rebuild your bricks; don't
even try to repair them. There is no guarantee that the issue will not reoccur.
A sketch of a brick rebuild follows below.
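A minimal sketch of rebuilding a brick with replace-brick (the volume name,
host, and brick paths are placeholders; on a replicated volume the data heals
back from the remaining replicas):

    # Check the volume type and brick layout first
    gluster volume info VOLNAME

    # Swap the damaged brick for a freshly prepared one on the same host
    gluster volume replace-brick VOLNAME host1:/old/brick host1:/new/brick commit force

    # Watch self-heal catch the new brick up
    gluster volume heal VOLNAME info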
Best Regards,
Strahil Nikolov

On Sep 29, 2019 00:22, jeremy_tourvi...@hotmail.com wrote:
>
> I see evidence that appears to be a problem with