Hi Vicinius,
Thank you for your kind analysis.
Responses to your questions follow:
- The setting was sync=standard in the original samples.
- I have two 1 TB NVMe drives striped for the SLOG. You can see the
utilization on NVD1.
- The pool is 65 TB in size, with 4 TB (6%) used today.
On 7/14/22 14:30, Jiří Sléžka wrote:
On 7/14/22 00:34, Strahil Nikolov wrote:
Well... not yet.
Check if the engine detects the volumes and verify again that all
glustereventsd work.
I would even consider restarting the engine, just to be on the safe side.
engine restarted (I also yum updated it before), glustereventsd is
Hi David, a word of advice: you should not set sync=disabled on TrueNAS. Doing
that treats every write as async, and if you have a power loss you'll lose
data.
Some conservative admins argue you should do the opposite:
sync=always, which bogs down performance, but
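For reference, the sync policy being discussed is a per-dataset ZFS property that can be inspected and changed with the standard zfs tools; a sketch (the pool/dataset name "tank/vmstore" is a placeholder):

```shell
# Show the current sync policy for a dataset ("tank/vmstore" is a placeholder)
zfs get sync tank/vmstore

# standard = honor sync requests from applications (the default)
# always   = treat every write as synchronous (safest, slowest; this is
#            where a fast SLOG device helps)
# disabled = treat every write as async (fast, but data loss on power failure)
zfs set sync=always tank/vmstore
```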
Go to the UI, select the volume, press 'Start' and mark the checkbox for
'Force' start.
At least it should update the engine that everything is running. Have you
checked if the checkmark for the Gluster service is available if you set the
Host into maintenance?
Best Regards,
Strahil
Once upon a time, David Johnson said:
> Since our cluster is colocated in the power company's data center, if we
> see power loss we have much bigger problems.
Heh, you'd think that... but experience has taught me otherwise.
--
Chris Adams
On 7/14/22 16:37, Moritz Baumann wrote:
I had a similar issue.
for me, taking the password from
/etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-grafana-database.conf
(GRAFANA_DB_PASSWORD)
and setting that password in Postgres for the
user ovirt_engine_history_grafana did the trick.
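A sketch of that fix as shell commands, run on the engine host (assumes the stock file path and database user name from above; THE_PASSWORD is a placeholder for the value you read out):

```shell
# Read the password the DWH setup recorded for Grafana
grep GRAFANA_DB_PASSWORD \
  /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-grafana-database.conf

# Set the same password on the Postgres role Grafana connects as
# (replace THE_PASSWORD with the value printed above)
sudo -u postgres psql -c \
  "ALTER USER ovirt_engine_history_grafana WITH PASSWORD 'THE_PASSWORD';"
```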
forgot to mention
oVirt is 4.5.1 on CentOS Stream 8; nodes are ovirt-node-4.5.1.
It's an old installation, originating from 3.6 initially.
On 7/14/22 08:18, Moritz Baumann wrote:
Hi
I have removed the ISO domain of an existing data center, and now I am
unable to create a new ISO domain.
Hi,
I have oVirt engine 4.4.7 running on dedicated PC (not hosted engine).
After several unsuccessful attempts to upgrade 4.4.7 to 4.4.10, I decided to
install a clean 4.4.10 and migrate the data.
On old engine
engine-backup --scope=all --mode=backup
On new engine
engine-backup --mode=restore
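For a clean-install migration, engine-backup also takes a backup file and, on restore, flags to provision the databases; a sketch of the usual sequence (file names are placeholders; check `engine-backup --help` on your version for the exact flags):

```shell
# On the old engine: full backup to a file
engine-backup --scope=all --mode=backup \
  --file=engine-backup.tar.gz --log=backup.log

# On the freshly installed engine, before running engine-setup:
# restore and let engine-backup provision the databases
engine-backup --mode=restore \
  --file=engine-backup.tar.gz --log=restore.log \
  --provision-all-databases --restore-permissions

# Then complete the configuration
engine-setup
```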
Is it safe to restart libvirtd on hosts with running workloads without
entering Maintenance mode?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt
On 7/14/22 16:28, Andrei Verovski
On 7/14/22 21:21, Strahil Nikolov wrote:
Go to the UI, select the volume, press 'Start' and mark the checkbox for
'Force' start.
well, it worked :-) Now all bricks are in the UP state. In fact, from the
command-line point of view, all volumes were active and all bricks up the
whole time.
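The command-line view mentioned above can be checked with the standard gluster tools, and the force start the UI performs has a direct CLI equivalent (VOLNAME is a placeholder):

```shell
# Show volume and brick status as Gluster itself sees it
gluster volume status

# Equivalent of the UI's 'Start' with the 'Force' checkbox:
# restarts any brick processes that are down without touching data
gluster volume start VOLNAME force
```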
Hi
I have removed the ISO domain of an existing data center, and now I am
unable to create a new ISO domain.
/var/log/ovirt-engine/engine.log shows:
2022-07-14 08:04:40,684+02 INFO
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand]
(default task-34)
"KSNull Zero" writes:
> Running oVirt 4.4.5
> VM cannot migrate between hosts.
>
> vdsm.log contains the following error:
> libvirt.libvirtError: operation failed: Failed to connect to remote
> libvirt URI qemu+tls://ovhost01.local/system: authentication failed:
> Failed to verify peer's
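A "failed to verify peer's certificate" error usually points at the host's libvirt TLS certificates; a sketch of a quick check from the source host (host name taken from the error above; 16514 is libvirt's default TLS port):

```shell
# Try the same TLS connection vdsm makes; virsh prints the failure reason
virsh -c qemu+tls://ovhost01.local/system list

# Inspect the certificate libvirtd presents on its TLS port
openssl s_client -connect ovhost01.local:16514 </dev/null 2>/dev/null | \
  openssl x509 -noout -subject -dates
```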