Hello,
I don't know if it's normal, but on all the nodes of the cluster (except the
one that runs the engine) I see log entries like:
2022-09-12 15:41:54,563+ INFO (jsonrpc/0) [api.virt] START getStats()
from=::1,57578, vmId=8486ed73-df34-4c58-bfdc-7025dec63b7f (api:48)
2022-09-12
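The from=::1 field points at a local JSON-RPC client polling VM stats on the host itself. If the sheer volume of these INFO lines is the problem, here is a minimal sketch for silencing them, assuming vdsm reads the standard Python fileConfig format from /etc/vdsm/logger.conf (the logger name api.virt is taken from the bracketed field in the entry above; the exact section layout may differ between vdsm versions):

  # /etc/vdsm/logger.conf (assumed layout; merge with the existing sections)
  [loggers]
  keys=root,api_virt          # append api_virt to the keys already listed

  [logger_api_virt]
  level=WARNING               # drops the per-poll INFO START/FINISH lines
  qualname=api.virt
  handlers=
  propagate=1

After editing, restart the daemon (systemctl restart vdsmd) for the change to take effect.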
Hello. I did a full backup using Veeam, but I recorded many errors in the
gluster log.
This is the log (https://cloud.ssis.sm/index.php/s/KRimf5MLXK3Ds3d). The log is
from the same node where the veeam-proxy and the backed-up VMs reside.
Both are running in the gv1 storage domain.
See that hours
Hello all,
I tried to set up Gluster volumes in Cockpit using the wizard. Based on
Red Hat's recommendations I wanted to put the volume for the oVirt
Engine on a thick-provisioned logical volume [1], and therefore removed
the thinpoolname line and the corresponding configuration from the yml file
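For illustration, here is a minimal sketch of the kind of edit that entails, assuming the wizard-generated variable file uses the gluster.infra role variables; every device name, LV name, and size below is illustrative, not taken from the original deployment:

  # Before: the engine LV is carved from a thinpool
  gluster_infra_thinpools:
    - { vgname: gluster_vg_sdb, thinpoolname: gluster_thinpool_sdb, poolmetadatasize: 3G }
  gluster_infra_lv_logicalvols:
    - { vgname: gluster_vg_sdb, thinpool: gluster_thinpool_sdb, lvname: gluster_lv_engine, lvsize: 100G }

  # After: a thick-provisioned LV for the engine volume; no thinpool entries remain
  gluster_infra_thick_lvs:
    - { vgname: gluster_vg_sdb, lvname: gluster_lv_engine, size: 100G }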