[ovirt-users] Re: change mount point for hosted_storage

2022-03-25 Thread tpike
I have resolved this issue. I'll document what I did here in case someone else 
finds this through Google or something. I basically found this note on this 
forum (I believe) which outlined the steps:
The following procedure should provide the solution:

1. set the storage domain to maintenance (via webadmin UI, for example)
2. copy/sync the contents of the storage domain including the metadata, to 
ensure that data in both locations (the old and the new mount points) is the 
same.
3. run a modification query on the oVirt engine database (replace the values 
'yournewmountpoint' and 'therelevantconnectionid' with the correct ones; see 
also the sketch after this list):
UPDATE storage_server_connections 
SET connection='yournewmountpoint' 
WHERE id='therelevantconnectionid';
4. There is a bug related to storage domain caching in VDSM (on the host), so 
it needs to be worked around by restarting VDSM (the service name is 'vdsmd')
5. activate storage domain (via webadmin UI, for example).
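
In case it helps, here is a rough sketch of steps 2-4 as shell commands. The 
paths are placeholders, and the psql calls assume the default 'engine' database 
accessed as the postgres user on the engine machine (how you reach the database 
can vary by oVirt version):

# step 2: sync the domain contents between the old and new locations (example paths)
rsync -avHAX /mnt/old_export/ /mnt/new_export/

# step 3: find the relevant connection row, then update it
sudo -u postgres psql engine -c "SELECT id, connection FROM storage_server_connections;"
sudo -u postgres psql engine -c "UPDATE storage_server_connections SET connection='yournewmountpoint' WHERE id='therelevantconnectionid';"

# step 4: restart VDSM on the host(s) so the cached connection is dropped
systemctl restart vdsmd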

Since this was the storage domain that holds hosted_storage, I had to take some 
extra steps for safety:

 - stop all VMs
 - shut down all hosts except for the one running the hosted-engine
 - enter global maintenance (the relevant commands are sketched after this list)
 - log into the hosted-engine VM
 - systemctl stop ovirt-engine
 - use psql as above to edit the connection information for the storage domain 
I was interested in
 - reboot the host running the hosted-engine
 - exit global maintenance
 - reboot all the other hosts and bring them back into the cluster
 - restart all VMs
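
The corresponding commands, roughly (this is a sketch of what I did, not an 
exact transcript; the UPDATE statement is the one shown in the procedure above):

# on one hosted-engine host: enter global maintenance
hosted-engine --set-maintenance --mode=global

# on the hosted-engine VM:
systemctl stop ovirt-engine
#   ...run the UPDATE query from the procedure above via psql...

# reboot the host running the hosted-engine, then once it is back up:
hosted-engine --set-maintenance --mode=none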

It certainly is possible that this global shutdown wasn't totally necessary, 
but this cluster isn't in production yet and I thought this was the safest 
course of action.

Tod Pike


[ovirt-users] change mount point for hosted_storage

2022-03-23 Thread tpike
Hello:
  I have an oVirt 4.4.10 cluster that is working fine. All storage is on NFS. I 
would like to change the mount point for the hosted_storage domain from 
localhost:/... to /... This will be the same physical volume; all I want to do 
is not run my NFS mounts through my local hosts, but instead mount directly 
from the NFS server.

  I have used the "hosted-engine --set-shared-config storage" command to change 
the mount point for the storage. Looking at the hosted-engine.conf file 
confirms that the new path is set correctly. When I look at the storage inside 
the hosted-engine, however, it still shows the old value. How can I get the 
cluster to use the new path instead of the old path? I changed it using both 
the he_local and he_shared keys. Thanks!
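
For reference, the commands I used look roughly like this (the NFS path below 
is a placeholder for my real export):

hosted-engine --set-shared-config storage nfsserver.example.com:/export/hosted_storage --type=he_local
hosted-engine --set-shared-config storage nfsserver.example.com:/export/hosted_storage --type=he_shared
hosted-engine --get-shared-config storage --type=he_shared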

Tod Pike


[ovirt-users] Re: snapshot create fails - files not owned by vdsm:kvm

2021-12-15 Thread tpike
Liran:
  We are using libvirt 4.5.0-23, which is the version mentioned in that bug 
report. Since this is glusterfs, we don't have a root squash option, so I 
suppose the only recourse is to update to a newer version of oVirt, no?

Thanks!

Tod Pike


[ovirt-users] snapshot create fails - files not owned by vdsm:kvm

2021-12-14 Thread tpike
I've got an issue with one of my oVirt 4.3.10.4 clusters. The OS is CentOS 7.7 
and storage is glusterfs. Whenever I try to create a snapshot of any of my VMs, 
I get an (eventual) error:

VDSM track04.yard.psc.edu command HSMGetAllTasksStatusesVDS failed: Could not 
acquire resource. Probably resource factory threw an exception.: ()

After a bit of investigation, I believe that the root cause is that the 
snapshot file created is not owned by vdsm:kvm. Looking at the directories in 
glusterfs, I see that some of the disk images are owned by root:root, some are 
owned by qemu:qemu. In fact, if we watch the directory for the VM, we can 
actually see that the files are owned by vdsm:kvm when they are created, then 
get changed to qemu:qemu, then eventually get changed to being owned by 
root:root. This is entirely repeatable. Needless to say, oVirt can't read the 
disk images when they are owned by root, so that explains why the snapshot is 
failing. The question, then, is why the ownership is getting changed out from 
under the creation process. Checking the gluster volume info shows:

Volume Name: engine
Type: Replicate
Volume ID: 00951055-74b5-463a-84c0-59fa03be7478
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.200.0.131:/gluster_bricks/engine/engine
Brick2: 10.200.0.134:/gluster_bricks/engine/engine
Brick3: 10.200.0.135:/gluster_bricks/engine/engine
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable

I see that the owner UID and GID are set correctly (36:36, which maps to 
vdsm:kvm). Note that snapshots worked at one point in the past - there are 
snapshot images from a year ago. Any ideas on where to look to correct this? 
Thanks!
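
For reference, this is roughly how we watched the ownership changes on one VM's 
image directory (the mount path and UUIDs below are placeholders for our actual 
storage domain and image):

# watch one VM's image directory and log ownership over time
IMGDIR=/rhev/data-center/mnt/glusterSD/server:_vmstore/SD_UUID/images/IMG_UUID
while true; do date; ls -ln "$IMGDIR"; sleep 5; done | tee /tmp/ownership.log

# list any files on the gluster mounts no longer owned by vdsm:kvm (36:36)
find /rhev/data-center/mnt/glusterSD -type f \( ! -uid 36 -o ! -gid 36 \) -ls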

Tod Pike