[ovirt-users] Re: Extended ovirt image disk and now virtual size is < 1 GiB and actual size i +1TB

2023-05-24 Thread arc
It's IBM Spectrum Scale, formerly known as GPFS. The limits for files or 
filesystems are way beyond any limit the Linux VM will hit, so that is definitely 
not an issue coming from the GPFS filesystem.

The disks are formatted as XFS within the VM, so that limit should be 500 TiB, 
right?
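
For completeness, the in-guest check goes something like this (the device name 
and mount point below are placeholders, not our actual layout):

  # does the guest see the grown block device at all?
  lsblk /dev/vdb
  # if it does, the XFS filesystem on it can be grown online
  xfs_growfs /data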


The funny thing is that we have 16 disks attached from the storage domain; some 
of them extended perfectly fine, others did not, or at least they don't seem to 
show it. We got no errors other than the volumes being modified correctly, and now 
some of them are shown in oVirt with a size of 0 GiB but are OK within the VM: 
not extendable within the VM, but the original size is still the same.
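
One way to cross-check what the image on the storage domain actually looks like, 
independent of what the engine shows, is to inspect the volume from a host (all 
UUIDs and the mount directory below are placeholders):

  # virtual size as recorded in the image file on the POSIX mount
  qemu-img info /rhev/data-center/mnt/<domain_mount>/<sd-uuid>/images/<disk-uuid>/<vol-uuid>
  # capacity as VDSM reports it
  vdsm-client Volume getInfo storagepoolID=<pool-uuid> storagedomainID=<sd-uuid> imageID=<disk-uuid> volumeID=<vol-uuid>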

If I click on the storage domain and go to Events I see a lot of the messages 
below:


May 23, 2023, 9:37:11 AM
Size of the disk 'b4chl407_T1B2_SPS115' was successfully updated to 0 GB by 
SYSTEM.
678b7d5e-10f1-4cf7-88c5-af8b0d074fcf
oVirt

May 23, 2023, 9:37:11 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
678b7d5e-10f1-4cf7-88c5-af8b0d074fcf
oVirt

May 23, 2023, 9:37:11 AM
Size of the disk 'b4chl407_T1B2_SPS109' was successfully updated to 0 GB by 
SYSTEM.
1fb5d5bb-4484-4e09-8b5c-fc98580f6dee
oVirt

May 23, 2023, 9:37:11 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
1fb5d5bb-4484-4e09-8b5c-fc98580f6dee
oVirt

May 23, 2023, 9:36:34 AM
Size of the disk 'b4chl407_T1B2_SPS110' was successfully updated to 16360 GB by 
SYSTEM.
9e53b06a-c606-4951-bf34-16c7538a297c
oVirt

May 23, 2023, 9:36:34 AM
Size of the disk 'b4chl407_T1B2_SPS113' was successfully updated to 16360 GB by 
SYSTEM.
bd645bed-60fc-402c-9745-f10aac6f283d
oVirt

May 23, 2023, 9:36:34 AM
Size of the disk 'b4chl407_T1B2_SPS109' was successfully updated to 16360 GB by 
SYSTEM.
1fb5d5bb-4484-4e09-8b5c-fc98580f6dee
oVirt

May 23, 2023, 9:36:34 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
1fb5d5bb-4484-4e09-8b5c-fc98580f6dee
oVirt

May 23, 2023, 9:36:34 AM
Size of the disk 'b4chl407_T1B2_SPS115' was successfully updated to 16360 GB by 
SYSTEM.
678b7d5e-10f1-4cf7-88c5-af8b0d074fcf
oVirt

May 23, 2023, 9:36:33 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
678b7d5e-10f1-4cf7-88c5-af8b0d074fcf
oVirt

May 23, 2023, 9:36:33 AM
Size of the disk 'b4chl407_T1B2_SPS114' was successfully updated to 16360 GB by 
SYSTEM.
e027b37a-e59b-411c-8a8c-0b0c494d8eb9
oVirt

May 23, 2023, 9:36:33 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
e027b37a-e59b-411c-8a8c-0b0c494d8eb9
oVirt

May 23, 2023, 9:36:18 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
9e53b06a-c606-4951-bf34-16c7538a297c
oVirt

May 23, 2023, 9:35:48 AM
Size of the disk 'b4chl407_T1B2_SPS106' was successfully updated to 16360 GB by 
SYSTEM.
3e051b49-b02b-43f3-909c-cebbcfbbaeaa
oVirt

May 23, 2023, 9:35:30 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
3bf54430-a381-4941-bb6a-bd574d1d71b7
oVirt

May 23, 2023, 9:35:30 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
042e4005-5ee9-4568-8097-cf7e91f746cf
oVirt

May 23, 2023, 9:35:30 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
6599b34a-2d47-40d8-9b13-32d3fa1b8c58
oVirt

May 23, 2023, 9:35:29 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
98f6e3a0-1f03-4aa5-b477-66defeaa5eb1
oVirt

May 23, 2023, 9:35:00 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
9e53b06a-c606-4951-bf34-16c7538a297c
oVirt

May 23, 2023, 9:35:00 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
bd645bed-60fc-402c-9745-f10aac6f283d
oVirt

May 23, 2023, 9:35:00 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
e027b37a-e59b-411c-8a8c-0b0c494d8eb9
oVirt

May 23, 2023, 9:34:59 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
678b7d5e-10f1-4cf7-88c5-af8b0d074fcf
oVirt

May 23, 2023, 9:34:30 AM
Size of the disk 'b4chl407_T1B2_SPS116' was successfully updated to 0 GB by 
SYSTEM.
6599b34a-2d47-40d8-9b13-32d3fa1b8c58
oVirt

May 23, 2023, 9:34:30 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
3bf54430-a381-4941-bb6a-bd574d1d71b7
oVirt

May 23, 2023, 9:34:30 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
1fb5d5bb-4484-4e09-8b5c-fc98580f6dee
oVirt

May 23, 2023, 9:34:30 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
6599b34a-2d47-40d8-9b13-32d3fa1b8c58
oVirt

May 23, 2023, 9:34:29 AM
Failed to update VM 'b4chl407' with the new volume size. VM restart is 
recommended.
042e4005-5ee9-4568-8097-cf7e91f746cf
oVirt

May 23, 2023, 9:34:01 AM
Size of the disk 

[ovirt-users] Re: Extended ovirt image disk and now virtual size is < 1 GiB and actual size i +1TB

2023-05-23 Thread arc
The title should have said "actual size is +10TB", not 1TB.


[ovirt-users] Extended ovirt image disk and now virtual size is < 1 GiB and actual size i +1TB

2023-05-23 Thread arc
Hi,

I extended 16 virtual image disks today by 1000 GiB through the GUI, and now we 
have a problem with some of them... 9 of them, when "edited", show a size of 
0 GiB and not 16360 GiB as expected.

5 of them cannot be extended within the VM either. We tried to power the VM off 
and then on again (not a reboot, but an actual power off / power on).

Any suggestions for getting out of this weird state? The disks seem to have the 
"old" size within the VM but are shown in oVirt as if they are 0 GiB.

They rely on a POSIX compliant fs storage domain, if that makes any difference, 
and they are thin-provisioned and discard-enabled.
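
If it helps with debugging, the sizes the engine has stored for a disk can also 
be read straight from the REST API (engine URL, credentials and disk id below are 
placeholders):

  # provisioned_size and actual_size as the engine sees them, in bytes
  curl -s -k -u admin@internal:<password> \
    'https://<engine-fqdn>/ovirt-engine/api/disks/<disk-uuid>' \
    | grep -E 'provisioned_size|actual_size'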

Thanks in advance.


[ovirt-users] vm has very slow write speed on posix compliant fs disk.

2023-04-19 Thread arc
Hi,

We have noticed that we get only around 30 MB/s of write speed per VM in oVirt on 
our storage domain, which is a GPFS filesystem mounted as a POSIX compliant fs. 
Read speeds are around 1.5-3.3 GB/s.

We tested directly on the mount from the host CLI with some benchmarking tools; 
writing from the host directly into the GPFS filesystem we get line speed, but 
from the VMs we don't.
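
For anyone wanting to compare, a simple direct-I/O sequential write along these 
lines (file path and size are just examples), run once on the host against the 
GPFS mount and once inside a VM on a disk from that domain, should show the same 
gap we are seeing:

  fio --name=seqwrite --rw=write --bs=1M --size=8G --direct=1 \
      --ioengine=libaio --filename=/path/to/testfile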

Does anybody have any clues as to what is going on and what to try?

Br
Andi


[ovirt-users] Re: IBM ESS remote filesystem as POSIX compliant fs?

2023-03-31 Thread arc
Just an update on this: I solved it myself.

The solution was the mount options used by oVirt.

In oVirt, to mount a GPFS remote filesystem, use these options:

Path:
/
VFS Type:
gpfs
Mount Options:
rw,dev=:,ldev=

I'm not entirely sure if the filesystem name has to be exactly the same as it is 
in the owning cluster, as we have the option to rename it in the remote cluster. 
My filesystem is named the same on both ends, so anyone needing to do this will 
have to do their own tests.

In terms of the filesystem options when granting the remote cluster access, no 
mount options other than RW are needed, so the default is OK.

Remember to chown 36:36 / before mounting in oVirt.
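
If anyone wants to verify what oVirt actually ends up running with these options, 
the full mount command is logged by supervdsm on the host, so it can be replayed 
by hand:

  # shows the exact mount invocation, including the options passed through
  grep 'mount -t gpfs' /var/log/vdsm/supervdsm.log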

Best Regards
Christiansen


[ovirt-users] IBM ESS remote filesystem as POSIX compliant fs?

2023-03-31 Thread arc
Hi all,

We are trying to mount a remote filesystem in oVirt from an IBM ESS3500, but it 
seems to be working against us.

Every time I try to mount it I get this in supervdsm.log (two different tries):

MainProcess|jsonrpc/7::DEBUG::2023-03-31 
10:55:41,808::supervdsm_server::78::SuperVdsm.ServerCallback::(wrapper) call 
mount with (, 
'/essovirt01', '/rhev/data-center/mnt/_essovirt01') {'mntOpts': 
'rw,relatime,dev=essovirt01', 'vfstype': 'gpfs', 'cgroup': None}
MainProcess|jsonrpc/7::DEBUG::2023-03-31 
10:55:41,808::commands::217::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 
/usr/bin/mount -t gpfs -o rw,relatime,dev=essovirt01 /essovirt01 
/rhev/data-center/mnt/_essovirt01 (cwd None)
MainProcess|jsonrpc/7::DEBUG::2023-03-31 
10:55:41,941::commands::230::root::(execCmd) FAILED: <err> = b'mount: 
/rhev/data-center/mnt/_essovirt01: mount(2) system call failed: Stale file 
handle.\n'; <rc> = 32
MainProcess|jsonrpc/7::ERROR::2023-03-31 
10:55:41,941::supervdsm_server::82::SuperVdsm.ServerCallback::(wrapper) Error 
in mount
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 80, in 
wrapper
res = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 119, 
in mount
cgroup=cgroup)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 263, in 
_mount
_runcmd(cmd)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 291, in 
_runcmd
raise MountError(cmd, rc, out, err)
vdsm.storage.mount.MountError: Command ['/usr/bin/mount', '-t', 'gpfs', '-o', 
'rw,relatime,dev=essovirt01', '/essovirt01', 
'/rhev/data-center/mnt/_essovirt01'] failed with rc=32 out=b'' err=b'mount: 
/rhev/data-center/mnt/_essovirt01: mount(2) system call failed: Stale file 
handle.\n'
MainProcess|mpathhealth::DEBUG::2023-03-31 
10:55:48,993::supervdsm_server::78::SuperVdsm.ServerCallback::(wrapper) call 
dmsetup_run_status with ('multipath',) {}
MainProcess|mpathhealth::DEBUG::2023-03-31 
10:55:48,993::commands::137::common.commands::(start) /usr/bin/taskset 
--cpu-list 0-63 /usr/sbin/dmsetup status --target multipath (cwd None)
MainProcess|mpathhealth::DEBUG::2023-03-31 
10:55:49,000::commands::82::common.commands::(run) SUCCESS: <err> = b''; <rc> = 0
MainProcess|mpathhealth::DEBUG::2023-03-31 
10:55:49,000::supervdsm_server::85::SuperVdsm.ServerCallback::(wrapper) return 
dmsetup_run_status with b'360050764008100e428000223: 0 629145600 
multipath 2 0 1 0 2 1 A 0 1 2 8:192 A 0 0 1 E 0 1 2 8:144 A 0 0 1 
\n360050764008100e428000229: 0 629145600 multipath 2 0 1 0 2 1 A 0 1 2 
8:208 A 0 0 1 E 0 1 2 8:160 A 0 0 1 \n360050764008100e42800022a: 0 
10485760 multipath 2 0 1 0 2 1 A 0 1 2 8:176 A 0 0 1 E 0 1 2 8:224 A 0 0 1 
\n360050764008100e428000260: 0 1048576000 multipath 2 0 1 0 2 1 A 0 2 2 
8:16 A 0 0 1 8:80 A 0 0 1 E 0 2 2 8:48 A 0 0 1 8:112 A 0 0 1 
\n360050764008100e428000261: 0 209715200 multipath 2 0 1 0 2 1 A 0 2 2 
8:64 A 0 0 1 8:128 A 0 0 1 E 0 2 2 8:32 A 0 0 1 8:96 A 0 0 1 
\n360050764008102edd80001ab: 0 8589934592 multipath 2 0 1 0 2 1 A 0 1 2 
65:128 A 0 0 1 E 0 1 2 65:32 A 0 0 1 \n360050764008102f5580001a9: 0 
8589934592 multipath 2 0 1 0 2 1 A 0 1 2 65:16 A 0 0 1 E 0 1 2 65:48 A 
 0 0 1 \n3600507640081820ce877: 0 838860800 multipath 2 0 1 0 2 1 A 
0 1 2 8:240 A 0 0 1 E 0 1 2 65:0 A 0 0 1 \n3600507680c800058d484: 0 
20971520 multipath 2 0 1 0 2 1 A 0 1 2 65:160 A 0 0 1 E 0 1 2 66:80 A 0 0 1 
\n3600507680c800058d485: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 
65:176 A 0 0 1 E 0 1 2 66:96 A 0 0 1 \n3600507680c800058d486: 0 
20971520 multipath 2 0 1 0 2 1 A 0 1 2 66:112 A 0 0 1 E 0 1 2 65:192 A 0 0 1 
\n3600507680c800058d487: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 
66:128 A 0 0 1 E 0 1 2 65:208 A 0 0 1 \n3600507680c800058d488: 0 
20971520 multipath 2 0 1 0 2 1 A 0 1 2 65:224 A 0 0 1 E 0 1 2 66:144 A 0 0 1 
\n3600507680c800058d489: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 
66:160 A 0 0 1 E 0 1 2 65:240 A 0 0 1 \n3600507680c800058d48a: 0 
20971520 multipath 2 0 1 0 2 1 A 0 1 2 66:0 A 0 0 1 E 0 1 2 66:176 A 0 0 1 
\n3600507680c800058d48b: 0 20971520 multipath 2 0 1 0 2 1
  A 0 1 2 66:192 A 0 0 1 E 0 1 2 66:16 A 0 0 1 
\n3600507680c800058d48c: 0 419430400 multipath 2 0 1 0 2 1 A 0 1 2 
65:144 A 0 0 1 E 0 1 2 66:64 A 0 0 1 \n3600507680c800058d4b1: 0 
41943040 multipath 2 0 1 0 2 1 A 0 1 2 66:208 A 0 0 1 E 0 1 2 66:32 A 0 0 1 
\n3600507680c800058d4b2: 0 41943040 multipath 2 0 1 0 2 1 A 0 1 2 
66:48 A 0 0 1 E 0 1 2 66:224 A 0 0 1 \n360050768108100c9d1aa: 0 
8589934592 multipath 2 0 1 0 2 1 A 0 1 2 65:64 A 0 0 1 E 0 1 2 65:112 A 0 0 1 
\n360050768108180ca480001a9: 0 8589934592 multipath 2 0 1 0 2 1 A 0 1 2 
65:96 A 0 0 1 E 0 1 2 65:80 A 0 0 1 \n'
MainProcess|jsonrpc/0::DEBUG::2023-03-31 

[ovirt-users] Re: upgrade dependency issues

2021-11-15 Thread arc
Hi,

We are also seeing this issue or something close to it:

Error:
 Problem 1: cannot install the best update candidate for package 
vdsm-4.40.80.6-1.el8.x86_64
  - nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by 
vdsm-4.40.90.4-1.el8.x86_64
 Problem 2: package ovirt-host-dependencies-4.4.9-2.el8.x86_64 requires vdsm >= 
4.40.90, but none of the providers can be installed
  - cannot install the best update candidate for package 
ovirt-host-dependencies-4.4.8-1.el8.x86_64
  - nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by 
vdsm-4.40.90.3-1.el8.x86_64
  - nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by 
vdsm-4.40.90.4-1.el8.x86_64
 Problem 3: package ovirt-host-4.4.9-2.el8.x86_64 requires 
ovirt-host-dependencies = 4.4.9-2.el8, but none of the providers can be 
installed
  - package ovirt-host-dependencies-4.4.9-2.el8.x86_64 requires vdsm >= 
4.40.90, but none of the providers can be installed
  - cannot install the best update candidate for package 
ovirt-host-4.4.8-1.el8.x86_64
  - nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by 
vdsm-4.40.90.3-1.el8.x86_64
  - nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by 
vdsm-4.40.90.4-1.el8.x86_64
 Problem 4: package ovirt-provider-ovn-driver-1.2.34-1.el8.noarch requires 
vdsm, but none of the providers can be installed
  - package vdsm-4.40.80.6-1.el8.x86_64 requires vdsm-http = 4.40.80.6-1.el8, 
but none of the providers can be installed
  - package vdsm-4.40.17-1.el8.x86_64 requires vdsm-http = 4.40.17-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.18-1.el8.x86_64 requires vdsm-http = 4.40.18-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.19-1.el8.x86_64 requires vdsm-http = 4.40.19-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.20-1.el8.x86_64 requires vdsm-http = 4.40.20-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.21-1.el8.x86_64 requires vdsm-http = 4.40.21-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.22-1.el8.x86_64 requires vdsm-http = 4.40.22-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.26.3-1.el8.x86_64 requires vdsm-http = 4.40.26.3-1.el8, 
but none of the providers can be installed
  - package vdsm-4.40.30-1.el8.x86_64 requires vdsm-http = 4.40.30-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.31-1.el8.x86_64 requires vdsm-http = 4.40.31-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.32-1.el8.x86_64 requires vdsm-http = 4.40.32-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.33-1.el8.x86_64 requires vdsm-http = 4.40.33-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.34-1.el8.x86_64 requires vdsm-http = 4.40.34-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.35-1.el8.x86_64 requires vdsm-http = 4.40.35-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.35.1-1.el8.x86_64 requires vdsm-http = 4.40.35.1-1.el8, 
but none of the providers can be installed
  - package vdsm-4.40.36-1.el8.x86_64 requires vdsm-http = 4.40.36-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.37-1.el8.x86_64 requires vdsm-http = 4.40.37-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.38-1.el8.x86_64 requires vdsm-http = 4.40.38-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.39-1.el8.x86_64 requires vdsm-http = 4.40.39-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.40-1.el8.x86_64 requires vdsm-http = 4.40.40-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.50.8-1.el8.x86_64 requires vdsm-http = 4.40.50.8-1.el8, 
but none of the providers can be installed
  - package vdsm-4.40.50.9-1.el8.x86_64 requires vdsm-http = 4.40.50.9-1.el8, 
but none of the providers can be installed
  - package vdsm-4.40.60.6-1.el8.x86_64 requires vdsm-http = 4.40.60.6-1.el8, 
but none of the providers can be installed
  - package vdsm-4.40.60.7-1.el8.x86_64 requires vdsm-http = 4.40.60.7-1.el8, 
but none of the providers can be installed
  - package vdsm-4.40.70.6-1.el8.x86_64 requires vdsm-http = 4.40.70.6-1.el8, 
but none of the providers can be installed
  - package vdsm-4.40.80.5-1.el8.x86_64 requires vdsm-http = 4.40.80.5-1.el8, 
but none of the providers can be installed
  - package vdsm-4.40.16-1.el8.x86_64 requires ovirt-imageio-common = 2.0.6, 
but none of the providers can be installed
  - cannot install both vdsm-http-4.40.90.4-1.el8.noarch and 
vdsm-http-4.40.80.6-1.el8.noarch
  - cannot install both vdsm-http-4.40.90.4-1.el8.noarch and 
vdsm-http-4.40.17-1.el8.noarch
  - cannot install both vdsm-http-4.40.90.4-1.el8.noarch and 
vdsm-http-4.40.18-1.el8.noarch
  - cannot install both vdsm-http-4.40.90.4-1.el8.noarch and 
vdsm-http-4.40.19-1.el8.noarch
  - cannot install both vdsm-http-4.40.90.4-1.el8.noarch and
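
In case it helps anyone else hitting this, a couple of diagnostic commands (not a 
fix, just where we would start looking) to see where the required libvirt is 
supposed to come from:

  # which repository, if any, provides the libvirt version vdsm wants
  dnf repoquery --whatprovides 'libvirt-daemon-kvm >= 7.6.0-2'
  # which virt module streams are available or enabled on the host
  dnf module list virt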