[ovirt-users] Re: Import a snapshot of an iSCSI Domain

2022-03-03 Thread Vinícius Ferrão via Users
Hi again, I don’t know if it will be possible to import the storage domain due 
to conflicts with the UUIDs of the LVM devices. I’ve tried to issue a 
vgimportclone to change the UUIDs and import the volume, but it still does not 
show up in oVirt.
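
For reference, this is roughly the invocation I tried; the device path and the 
new VG name below are illustrative, not the exact ones from my setup:

[root@rhvh5 ~]# vgimportclone --basevgname imported-snap /dev/mapper/36589cfc00db9cf56949c63d338ef
[root@rhvh5 ~]# vgscan
[root@rhvh5 ~]# lvs imported-snap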

I don’t know how to mount the iSCSI volume to recover the data. The data is 
there, but it’s extremely difficult to get at.

Any ideas?

Thanks.


> On 3 Mar 2022, at 20:56, Vinícius Ferrão  wrote:
> 
> I think I’ve found the root cause, and it’s the LVM inside the iSCSI volume:
> 
> [root@rhvh5 ~]# pvscan 
>  WARNING: Not using device /dev/mapper/36589cfc00db9cf56949c63d338ef for 
> PV fTIrnd-gnz2-dI8i-DesK-vIqs-E1BK-mvxtha.
>  WARNING: PV fTIrnd-gnz2-dI8i-DesK-vIqs-E1BK-mvxtha prefers device 
> /dev/mapper/36589cfc006f6c96763988802912b because device is used by LV.
>  PV /dev/mapper/36589cfc006f6c96763988802912b   VG 
> 9377d243-2c18-4620-995f-5fc680e7b4f3   lvm2 [<10.00 TiB / 7.83 TiB free]
>  PV /dev/mapper/36589cfc00a1b985d3908c07e41ad   VG 
> 650b0003-7eec-4fa5-85ea-c019f6408248   lvm2 [199.62 GiB / <123.88 GiB free]
>  PV /dev/mapper/3600605b00805d8a01c2180fd0d8d8dad3   VG rhvh_rhvh5
>  lvm2 [<277.27 GiB / 54.55 GiB free]
>  Total: 3 [<10.47 TiB] / in use: 3 [<10.47 TiB] / in no VG: 0 [0   ]
> 
> The device that’s not being used is the snapshot. Is there a way to change 
> the ID of the device so I can import the data domain?
> 
> Thanks.
> 
>> On 3 Mar 2022, at 20:21, Vinícius Ferrão via Users  wrote:
>> 
>> Hello,
>> 
>> I need to import an old snapshot of my Data domain but oVirt does not find 
>> the snapshot version when importing on the web interface.
>> 
>> To be clear, I’ve mounted a snapshot on my storage and exported it over 
>> iSCSI. I was expecting to be able to import it on the engine.
>> 
>> On the web interface, "Import Pre-Configured Domain" finds the relevant 
>> IQN, but it does not show up as a target.
>> 
>> Any ideas?
>> 
>> 
> 



[ovirt-users] Re: Import a snapshot of an iSCSI Domain

2022-03-03 Thread Vinícius Ferrão via Users
I think I’ve found the root cause, and it’s the LVM inside the iSCSI volume:

[root@rhvh5 ~]# pvscan 
  WARNING: Not using device /dev/mapper/36589cfc00db9cf56949c63d338ef for 
PV fTIrnd-gnz2-dI8i-DesK-vIqs-E1BK-mvxtha.
  WARNING: PV fTIrnd-gnz2-dI8i-DesK-vIqs-E1BK-mvxtha prefers device 
/dev/mapper/36589cfc006f6c96763988802912b because device is used by LV.
  PV /dev/mapper/36589cfc006f6c96763988802912b   VG 
9377d243-2c18-4620-995f-5fc680e7b4f3   lvm2 [<10.00 TiB / 7.83 TiB free]
  PV /dev/mapper/36589cfc00a1b985d3908c07e41ad   VG 
650b0003-7eec-4fa5-85ea-c019f6408248   lvm2 [199.62 GiB / <123.88 GiB free]
  PV /dev/mapper/3600605b00805d8a01c2180fd0d8d8dad3   VG rhvh_rhvh5 
lvm2 [<277.27 GiB / 54.55 GiB free]
  Total: 3 [<10.47 TiB] / in use: 3 [<10.47 TiB] / in no VG: 0 [0   ]

The device that’s not being used is the snapshot. Is there a way to change the 
ID of the device so I can import the data domain?

Thanks.

> On 3 Mar 2022, at 20:21, Vinícius Ferrão via Users  wrote:
> 
> Hello,
> 
> I need to import an old snapshot of my Data domain but oVirt does not find 
> the snapshot version when importing on the web interface.
> 
> To be clear, I’ve mounted a snapshot on my storage and exported it over iSCSI. 
> I was expecting to be able to import it on the engine.
> 
> On the web interface, "Import Pre-Configured Domain" finds the relevant IQN, 
> but it does not show up as a target.
> 
> Any ideas?
> 
> 



[ovirt-users] Import a snapshot of an iSCSI Domain

2022-03-03 Thread Vinícius Ferrão via Users
Hello,

I need to import an old snapshot of my Data domain but oVirt does not find the 
snapshot version when importing on the web interface.

To be clear, I’ve mounted a snapshot on my storage and exported it over iSCSI. 
I was expecting to be able to import it on the engine.

On the web interface, "Import Pre-Configured Domain" finds the relevant IQN, 
but it does not show up as a target.
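
For reference, the kind of manual discovery and login I can try from one of 
the hosts; the portal address and IQN below are placeholders, not my real 
ones:

[root@rhvh5 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.100.10:3260
[root@rhvh5 ~]# iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:snapshot -p 192.168.100.10:3260 --login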

Any ideas?




[ovirt-users] Re: Multiple dependencies unresolved

2022-03-03 Thread John Florian
Ok, thanks for the info.  I'm guessing the solution is to make 
ovirt-hosted-engine-setup compatible with ansible > 2.10.0 (or at least 
2.12.2) so that c8s hosts can just upgrade to their newer ansible.  I 
don't see a BZ for this filed against ovirt-hosted-engine-setup, so 
maybe few even know about this issue?
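
In the meantime, one stopgap I'm considering (assuming the conflicting 
ansible-core really does come in via the AppStream repo) would be to exclude 
it there until the oVirt packages catch up; untested on my end:

[root@orthosie ~]# dnf config-manager --save --setopt=appstream.exclude=ansible-core
[root@orthosie ~]# dnf upgrade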


John Florian

On 2022-03-03 15:41, Klaas Demter wrote:


As far as I know there is no solution yet, but the "problem" is not 
with the ovirt-release repos: c8s now ships ansible (in a version that 
is not tested with oVirt); c8s did not do that in the past :)


On 3/3/22 20:25, John Florian wrote:


So that explains what happened but we still don't have a solution to 
make this go away, correct?


FTR, I did try `dnf reinstall --disablerepo='*' 
https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm` but 
things are no different afterwards.


John Florian
On 2022-03-03 11:37, Klaas Demter wrote:


https://lists.ovirt.org/archives/list/users@ovirt.org/message/OANKECW6C4SM6USKJP7DJIAHUN5RALFI/


On 3/3/22 17:20, John Florian wrote:


FWIW, I'm getting what appears to be the same dependency errors on 
my oVirt Hosts that were installed new on C8S not all that long 
ago.  I have to do upgrades manually with `dnf upgrade --no-best` 
otherwise I get:


[root@orthosie ~]# dnf upgrade
Last metadata expiration check: 1:24:59 ago on Thu 03 Mar 2022 
09:52:40 AM EST.

Error:
 Problem: package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch 
requires ansible, but none of the providers can be installed
  - package ansible-2.9.27-2.el8.noarch conflicts with ansible-core 
> 2.11.0 provided by ansible-core-2.12.2-2.el8.x86_64
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.27-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.27-1.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.17-1.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.18-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.20-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.21-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.23-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.24-2.el8.noarch
  - cannot install the best update candidate for package 
cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
  - cannot install the best update candidate for package 
ansible-2.9.27-2.el8.noarch
  - package ansible-2.9.20-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-2.9.16-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-2.9.19-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-2.9.23-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-1:2.9.27-4.el8.noarch is filtered out by 
exclude filtering
(try to add '--allowerasing' to command line to replace conflicting 
packages or '--skip-broken' to skip uninstallable packages or 
'--nobest' to use not only best candidate packages)

[root@orthosie ~]# dnf repolist
repo id repo name
appstream CentOS Stream 8 - AppStream
baseos CentOS Stream 8 - BaseOS
dell-system-update_dependent dell-system-update_dependent
dell-system-update_independent dell-system-update_independent
extras CentOS Stream 8 - Extras
ovirt-4.4 Latest oVirt 4.4 Release
ovirt-4.4-centos-advanced-virtualization CentOS-8 - Advanced 
Virtualization

ovirt-4.4-centos-ceph-pacific CentOS-8-stream - Ceph Pacific
ovirt-4.4-centos-gluster8 CentOS-8-stream - Gluster 8
ovirt-4.4-centos-nfv-openvswitch CentOS-8 - NFV OpenvSwitch
ovirt-4.4-centos-openstack-victoria CentOS-8 - OpenStack victoria
ovirt-4.4-centos-opstools CentOS-8 - OpsTools - collectd
ovirt-4.4-centos-opstools-vault CentOS-8 - OpsTools - collectd - vault
ovirt-4.4-centos-ovirt44 CentOS-8 - oVirt 4.4
ovirt-4.4-copr:copr.fedorainfracloud.org:sac:gluster-ansible Copr 
repo for gluster-ansible owned by sac
ovirt-4.4-copr:copr.fedorainfracloud.org:sbonazzo:EL8_collection 
Copr repo for EL8_collection owned by sbonazzo

ovirt-4.4-epel Extra Packages for Enterprise Linux 8 - x86_64
ovirt-4.4-virtio-win-latest virtio-win builds roughly matching what 
will be shipped in upcoming RHEL

powertools CentOS Stream 8 - PowerTools

John Florian
On 2022-02-21 08:05, Andrea Chierici wrote:

Dear all,
lately the engine started notifying me about some "errors":

"Failed to check for available updates on host XYZ with message 
'Task Ensure Python3 is installed for CentOS/RHEL8 hosts failed to 
execute. Please check logs for more details"


I do understand this is not something that can impact my cluster 
stability, since it's only a matter of checking updates; anyway, it 
annoys me a lot.

[ovirt-users] Re: Multiple dependencies unresolved

2022-03-03 Thread Klaas Demter
As far as I know there is no solution yet, but the "problem" is not with 
the ovirt-release repos: c8s now ships ansible (in a version that is not 
tested with oVirt); c8s did not do that in the past :)


On 3/3/22 20:25, John Florian wrote:


So that explains what happened but we still don't have a solution to 
make this go away, correct?


FTR, I did try `dnf reinstall --disablerepo='*' 
https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm` but 
things are no different afterwards.


John Florian
On 2022-03-03 11:37, Klaas Demter wrote:


https://lists.ovirt.org/archives/list/users@ovirt.org/message/OANKECW6C4SM6USKJP7DJIAHUN5RALFI/


On 3/3/22 17:20, John Florian wrote:


FWIW, I'm getting what appears to be the same dependency errors on 
my oVirt Hosts that were installed new on C8S not all that long 
ago.  I have to do upgrades manually with `dnf upgrade --no-best` 
otherwise I get:


[root@orthosie ~]# dnf upgrade
Last metadata expiration check: 1:24:59 ago on Thu 03 Mar 2022 
09:52:40 AM EST.

Error:
 Problem: package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch 
requires ansible, but none of the providers can be installed
  - package ansible-2.9.27-2.el8.noarch conflicts with ansible-core 
> 2.11.0 provided by ansible-core-2.12.2-2.el8.x86_64
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.27-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.27-1.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.17-1.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.18-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.20-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.21-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.23-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.24-2.el8.noarch
  - cannot install the best update candidate for package 
cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
  - cannot install the best update candidate for package 
ansible-2.9.27-2.el8.noarch
  - package ansible-2.9.20-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-2.9.16-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-2.9.19-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-2.9.23-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-1:2.9.27-4.el8.noarch is filtered out by exclude 
filtering
(try to add '--allowerasing' to command line to replace conflicting 
packages or '--skip-broken' to skip uninstallable packages or 
'--nobest' to use not only best candidate packages)

[root@orthosie ~]# dnf repolist
repo id repo name
appstream CentOS Stream 8 - AppStream
baseos CentOS Stream 8 - BaseOS
dell-system-update_dependent dell-system-update_dependent
dell-system-update_independent dell-system-update_independent
extras CentOS Stream 8 - Extras
ovirt-4.4 Latest oVirt 4.4 Release
ovirt-4.4-centos-advanced-virtualization CentOS-8 - Advanced 
Virtualization

ovirt-4.4-centos-ceph-pacific CentOS-8-stream - Ceph Pacific
ovirt-4.4-centos-gluster8 CentOS-8-stream - Gluster 8
ovirt-4.4-centos-nfv-openvswitch CentOS-8 - NFV OpenvSwitch
ovirt-4.4-centos-openstack-victoria CentOS-8 - OpenStack victoria
ovirt-4.4-centos-opstools CentOS-8 - OpsTools - collectd
ovirt-4.4-centos-opstools-vault CentOS-8 - OpsTools - collectd - vault
ovirt-4.4-centos-ovirt44 CentOS-8 - oVirt 4.4
ovirt-4.4-copr:copr.fedorainfracloud.org:sac:gluster-ansible Copr 
repo for gluster-ansible owned by sac
ovirt-4.4-copr:copr.fedorainfracloud.org:sbonazzo:EL8_collection 
Copr repo for EL8_collection owned by sbonazzo

ovirt-4.4-epel Extra Packages for Enterprise Linux 8 - x86_64
ovirt-4.4-virtio-win-latest virtio-win builds roughly matching what 
will be shipped in upcoming RHEL

powertools CentOS Stream 8 - PowerTools

John Florian
On 2022-02-21 08:05, Andrea Chierici wrote:

Dear all,
lately the engine started notifying me about some "errors":

"Failed to check for available updates on host XYZ with message 
'Task Ensure Python3 is installed for CentOS/RHEL8 hosts failed to 
execute. Please check logs for more details"


I do understand this is not something that can impact my cluster 
stability, since it's only a matter of checking updates; anyway, it 
annoys me a lot.
I checked the logs and apparently the issue is related to some 
repos that are missing/unresolved.


Right now on my hosts I have these repos:

ovirt-release44-4.4.8.3-1.el8.noarch
epel-release-8-13.el8.noarch
centos-stream-release-8.6-1.el8.noarch
puppet5-release-5.0.0-5.el8.noarch

The problems come from:
Error: Failed to download metadata for repo 
'ovirt-4.4-centos-gluster8': Cannot prepare internal mirrorlist: No 
URLs in mirrorlist

[ovirt-users] Re: oVirt + TrueNAS: Unable to create iSCSI domain - I am missing something obvious

2022-03-03 Thread David Johnson
I'm looking at a 50 TB system right now, and we have had discussions about
petabyte systems.

*David Johnson*
*Director of Development, Maxis Technology*
844.696.2947 ext 702 (o) | 479.531.3590 (c)






On Thu, Mar 3, 2022 at 1:45 PM Strahil Nikolov 
wrote:

> How much is 'massive amounts of data' ?
>
> Best Regards,
> Strahil Nikolov
>
> On Thu, Mar 3, 2022 at 15:58, David Johnson
>  wrote:


[ovirt-users] Re: oVirt + TrueNAS: Unable to create iSCSI domain - I am missing something obvious

2022-03-03 Thread Strahil Nikolov via Users
How much is 'massive amounts of data'?

Best Regards,
Strahil Nikolov

On Thu, Mar 3, 2022 at 15:58, David Johnson wrote:


[ovirt-users] Re: Multiple dependencies unresolved

2022-03-03 Thread John Florian
So that explains what happened but we still don't have a solution to 
make this go away, correct?


FTR, I did try `dnf reinstall --disablerepo='*' 
https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm` but things 
are no different afterwards.


John Florian

On 2022-03-03 11:37, Klaas Demter wrote:


https://lists.ovirt.org/archives/list/users@ovirt.org/message/OANKECW6C4SM6USKJP7DJIAHUN5RALFI/


On 3/3/22 17:20, John Florian wrote:


FWIW, I'm getting what appears to be the same dependency errors on my 
oVirt Hosts that were installed new on C8S not all that long ago.  I 
have to do upgrades manually with `dnf upgrade --no-best` otherwise I 
get:


[root@orthosie ~]# dnf upgrade
Last metadata expiration check: 1:24:59 ago on Thu 03 Mar 2022 
09:52:40 AM EST.

Error:
 Problem: package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch 
requires ansible, but none of the providers can be installed
  - package ansible-2.9.27-2.el8.noarch conflicts with ansible-core > 
2.11.0 provided by ansible-core-2.12.2-2.el8.x86_64
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.27-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.27-1.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.17-1.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.18-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.20-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.21-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.23-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.24-2.el8.noarch
  - cannot install the best update candidate for package 
cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
  - cannot install the best update candidate for package 
ansible-2.9.27-2.el8.noarch
  - package ansible-2.9.20-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-2.9.16-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-2.9.19-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-2.9.23-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-1:2.9.27-4.el8.noarch is filtered out by exclude 
filtering
(try to add '--allowerasing' to command line to replace conflicting 
packages or '--skip-broken' to skip uninstallable packages or 
'--nobest' to use not only best candidate packages)

[root@orthosie ~]# dnf repolist
repo id repo name
appstream CentOS Stream 8 - AppStream
baseos CentOS Stream 8 - BaseOS
dell-system-update_dependent dell-system-update_dependent
dell-system-update_independent dell-system-update_independent
extras CentOS Stream 8 - Extras
ovirt-4.4 Latest oVirt 4.4 Release
ovirt-4.4-centos-advanced-virtualization CentOS-8 - Advanced 
Virtualization

ovirt-4.4-centos-ceph-pacific CentOS-8-stream - Ceph Pacific
ovirt-4.4-centos-gluster8 CentOS-8-stream - Gluster 8
ovirt-4.4-centos-nfv-openvswitch CentOS-8 - NFV OpenvSwitch
ovirt-4.4-centos-openstack-victoria CentOS-8 - OpenStack victoria
ovirt-4.4-centos-opstools CentOS-8 - OpsTools - collectd
ovirt-4.4-centos-opstools-vault CentOS-8 - OpsTools - collectd - vault
ovirt-4.4-centos-ovirt44 CentOS-8 - oVirt 4.4
ovirt-4.4-copr:copr.fedorainfracloud.org:sac:gluster-ansible Copr 
repo for gluster-ansible owned by sac
ovirt-4.4-copr:copr.fedorainfracloud.org:sbonazzo:EL8_collection Copr 
repo for EL8_collection owned by sbonazzo

ovirt-4.4-epel Extra Packages for Enterprise Linux 8 - x86_64
ovirt-4.4-virtio-win-latest virtio-win builds roughly matching what 
will be shipped in upcoming RHEL

powertools CentOS Stream 8 - PowerTools

John Florian
On 2022-02-21 08:05, Andrea Chierici wrote:

Dear all,
lately the engine started notifying me about some "errors":

"Failed to check for available updates on host XYZ with message 
'Task Ensure Python3 is installed for CentOS/RHEL8 hosts failed to 
execute. Please check logs for more details"


I do understand this is not something that can impact my cluster 
stability, since it's only a matter of checking updates; anyway, it 
annoys me a lot.
I checked the logs and apparently the issue is related to some repos 
that are missing/unresolved.


Right now on my hosts I have these repos:

ovirt-release44-4.4.8.3-1.el8.noarch
epel-release-8-13.el8.noarch
centos-stream-release-8.6-1.el8.noarch
puppet5-release-5.0.0-5.el8.noarch

The problems come from:
Error: Failed to download metadata for repo 
'ovirt-4.4-centos-gluster8': Cannot prepare internal mirrorlist: No 
URLs in mirrorlist
Error: Failed to download metadata for repo 
'ovirt-4.4-centos-opstools': Cannot prepare internal mirrorlist: No 
URLs in mirrorlist
Error: Failed to download metadata for repo 
'ovirt-4.4-openstack-victoria': Cannot download repomd.xml: Cannot 
download repodata/repomd.xml: All mirrors were tried

[ovirt-users] Re: Multiple dependencies unresolved

2022-03-03 Thread Klaas Demter

https://lists.ovirt.org/archives/list/users@ovirt.org/message/OANKECW6C4SM6USKJP7DJIAHUN5RALFI/


On 3/3/22 17:20, John Florian wrote:


FWIW, I'm getting what appears to be the same dependency errors on my 
oVirt Hosts that were installed new on C8S not all that long ago.  I 
have to do upgrades manually with `dnf upgrade --no-best` otherwise I get:


[root@orthosie ~]# dnf upgrade
Last metadata expiration check: 1:24:59 ago on Thu 03 Mar 2022 
09:52:40 AM EST.

Error:
 Problem: package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch requires 
ansible, but none of the providers can be installed
  - package ansible-2.9.27-2.el8.noarch conflicts with ansible-core > 
2.11.0 provided by ansible-core-2.12.2-2.el8.x86_64
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.27-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.27-1.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.17-1.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.18-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.20-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.21-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.23-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 
2.10.0 provided by ansible-2.9.24-2.el8.noarch
  - cannot install the best update candidate for package 
cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
  - cannot install the best update candidate for package 
ansible-2.9.27-2.el8.noarch
  - package ansible-2.9.20-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-2.9.16-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-2.9.19-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-2.9.23-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-1:2.9.27-4.el8.noarch is filtered out by exclude 
filtering
(try to add '--allowerasing' to command line to replace conflicting 
packages or '--skip-broken' to skip uninstallable packages or 
'--nobest' to use not only best candidate packages)

[root@orthosie ~]# dnf repolist
repo id repo name
appstream CentOS Stream 8 - AppStream
baseos CentOS Stream 8 - BaseOS
dell-system-update_dependent dell-system-update_dependent
dell-system-update_independent dell-system-update_independent
extras CentOS Stream 8 - Extras
ovirt-4.4 Latest oVirt 4.4 Release
ovirt-4.4-centos-advanced-virtualization CentOS-8 - Advanced 
Virtualization

ovirt-4.4-centos-ceph-pacific CentOS-8-stream - Ceph Pacific
ovirt-4.4-centos-gluster8 CentOS-8-stream - Gluster 8
ovirt-4.4-centos-nfv-openvswitch CentOS-8 - NFV OpenvSwitch
ovirt-4.4-centos-openstack-victoria CentOS-8 - OpenStack victoria
ovirt-4.4-centos-opstools CentOS-8 - OpsTools - collectd
ovirt-4.4-centos-opstools-vault CentOS-8 - OpsTools - collectd - vault
ovirt-4.4-centos-ovirt44 CentOS-8 - oVirt 4.4
ovirt-4.4-copr:copr.fedorainfracloud.org:sac:gluster-ansible Copr repo 
for gluster-ansible owned by sac
ovirt-4.4-copr:copr.fedorainfracloud.org:sbonazzo:EL8_collection Copr 
repo for EL8_collection owned by sbonazzo

ovirt-4.4-epel Extra Packages for Enterprise Linux 8 - x86_64
ovirt-4.4-virtio-win-latest virtio-win builds roughly matching what 
will be shipped in upcoming RHEL

powertools CentOS Stream 8 - PowerTools

John Florian
On 2022-02-21 08:05, Andrea Chierici wrote:

Dear all,
lately the engine started notifying me about some "errors":

"Failed to check for available updates on host XYZ with message 'Task 
Ensure Python3 is installed for CentOS/RHEL8 hosts failed to execute. 
Please check logs for more details"


I do understand this is not something that can impact my cluster 
stability, since it's only a matter of checking updates; anyway, it 
annoys me a lot.
I checked the logs and apparently the issue is related to some repos 
that are missing/unresolved.


Right now on my hosts I have these repos:

ovirt-release44-4.4.8.3-1.el8.noarch
epel-release-8-13.el8.noarch
centos-stream-release-8.6-1.el8.noarch
puppet5-release-5.0.0-5.el8.noarch

The problems come from:
Error: Failed to download metadata for repo 
'ovirt-4.4-centos-gluster8': Cannot prepare internal mirrorlist: No 
URLs in mirrorlist
Error: Failed to download metadata for repo 
'ovirt-4.4-centos-opstools': Cannot prepare internal mirrorlist: No 
URLs in mirrorlist
Error: Failed to download metadata for repo 
'ovirt-4.4-openstack-victoria': Cannot download repomd.xml: Cannot 
download repodata/repomd.xml: All mirrors were tried


If I disable these repos "yum update" can finish but then I get a 
large number of unresolved dependencies and "problems":

Error:
 Problem 1: package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch 
requires ansible, but none of the providers can be installed

[ovirt-users] Re: Multiple dependencies unresolved

2022-03-03 Thread John Florian
FWIW, I'm getting what appears to be the same dependency errors on my 
oVirt Hosts that were installed new on C8S not all that long ago.  I 
have to do upgrades manually with `dnf upgrade --no-best` otherwise I get:


[root@orthosie ~]# dnf upgrade
Last metadata expiration check: 1:24:59 ago on Thu 03 Mar 2022 09:52:40 
AM EST.

Error:
 Problem: package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch requires 
ansible, but none of the providers can be installed
  - package ansible-2.9.27-2.el8.noarch conflicts with ansible-core > 
2.11.0 provided by ansible-core-2.12.2-2.el8.x86_64
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 
provided by ansible-2.9.27-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 
provided by ansible-2.9.27-1.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 
provided by ansible-2.9.17-1.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 
provided by ansible-2.9.18-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 
provided by ansible-2.9.20-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 
provided by ansible-2.9.21-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 
provided by ansible-2.9.23-2.el8.noarch
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0 
provided by ansible-2.9.24-2.el8.noarch
  - cannot install the best update candidate for package 
cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
  - cannot install the best update candidate for package 
ansible-2.9.27-2.el8.noarch
  - package ansible-2.9.20-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-2.9.16-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-2.9.19-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-2.9.23-1.el8.noarch is filtered out by exclude 
filtering
  - package ansible-1:2.9.27-4.el8.noarch is filtered out by exclude 
filtering
(try to add '--allowerasing' to command line to replace conflicting 
packages or '--skip-broken' to skip uninstallable packages or '--nobest' 
to use not only best candidate packages)

[root@orthosie ~]# dnf repolist
repo id repo name
appstream CentOS Stream 8 - AppStream
baseos CentOS Stream 8 - BaseOS
dell-system-update_dependent dell-system-update_dependent
dell-system-update_independent dell-system-update_independent
extras CentOS Stream 8 - Extras
ovirt-4.4 Latest oVirt 4.4 Release
ovirt-4.4-centos-advanced-virtualization CentOS-8 - Advanced Virtualization
ovirt-4.4-centos-ceph-pacific CentOS-8-stream - Ceph Pacific
ovirt-4.4-centos-gluster8 CentOS-8-stream - Gluster 8
ovirt-4.4-centos-nfv-openvswitch CentOS-8 - NFV OpenvSwitch
ovirt-4.4-centos-openstack-victoria CentOS-8 - OpenStack victoria
ovirt-4.4-centos-opstools CentOS-8 - OpsTools - collectd
ovirt-4.4-centos-opstools-vault CentOS-8 - OpsTools - collectd - vault
ovirt-4.4-centos-ovirt44 CentOS-8 - oVirt 4.4
ovirt-4.4-copr:copr.fedorainfracloud.org:sac:gluster-ansible Copr repo 
for gluster-ansible owned by sac
ovirt-4.4-copr:copr.fedorainfracloud.org:sbonazzo:EL8_collection Copr 
repo for EL8_collection owned by sbonazzo

ovirt-4.4-epel Extra Packages for Enterprise Linux 8 - x86_64
ovirt-4.4-virtio-win-latest virtio-win builds roughly matching what will 
be shipped in upcoming RHEL

powertools CentOS Stream 8 - PowerTools

John Florian

On 2022-02-21 08:05, Andrea Chierici wrote:

Dear all,
lately the engine started notifying me about some "errors":

"Failed to check for available updates on host XYZ with message 'Task 
Ensure Python3 is installed for CentOS/RHEL8 hosts failed to execute. 
Please check logs for more details"


I do understand this is not something that can impact my cluster 
stability, since it's only a matter of checking updates; anyway, it 
annoys me a lot.
I checked the logs and apparently the issue is related to some repos 
that are missing/unresolved.


Right now on my hosts I have these repos:

ovirt-release44-4.4.8.3-1.el8.noarch
epel-release-8-13.el8.noarch
centos-stream-release-8.6-1.el8.noarch
puppet5-release-5.0.0-5.el8.noarch

The problems come from:
Error: Failed to download metadata for repo 
'ovirt-4.4-centos-gluster8': Cannot prepare internal mirrorlist: No 
URLs in mirrorlist
Error: Failed to download metadata for repo 
'ovirt-4.4-centos-opstools': Cannot prepare internal mirrorlist: No 
URLs in mirrorlist
Error: Failed to download metadata for repo 
'ovirt-4.4-openstack-victoria': Cannot download repomd.xml: Cannot 
download repodata/repomd.xml: All mirrors were tried


If I disable these repos "yum update" can finish but then I get a 
large number of unresolved dependencies and "problems":

Error:
 Problem 1: package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch 
requires ansible, but none of the providers can be installed
  - package ansible-core-2.12.2-2.el8.x86_64 obsoletes 

[ovirt-users] Re: [External] : Does memory ballooning work if memory overcommit is disabled in the cluster

2022-03-03 Thread Marcos Sungaila
Hi Sohail,

Memory ballooning is enabled by checking the "Enable Memory Balloon 
Optimization" checkbox, which helps preserve VM performance.
If left unchecked, any pressure on host memory may trigger swap usage inside 
VMs.
You can leave it unchecked if you have more memory than is assigned to all VMs.
Consider that memory ballooning and KSM are only used when a host reaches a 
certain level of memory use. They are designed to preserve your hosts' and VMs' 
performance by avoiding bottlenecks.
Even without setting an optimization (overcommit) policy for Servers or 
Desktops, enabling Memory Ballooning and KSM can keep your environment's 
performance at its best.
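
If you want to double-check that the balloon device is actually exposed to a 
guest, you can look for the virtio balloon PCI device inside the VM; the 
output below is only an example and the slot number will vary:

# lspci | grep -i balloon
00:08.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon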

Regards,
Marcos

-Original Message-
From: sohail_akht...@hotmail.com  
Sent: Thursday, 3 March 2022 09:34
To: users@ovirt.org
Subject: [External] : [ovirt-users] Does memory ballooning work if memory 
overcommit is disabled in the cluster

Hi All,

We have an oVirt 4.4 environment running. In the cluster's Optimization settings 
we have checked the "None - Disable memory overcommit" option under Memory 
Optimization, but the Memory Balloon checkbox is enabled. My understanding is 
that ballooning only works when memory overcommit is enabled. If that is true, 
then this checkbox should be disabled when we are not overcommitting memory. Or 
does memory ballooning still work even if we disable memory overcommit? 
According to the link below, ballooning works when memory overcommit is enabled.

https://lists.ovirt.org/pipermail/users/2017-October/084675.html

Please let me know if any further information is required.

Many thanks. 

Regards
Sohail


[ovirt-users] Re: oVirt + TrueNAS: Unable to create iSCSI domain - I am missing something obvious

2022-03-03 Thread David Johnson
Thank you both. This is certainly good information.

If I'm going to jump to a different backing store, now is the time to do
it. But I'm not going to jump simply for the sake of jumping.

The NFS issue was reproduced at Red Hat on a storage appliance other than
TrueNAS (QNAP, I believe), so I am leaning away from NFS, at least until I
see something that indicates it has been fixed.

As I read these notes, and others on a TrueNAS forum, it is beginning to
look like TrueNAS may not necessarily be the best choice of backing
store for oVirt in my environment. TrueNAS iSCSI (more specifically, the
underlying ZFS) struggles with heavy write usage at small block sizes,
and the entire intent of my cluster is to write massive amounts of data as
fast as possible.

My storage server is a SuperMicro. Today it runs TrueNAS, but I can change
that if the benefit is there. I note that OpenStack prefers Cinder.

More research coming ...
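
In the meantime, one quick check I can run from a host is which logical and
physical block sizes the LUN actually reports; the device name below is just
an example. If the extent were re-exported with 512-byte emulation, the first
value should read 512:

[root@host ~]# cat /sys/block/sdb/queue/logical_block_size
4096
[root@host ~]# cat /sys/block/sdb/queue/physical_block_size
4096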

On Wed, Mar 2, 2022, 9:28 PM Vinícius Ferrão 
wrote:

> David do yourself a favor a move away from NFS on TrueNAS for VM hosting.
>
> As a personal experience hosting VMs on NFS may cause your entire
> infrastructure to be down if you change something on TrueNAS, even adding a
> new NFS share may trigger a NFS server restart and suddenly all your VMs
> will be trashed. Emphasis on _may_.
>
> I’ve been using the product since FreeNAS 8, which was 2012 and that’s
> observed behavior.
>
> Also oVirt has its quirks with iSCSI, mainly on MPIO (Multipath I/O) but
> as for the combination with TrueNAS just stick with iSCSI.
>
> Sent from my iPhone
>
> On 3 Mar 2022, at 00:02, David Johnson 
> wrote:
>
> 
> The cluster is on NFS today, with a 500 GB NVMe SLOG. Under heavy IO the
> VMs are thrown into a paused state instead of iowait. A prior email chain
> identified a code error in qemu, with a repro using nothing more than dd to
> set 2 GB on the virtual disk to 0's.
>
> Since the point of the system is to handle massive IO workloads, this is
> obviously not acceptable.
>
> If there is a way to make the NFS mount more robust, I'm all for it over
> the headaches that go with managing block IO.
>
> On Wed, Mar 2, 2022, 8:46 AM Nir Soffer  wrote:
>
>> On Wed, Mar 2, 2022 at 3:01 PM David Johnson <
>> djohn...@maxistechnology.com> wrote:
>>
>>> Good morning folks, and thank you in advance.
>>>
>>> I am working on migrating my oVirt backing store from NFS to iSCSI.
>>>
>>> *oVirt Environment:*
>>>
>>> oVirt Open Virtualization Manager
>>> Software Version:4.4.4.7-1.el8
>>>
>>> *TrueNAS environment:*
>>>
>>> FreeBSD truenas.local 12.2-RELEASE-p11 75566f060d4(HEAD) TRUENAS amd64
>>>
>>>
>>> The iSCSI share is on a TrueNAS server, exposed to user VDSM and group
>>> 36.
>>>
>>> oVirt sees the targeted share, but is unable to make use of it.
>>>
>>> The latest issue is "Error while executing action New SAN Storage
>>> Domain: Volume Group block size error, please check your Volume Group
>>> configuration, Supported block size is 512 bytes."
>>>
>>> As near as I can tell, oVirt does not support any block size other than
>>> 512 bytes, while TrueNAS's smallest OOB block size is 4k.
>>>
>>
>> This is correct, oVirt does not support 4k block storage.
>>
>>
>>>
>>> I know that oVirt on TrueNAS is a common configuration, so I expect I am
>>> missing something really obvious here, probably a TrueNAS configuration
>>> needed to make TrueNAS work with 512 byte blocks.
>>>
>>> Any advice would be helpful.
>>>
>>
>> You can use NFS exported by TrueNAS. With NFS the underlying block size is
>> hidden, since direct I/O on NFS does not perform direct I/O on the server.
>>
>> Another way is to use Managed Block Storage (MBS): if there is a Cinder
>> driver that can manage your storage server, you can use MBS disks with any
>> block size. The block size limit comes from the traditional LVM-based
>> storage domain code. When using MBS, you use one LUN per disk, and qemu
>> does not have any issue working with such LUNs.
>>
>> Check with TrueNAS whether they support emulating a 512-byte block size, or
>> have another way to support clients that do not support 4k storage.
>>
>> Nir
>>


[ovirt-users] Does memory ballooning work if memory overcommit is disabled in the cluster

2022-03-03 Thread sohail_akhter3
Hi All,

We have an oVirt 4.4 environment running. In the cluster's Optimization settings 
we have checked the "None - Disable memory overcommit" option under Memory 
Optimization, but the Memory Balloon checkbox is enabled. My understanding is 
that ballooning only works when memory overcommit is enabled. If that is true, 
then this checkbox should be disabled when we are not overcommitting memory. Or 
does memory ballooning still work even if we disable memory overcommit? 
According to the link below, ballooning works when memory overcommit is enabled.

https://lists.ovirt.org/pipermail/users/2017-October/084675.html

Please let me know if any further information is required.

Many thanks. 

Regards
Sohail


[ovirt-users] GlusterFS poor performance

2022-03-03 Thread francesco--- via Users
Hi all,

I'm running a GlusterFS 8.6 setup with two nodes and one arbiter. Both nodes 
and the arbiter are CentOS 8 Stream with oVirt 4.4. Under gluster I have an LVM 
thin partition.

VMs running in this cluster have really poor write performance, while a test 
performed directly on the disk scores about 300 MB/s:

dd test on host1:

[root@ovirt-host1 tmp]# dd if=/dev/zero of=./foo.dat bs=256M count=1 oflag=dsync
1+0 records in
1+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 0.839861 s, 320 MB/s

dd test on host1 on gluster:

[root@ovirt-host1 tmp]# dd if=/dev/zero 
of=/rhev/data-center/mnt/glusterSD/ovirt-host1:_data/foo.dat bs=256M count=1 
oflag=dsync
1+0 records in
1+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 50.6889 s, 5.3 MB/s

Nonetheless, the write result in a VM inside the cluster is a little bit faster 
(dd results vary from 15 MB/s to 60 MB/s), which is very strange to me:

root@vm1-ha:/tmp# dd if=/dev/zero of=./foo.dat bs=256M count=1 oflag=dsync; rm 
-f ./foo.dat
1+0 records in
1+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 5.58727 s, 48.0 MB/s


Here's the actual gluster configuration; I also applied some parameters in 
/var/lib/glusterd/groups/virt, as mentioned in other related oVirt threads I 
found.


gluster volume info data

Volume Name: data
Type: Replicate
Volume ID: 09b532eb-57de-4c29-862d-93993c990e32
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt-host1:/gluster_bricks/data/data
Brick2: ovirt-host2:/gluster_bricks/data/data
Brick3: ovirt-arbiter:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
server.event-threads: 4
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.server-quorum-type: server
cluster.lookup-optimize: off
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.choose-local: off
client.event-threads: 4
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
performance.low-prio-threads: 32
performance.strict-o-direct: on
network.remote-dio: off
network.ping-timeout: 30
user.cifs: off
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable


The speed between two hosts is about 1Gb/s:

[root@ovirt-host1 ~]# iperf3 -c ovirt-host2 -p 5002
Connecting to host ovirt-host2 port 5002
[  5] local x.x.x.x port 58072 connected to y.y.y.y port 5002
[ ID] Interval   Transfer Bitrate Retr  Cwnd
[  5]   0.00-1.00   sec   112 MBytes   938 Mbits/sec  117    375 KBytes
[  5]   1.00-2.00   sec   112 MBytes   937 Mbits/sec    0    397 KBytes
[  5]   2.00-3.00   sec   110 MBytes   924 Mbits/sec   18    344 KBytes
[  5]   3.00-4.00   sec   112 MBytes   936 Mbits/sec    0    369 KBytes
[  5]   4.00-5.00   sec   111 MBytes   927 Mbits/sec   12    386 KBytes
[  5]   5.00-6.00   sec   112 MBytes   938 Mbits/sec    0    471 KBytes
[  5]   6.00-7.00   sec   108 MBytes   909 Mbits/sec   34    382 KBytes
[  5]   7.00-8.00   sec   112 MBytes   942 Mbits/sec    0    438 KBytes
[  5]   8.00-9.00   sec   111 MBytes   928 Mbits/sec   38    372 KBytes
[  5]   9.00-10.00  sec   111 MBytes   934 Mbits/sec    0    481 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval   Transfer Bitrate Retr
[  5]   0.00-10.00  sec  1.08 GBytes   931 Mbits/sec  219 sender
[  5]   0.00-10.04  sec  1.08 GBytes   926 Mbits/sec  receiver

iperf Done.

Between the nodes and the arbiter it is about 230 Mbits/sec:

[  5] local ovirt-arbiter port 45220 connected to ovirt-host1 port 5002
[ ID] Interval   Transfer Bitrate Retr  Cwnd
[  5]   0.00-1.00   sec  30.6 MBytes   257 Mbits/sec  1177    281 KBytes
[  5]   1.00-2.00   sec  26.2 MBytes   220 Mbits/sec     0    344 KBytes
[  5]   2.00-3.00   sec  28.8 MBytes   241 Mbits/sec    15    288 KBytes
[  5]   3.00-4.00   sec  26.2 MBytes   220 Mbits/sec     0    352 KBytes
[  5]   4.00-5.00   sec  30.0 MBytes   252 Mbits/sec    32    293 KBytes
[  5]   5.00-6.00   sec  26.2 MBytes   220 Mbits/sec     0    354 KBytes
[  5]   6.00-7.00   sec  30.0 MBytes   252 Mbits/sec    32    293 KBytes
[  5]   7.00-8.00   sec  27.5 MBytes   231 Mbits/sec     0    355 KBytes
[  5]   8.00-9.00   sec  28.8 MBytes   241 Mbits/sec    30    294 KBytes
[  5]   9.00-10.00  sec  26.2 MBytes   220 Mbits/sec     3    250 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval   Transfer Bitrate Retr
[  5]   0.00-10.00  sec   281 MBytes   235 Mbits/sec  1289 sender
[  5]   0.00-10.03  sec   277 MBytes   232 Mbits/sec  receiver

iperf Done.
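
For completeness, I can also gather per-brick latency stats with gluster's 
built-in profiler while re-running the dd test (a sketch, using the volume 
name from the config above):

[root@ovirt-host1 ~]# gluster volume profile data start
[root@ovirt-host1 ~]# dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/ovirt-host1:_data/foo.dat bs=256M count=1 oflag=dsync
[root@ovirt-host1 ~]# gluster volume profile data info
[root@ovirt-host1 ~]# gluster volume profile data stop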



I'm definitely missing something obvious, and I'm not a gluster/oVirt black 
belt... Can anyone point me in the right direction?

Thank you for your time.

Regards,
Francesco

[ovirt-users] Re: VM Disk extend not reflected in VM OS

2022-03-03 Thread simon
Thanks All,

Rescans and a reboot didn't resolve this - the disk size increase is still not 
seen by the OS.
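
For reference, the rescan steps I tried inside the guest were along these 
lines (the device name is an example; the sysfs rescan node applies to 
virtio-scsi disks):

[root@vm ~]# echo 1 > /sys/block/sda/device/rescan
[root@vm ~]# partprobe /dev/sda
[root@vm ~]# lsblk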

Any other suggestions?

Regards

Simon...