[ovirt-users] Re: Sometimes paused due to unknown storage error on gluster

2020-03-27 Thread Gianluca Cecchi
On Sat, Mar 28, 2020 at 2:21 AM Gianluca Cecchi wrote:

> Hello,
> having deployed oVirt 4.3.9 single-host HCI with Gluster, I sometimes see a
> VM going into the paused state with the error above, needing a manual resume
> (sometimes this resume operation fails).
> So far it has only happened with an empty (thin-provisioned) disk under
> sudden high I/O during the initial phase of OS installation; it didn't
> happen during normal operation (even at 600 MB/s of throughput).
> I suspect the metadata extension cannot keep pace with the growth of the
> physical disk image, similar to what happens for block-based storage
> domains, where the LVM layer has to extend the logical volume representing
> the virtual disk.
>
> My real-world reproduction of the error is during the install of an OCP
> 4.3.8 master node, when Red Hat CoreOS boots from the network, wipes the
> disk, and then (I think) transfers an image, generating high immediate I/O.
> The VM used as master node was created with a 120 GB thin-provisioned disk
> (virtio-scsi type) and starts with the disk just initialized and empty,
> going through a PXE install.
> I get this line in the events for the VM:
>
> Mar 27, 2020, 12:35:23 AM VM master01 has been paused due to unknown
> storage error.
>
> Here are logs around the time frame above:
>
> - engine.log
>
> https://drive.google.com/file/d/1zpNo5IgFVTAlKXHiAMTL-uvaoXSNMVRO/view?usp=sharing
>
> - vdsm.log
>
> https://drive.google.com/file/d/1v8kR0N6PdHBJ5hYzEYKl4-m7v1Lb_cYX/view?usp=sharing
>
> Any suggestions?
>
> The VM's disk is on the vmstore storage domain, whose Gluster volume
> settings are:
>
> [root@ovirt tmp]# gluster volume info vmstore
>
> Volume Name: vmstore
> Type: Distribute
> Volume ID: a6203d77-3b9d-49f9-94c5-9e30562959c4
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: ovirtst.mydomain.storage:/gluster_bricks/vmstore/vmstore
> Options Reconfigured:
> performance.low-prio-threads: 32
> storage.owner-gid: 36
> performance.read-ahead: off
> user.cifs: off
> storage.owner-uid: 36
> performance.io-cache: off
> performance.quick-read: off
> network.ping-timeout: 30
> features.shard: on
> network.remote-dio: off
> cluster.eager-lock: enable
> performance.strict-o-direct: on
> transport.address-family: inet
> nfs.disable: on
> [root@ovirt tmp]#
>
> Regarding the config above, are there any possible optimizations to be made
> given the single-host setup?
> And how does it compare with the virt group of options:
>
> [root@ovirt tmp]# cat /var/lib/glusterd/groups/virt
> performance.quick-read=off
> performance.read-ahead=off
> performance.io-cache=off
> performance.low-prio-threads=32
> network.remote-dio=enable
> cluster.eager-lock=enable
> cluster.quorum-type=auto
> cluster.server-quorum-type=server
> cluster.data-self-heal-algorithm=full
> cluster.locking-scheme=granular
> cluster.shd-max-threads=8
> cluster.shd-wait-qlength=1
> features.shard=on
> user.cifs=off
> cluster.choose-local=off
> client.event-threads=4
> server.event-threads=4
> performance.client-io-threads=on
> [root@ovirt tmp]#
>
> ?
>
> Thanks Gianluca
>
>
Further information: here is what I see around the time frame in the Gluster
brick log file gluster_bricks-vmstore-vmstore.log (timestamps in the log file
are one hour behind local time):

[2020-03-27 23:30:38.575808] I [MSGID: 101055]
[client_t.c:436:gf_client_unref] 0-vmstore-server: Shutting down connection
CTX_ID:6e8f70b8-1946-4505-860f-be90e5807cb3-GRAPH_ID:0-PID:223418-HOST:ovirt.mydomain.local-PC_NAME:vmstore-client-0-RECON_NO:-0
[2020-03-27 23:35:15.281449] E [MSGID: 113072]
[posix-inode-fd-ops.c:1886:posix_writev] 0-vmstore-posix: write failed:
offset 0, [Invalid argument]
[2020-03-27 23:35:15.281545] E [MSGID: 115067]
[server-rpc-fops_v2.c:1373:server4_writev_cbk] 0-vmstore-server: 34139378:
WRITEV 10 (00d9fe81-8a31-498e-8401-7b9d1477378e), client:
CTX_ID:d04437ba-ef98-43df-864f-5e9d3738620a-GRAPH_ID:0-PID:27687-HOST:ovirt.mydomain.local-PC_NAME:vmstore-client-0-RECON_NO:-0,
error-xlator: vmstore-posix [Invalid argument]
[2020-03-27 23:40:15.415794] E [MSGID: 113072]
[posix-inode-fd-ops.c:1886:posix_writev] 0-vmstore-posix: write failed:
offset 0, [Invalid argument]
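Since the brick log is written in UTC while the engine event timestamps are local, correlating the two means shifting the brick timestamps by the host's offset. A quick sketch with GNU date; the Europe/Rome zone is an assumption inferred from the one-hour offset:

```shell
# Brick logs use UTC; convert the failing write's timestamp to the host's
# assumed local zone (CET on this date) to line it up with the engine events.
TZ=Europe/Rome date -d '2020-03-27 23:35:15 UTC' '+%Y-%m-%d %H:%M:%S'
# -> 2020-03-28 00:35:15
```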

My Gluster component versions:

gluster-ansible-cluster-1.0.0-1.el7.noarch
gluster-ansible-features-1.0.5-3.el7.noarch
gluster-ansible-infra-1.0.4-3.el7.noarch
gluster-ansible-maintenance-1.0.1-1.el7.noarch
gluster-ansible-repositories-1.0.1-1.el7.noarch
gluster-ansible-roles-1.0.5-7.el7.noarch
glusterfs-6.8-1.el7.x86_64
glusterfs-api-6.8-1.el7.x86_64
glusterfs-cli-6.8-1.el7.x86_64
glusterfs-client-xlators-6.8-1.el7.x86_64
glusterfs-events-6.8-1.el7.x86_64
glusterfs-fuse-6.8-1.el7.x86_64
glusterfs-geo-replication-6.8-1.el7.x86_64
glusterfs-libs-6.8-1.el7.x86_64
glusterfs-rdma-6.8-1.el7.x86_64
glusterfs-server-6.8-1.el7.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-23.el7_7.6.x86_64
python2-gluster-6.8-1.el7.x86_64
vdsm-gluster-4.30.43-1.el7.x86_64

And for completeness, the 

[ovirt-users] Sometimes paused due to unknown storage error on gluster

2020-03-27 Thread Gianluca Cecchi
Hello,
having deployed oVirt 4.3.9 single-host HCI with Gluster, I sometimes see a
VM going into the paused state with the error above, needing a manual resume
(sometimes this resume operation fails).
So far it has only happened with an empty (thin-provisioned) disk under
sudden high I/O during the initial phase of OS installation; it didn't
happen during normal operation (even at 600 MB/s of throughput).
I suspect the metadata extension cannot keep pace with the growth of the
physical disk image, similar to what happens for block-based storage
domains, where the LVM layer has to extend the logical volume representing
the virtual disk.
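If allocation racing the write burst really is the cause, one workaround to experiment with (a sketch only, not a verified fix for this issue; the file name and size are just examples) is preallocating the image before the heavy I/O starts:

```shell
# Preallocate only the qcow2 metadata (fast, image stays mostly sparse)...
qemu-img create -f qcow2 -o preallocation=metadata master01.qcow2 120G

# ...or preallocate the data too, avoiding any file growth during install.
qemu-img create -f qcow2 -o preallocation=falloc master01.qcow2 120G
```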

My real-world reproduction of the error is during the install of an OCP
4.3.8 master node, when Red Hat CoreOS boots from the network, wipes the
disk, and then (I think) transfers an image, generating high immediate I/O.
The VM used as master node was created with a 120 GB thin-provisioned disk
(virtio-scsi type) and starts with the disk just initialized and empty,
going through a PXE install.
I get this line in the events for the VM:

Mar 27, 2020, 12:35:23 AM VM master01 has been paused due to unknown
storage error.

Here are logs around the time frame above:

- engine.log
https://drive.google.com/file/d/1zpNo5IgFVTAlKXHiAMTL-uvaoXSNMVRO/view?usp=sharing

- vdsm.log
https://drive.google.com/file/d/1v8kR0N6PdHBJ5hYzEYKl4-m7v1Lb_cYX/view?usp=sharing

Any suggestions?

The VM's disk is on the vmstore storage domain, whose Gluster volume
settings are:

[root@ovirt tmp]# gluster volume info vmstore

Volume Name: vmstore
Type: Distribute
Volume ID: a6203d77-3b9d-49f9-94c5-9e30562959c4
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: ovirtst.mydomain.storage:/gluster_bricks/vmstore/vmstore
Options Reconfigured:
performance.low-prio-threads: 32
storage.owner-gid: 36
performance.read-ahead: off
user.cifs: off
storage.owner-uid: 36
performance.io-cache: off
performance.quick-read: off
network.ping-timeout: 30
features.shard: on
network.remote-dio: off
cluster.eager-lock: enable
performance.strict-o-direct: on
transport.address-family: inet
nfs.disable: on
[root@ovirt tmp]#

Regarding the config above, are there any possible optimizations to be made
given the single-host setup?
And how does it compare with the virt group of options:

[root@ovirt tmp]# cat /var/lib/glusterd/groups/virt
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
cluster.eager-lock=enable
cluster.quorum-type=auto
cluster.server-quorum-type=server
cluster.data-self-heal-algorithm=full
cluster.locking-scheme=granular
cluster.shd-max-threads=8
cluster.shd-wait-qlength=1
features.shard=on
user.cifs=off
cluster.choose-local=off
client.event-threads=4
server.event-threads=4
performance.client-io-threads=on
[root@ovirt tmp]#

?
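For what it's worth, Gluster ships those defaults as a named option group, so a volume can be aligned with them in one step instead of setting each option by hand. A sketch only; worth doing in a maintenance window, and note it would flip e.g. network.remote-dio from the current value:

```shell
# Apply the packaged "virt" profile (/var/lib/glusterd/groups/virt) to vmstore,
# then re-check which options now differ from the listing above.
gluster volume set vmstore group virt
gluster volume info vmstore
```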

Thanks Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OIN4R63I6ITOQS4YKYXG2KPEVLJ6FKD2/


[ovirt-users] Re: Doubts related to single HCI and storage network

2020-03-27 Thread Gianluca Cecchi
On Fri, Mar 27, 2020 at 6:49 PM Strahil Nikolov wrote:

>
>
> Hey Gianluca,
>
> If there is an option to define the Gluster network during deployment - it
> should work like that.
>
> Can you go to the UI -> your cluster -> and check the network is there.
> Maybe it was created but not marked as storage network.
>
> Best Regards,
> Strahil Nikolov
>

Hi Strahil,
there is no option to define a "Gluster network" logical network during HCI
deployment; there is only an indirect way, in that you specify the
IP/hostname on the network to be used for Gluster.

Following "Deploying Red Hat Hyperconverged Infrastructure for
Virtualization on a single node" doc:
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.7/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization_on_a_single_node/index

At the beginning, when you install the host from the Node ISO, you specify
its hostname/IP
--> in my case ovirt.mydomain

Then, as described here, in the first window you choose the "Start" button
on the right for "Hyperconverged":
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.7/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization_on_a_single_node/task-config-single-node#task-config-gluster-single-node

Then you select the button on the right, "Run Gluster Wizard For Single
Node", and, as the manual says, in the window titled "Gluster Deployment":
Specify the back-end FQDN on the storage network of the hyperconverged host
--> in my case I used ovirtst.mydomain.storage

You go through the wizard, and oVirt does indeed use the network you
specified above for Gluster, but in
Compute -> Clusters -> Default -> Logical Networks -> Manage Networks
no logical network has been created and assigned under the column named
"Gluster Network".

This is what I mean, and the reason I think I get the WARNING in engine.log
in my first post: I would have expected setup to create a logical network
(e.g. "Gluster") and automatically assign it the "Gluster Network" role,
similar to what is already done automatically for the ovirtmgmt logical
network assignment.

Anyway, also following what is described here:

I performed these manual steps without any disruption to my running VMs:

1) Compute -> Clusters -> Default -> Logical Networks -> Add Network
Name: Gluster
VM network --> unchecked
MTU --> Custom 9000 (as this is the MTU already used and in place on the
host)
In the left-side pane, select "Cluster" and uncheck "Required", because the
network is not yet assigned to the host, and to avoid disruption

2) Compute -> Clusters -> Default -> Logical Networks -> Manage Networks
Select the "Gluster" logical network as "Gluster Network" type

3) Hosts --> My host --> Network Interfaces --> Setup host networks
Drag and drop the "Gluster" network at right on to the network interface at
left, already configured with the static ip related to storage network used
during setup
OK button

4) Finally, mark the "Gluster" logical network as required for the cluster
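For the record, steps 1 and 2 can also be scripted against the REST API instead of the UI. This is only a sketch: the engine hostname and password are placeholders, and mapping the "Gluster Network" role to `<usages><usage>gluster</usage></usages>` is my reading of the API, so verify it against the API reference first:

```shell
# Create a non-VM "Gluster" logical network with MTU 9000 in the Default data
# center and request the gluster usage for it; it still has to be attached to
# the host NIC afterwards (step 3).
curl -k -u 'admin@internal:password' -H 'Content-Type: application/xml' \
  -d '<network>
        <name>Gluster</name>
        <mtu>9000</mtu>
        <usages><usage>gluster</usage></usages>
        <data_center><name>Default</name></data_center>
      </network>' \
  'https://engine.example.com/ovirt-engine/api/networks'
```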

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QHS54HAP5HZPPO3XNN2SVCXEHVGJNINE/


[ovirt-users] Re: bare-metal to self-hosted engine

2020-03-27 Thread Staniforth, Paul
Hello Kim,
   as the documentation says, a self-hosted engine (SHE) setup is more
complex to maintain; see the following about upgrades, configuration
changes, and backups.
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/administration_guide/chap-administering_the_self-hosted_engine

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/administration_guide/chap-backups_and_migration#Backing_up_and_Restoring_a_Self-hosted_Engine

Also, there doesn't seem to be any documentation on moving back from a SHE
to a standalone engine.

Regards,
Paul S.

From: Strahil Nikolov 
Sent: 26 March 2020 15:14
To: users@ovirt.org ; kim.karga...@noroff.no 

Subject: [ovirt-users] Re: bare-metal to self-hosted engine


On March 26, 2020 12:24:21 PM GMT+02:00, kim.karga...@noroff.no wrote:
>Hi,
>
>We currently have an ovirt engine running on a server. The server has
>CentOS installed and the ovirt-engine installed, but is not a node that
>hosts VM's. I would like to move the ovirt-engine to a self-hosted
>engine and it seems like this articles is the one to follow:
>https://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment.html
>Am I correct that I can migrate from a bare-metal CentOS server engine
>to a self-hosted VM of the engine and is the documentation above the
>only documentation I will need to complete this process?
>
>Kind regards
>
>Kim
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement:
>https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/TWP27VKE6WCP4S3QSPINOESGOZZ6HPJV/

Hi Kim,

The link is
https://www.ovirt.org/documentation/migrating_from_a_standalone_manager_to_a_self-hosted_engine/
and it seems OK.
Also, you can take a look at
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/migrating_from_a_standalone_manager_to_a_self-hosted_engine/index

I would recommend that you:
A) create a backup of the engine
B) create a test setup (for example, in VMs) to prepare yourself for the
process.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 

[ovirt-users] Re: Shutdown procedure for single host HCI Gluster

2020-03-27 Thread Nir Soffer
On Wed, Mar 25, 2020 at 2:49 AM Gianluca Cecchi wrote:
>
> On Wed, Mar 25, 2020 at 1:16 AM Nir Soffer  wrote:
>>
>>
>>
>> OK, found it - this issue is
>> https://bugzilla.redhat.com/1609029
>>
>> Simone provided this to solve the issue:
>> https://github.com/oVirt/ovirt-ansible-shutdown-env/blob/master/README.md
>>
>> Nir
>>
>
> Ok, I will try the role provided by Simone and Sandro with my 4.3.9 single 
> HCI host and report.

Looking at the bug comments, I'm not sure this ansible script addresses the
issues you reported. Please file a bug if you still see these issues when
using the script.

We may need to solve this in vdsm-tool, adding an easy way to stop the SPM
and disconnect from storage cleanly. When we have such a way, the ansible
script can use it.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GTRRP3PHE2CQU2R7BJHWHIY64SMQT76E/


[ovirt-users] Re: oVirt 4.4.0 Beta release is now available for testing

2020-03-27 Thread Nir Soffer
On Fri, Mar 27, 2020 at 5:52 PM Sandro Bonazzola wrote:

> oVirt 4.4.0 Beta release is now available for testing
>
> The oVirt Project is excited to announce the availability of the beta
> release of oVirt 4.4.0 for testing, as of March 27th, 2020
>
> This release unleashes an altogether more powerful and flexible open
> source virtualization solution that encompasses hundreds of individual
> changes and a wide range of enhancements across the engine, storage,
> network, user interface, and analytics on top of oVirt 4.3.
>
> Important notes before you try it
>
> Please note this is a Beta release.
>
> The oVirt Project makes no guarantees as to its suitability or usefulness.
>
> This pre-release must not be used in production.
>
> In particular, please note that upgrades from 4.3, and future upgrades
> from this beta to the final 4.4 release, are not supported.
>
> Some of the features included in oVirt 4.4.0 Beta require content that
> will be available in CentOS Linux 8.2 and is currently included in Red Hat
> Enterprise Linux 8.2 beta. For a better experience, you can test oVirt
> 4.4.0 Beta on Red Hat Enterprise Linux 8.2 beta.
>
> Known Issues
>
> - ovirt-imageio development is still in progress. In this beta you can’t
>   upload images to data domains. You can still copy ISO images into the
>   deprecated ISO domain for installing VMs.
>
Correction: upload and download to/from data domains are fully functional
via the REST API and SDK.

For upload and download via the SDK, please see:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_disk.py
Both scripts are standalone command-line tools; try --help for more info.
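A sketch of what an invocation might look like; the exact option names may differ between SDK versions, so treat all of these flags as assumptions and check --help in your checkout first:

```shell
python3 upload_disk.py \
    --engine-url https://engine.example.com \
    --username admin@internal \
    --password-file /root/engine-password \
    --cafile ca.pem \
    --sd-name mydata \
    --disk-format qcow2 \
    fedora-31.qcow2
```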

Upload/download from the UI (via a browser) is not supported yet, since the
engine is not yet completely ported to Python 3.

> Installation instructions
>
> For the engine: either use appliance or:
>
> - Install CentOS Linux 8 minimal from
> http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86_64-dvd1.iso
>
> - dnf install
> https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
>
> - dnf update (reboot if needed)
>
> - dnf module enable -y javapackages-tools pki-deps 389-ds
>
> - dnf install ovirt-engine
>
> - engine-setup
>
> For the nodes:
>
> Either use oVirt Node ISO or:
>
> - Install CentOS Linux 8 from
> http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86_64-dvd1.iso
> ; select minimal installation
>
> - dnf install
> https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
>
> - dnf update (reboot if needed)
>
> - Attach the host to engine and let it be deployed.
>
> What’s new in oVirt 4.4.0 Beta?
>
> - Hypervisors based on CentOS Linux 8 (rebuilt from award-winning RHEL 8),
>   for both oVirt Node and standalone CentOS Linux hosts
> - Easier network management and configuration flexibility with
>   NetworkManager
> - VMs based on a more modern Q35 chipset, with legacy SeaBIOS and UEFI
>   firmware
> - Support for direct passthrough of local host disks to VMs
> - Live migration improvements for High Performance guests
> - New Windows Guest Tools installer based on the WiX framework, now moved
>   to the VirtioWin project
> - Dropped support for cluster levels prior to 4.2
> - Dropped SDK3 support
> - 4K disks support
>
Correction: 4K is supported only for file-based storage; iSCSI/FC storage
does not support 4K disks yet.


>
> - Exporting a VM to a data domain
> - Editing of floating disks
> - Integrating ansible-runner into the engine, which allows more detailed
>   monitoring of playbooks executed from the engine
> - Adding/reinstalling hosts is now completely based on Ansible
> - The OpenStack Neutron Agent can no longer be configured by oVirt; it
>   should be configured by TripleO instead
>
> This release is available now on x86_64 architecture for:
>
> * Red Hat Enterprise Linux 8.1 or newer
>
> * CentOS Linux (or similar) 8.1 or newer
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
> for:
>
> * Red Hat Enterprise Linux 8.1 or newer
>
> * CentOS Linux (or similar) 8.1 or newer
>
> * oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)
>
> See the release notes [1] for installation instructions and a list of new
> features and bugs fixed.
>
> If you manage more than one oVirt instance, OKD, or RDO, we also recommend
> trying ManageIQ.
>
> In such a case, please be sure to take the qc2 image and not the ova
> image.
>
> Notes:
>
> - oVirt Appliance is already available for CentOS Linux 8
>
> - oVirt Node NG is already available for CentOS Linux 8
>
> Additional Resources:
>
> * Read more about the oVirt 4.4.0 release highlights:
> http://www.ovirt.org/release/4.4.0/
>

[ovirt-users] Re: vm console problem

2020-03-27 Thread Strahil Nikolov
On March 27, 2020 12:23:10 PM GMT+02:00, David David  wrote:
>here is debug from opening console.vv by remote-viewer
>
>2020-03-27 14:09 GMT+04:00, Milan Zamazal :
>> David David  writes:
>>
>>> yes i have
>>> console.vv attached
>>
>> It looks the same as mine.
>>
>> There is a difference in our logs, you have
>>
>>   Possible auth 19
>>
>> while I have
>>
>>   Possible auth 2
>>
>> So I still suspect a wrong authentication method is used, but I don't
>> have any idea why.
>>
>> Regards,
>> Milan
>>
>>> 2020-03-26 21:38 GMT+04:00, Milan Zamazal :
 David David  writes:

> copied from qemu server all certs except "cacrl" to my
>desktop-station
> into /etc/pki/

This is not needed; the CA certificate is included in console.vv and no
other certificate should be needed.

> but remote-viewer is still didn't work

 The log looks like remote-viewer is attempting certificate
 authentication rather than password authentication.  Do you have
 password in console.vv?  It should look like:

   [virt-viewer]
   type=vnc
   host=192.168.122.2
   port=5900
   password=fxLazJu6BUmL
   # Password is valid for 120 seconds.
   ...

 Regards,
 Milan

> 2020-03-26 2:22 GMT+04:00, Nir Soffer :
>> On Wed, Mar 25, 2020 at 12:45 PM David David 
>> wrote:
>>>
>>> ovirt 4.3.8.2-1.el7
>>> gtk-vnc2-1.0.0-1.fc31.x86_64
>>> remote-viewer version 8.0-3.fc31
>>>
>>> can't open vm console by remote-viewer
>>> vm has vnc console protocol
>>> when clicking the console button to connect to a VM, the remote-viewer
>>> console disappears immediately
>>>
>>> remote-viewer debug in attachment
>>
>> You have an issue with the certificates:
>>
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.238:
>> ../src/vncconnection.c Set credential 2 libvirt
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c Searching for certs in /etc/pki
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c Searching for certs in /root/.pki
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c Failed to find certificate CA/cacert.pem
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c No CA certificate provided, using GNUTLS
>global
>> trust
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c Failed to find certificate CA/cacrl.pem
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c Failed to find certificate
>> libvirt/private/clientkey.pem
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c Failed to find certificate
>> libvirt/clientcert.pem
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c Waiting for missing credentials
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c Got all credentials
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c No CA certificate provided; trying the
>system
>> trust store instead
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
>> ../src/vncconnection.c Using the system trust store and CRL
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
>> ../src/vncconnection.c No client cert or key provided
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
>> ../src/vncconnection.c No CA revocation list provided
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.241:
>> ../src/vncconnection.c Handshake was blocking
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.243:
>> ../src/vncconnection.c Handshake was blocking
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.251:
>> ../src/vncconnection.c Handshake was blocking
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.298:
>> ../src/vncconnection.c Handshake done
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.298:
>> ../src/vncconnection.c Validating
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.301:
>> ../src/vncconnection.c Error: The certificate is not trusted
>>
>> Adding people that may know more about this.
>>
>> Nir
>>
>>


>>
>>

Hello,

You can try taking the engine's CA (though it may not help) and putting it
on your system in:
/etc/pki/ca-trust/source/anchors (if it's EL7 or Fedora) and then running
update-ca-trust
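A sketch of those steps on EL7/Fedora; the engine hostname is a placeholder, and the pki-resource URL is the usual engine endpoint for fetching the CA certificate (verify it against your engine before relying on it):

```shell
# Fetch the engine CA and install it into the system-wide trust store.
curl -k 'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' \
  -o /etc/pki/ca-trust/source/anchors/ovirt-engine-ca.pem
update-ca-trust
```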

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G4EHEN7DOTYND5BQDVFEGW3VK4PFNEOU/


[ovirt-users] Re: Doubts related to single HCI and storage network

2020-03-27 Thread Strahil Nikolov
On March 27, 2020 6:23:19 PM GMT+02:00, Gianluca Cecchi 
 wrote:
>On Fri, Mar 27, 2020 at 3:41 PM  wrote:
>
>> You need to have a separate gluster network as per documentation.
>>
>>
>>
>> Eric Evans
>>
>>
>>
>
>Yes Eric, indeed.
>And as you can see from my post, the storage network was specified during
>setup.
>And indeed it is the one (even if with only one host) used by Gluster.
>But apparently oVirt didn't set up a storage network for it as I expected.
>So the 3 questions are:
>
>1) is this behavior expected?
>2) if so, wouldn't it be useful to integrate storage network configuration
>into the setup phase when specified by the user?
>3) what is the expected flow now to configure the storage network? Is
>downtime expected to do so?
>
>Thanks
>Gianluca

Hey Gianluca,

If there is an option to define the Gluster network during deployment - it 
should work like that.

Can you go to the UI -> your cluster -> and check the network is there.
Maybe it was created but not marked as storage network.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/US5D2IM3JI7UP63IPBST3ZFH64MKTVKF/


[ovirt-users] Re: Doubts related to single HCI and storage network

2020-03-27 Thread eevans
This is another good article but not official documentation.

https://blogs.ovirt.org/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/

 

 

Eric Evans

Digital Data Services LLC.

304.660.9080



 

From: Gianluca Cecchi  
Sent: Friday, March 27, 2020 12:23 PM
To: eev...@digitaldatatechs.com
Cc: users 
Subject: [ovirt-users] Re: Doubts related to single HCI and storage network

 

On Fri, Mar 27, 2020 at 3:41 PM <eev...@digitaldatatechs.com> wrote:

You need to have a separate gluster network as per documentation.

 

Eric Evans

 

 

Yes Eric, indeed.

And as you can see from my post, the storage network was specified during
setup.

And indeed it is the one (even if only with one host) used by Gluster.

But apparently oVirt didn't set up a storage network for it as I expected.

So the 3 questions are:

 

1) is this behavior expected?

2) if so, wouldn't it be useful to integrate storage network configuration
into the setup phase when specified by the user?

3) what is the expected flow now to configure the storage network? Is
downtime expected to do so?

 

Thanks

Gianluca

 

 

 

 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VQLL53UNA5JVI4WNZNC273ENAE3VDN6J/


[ovirt-users] Re: Doubts related to single HCI and storage network

2020-03-27 Thread eevans
This is older documentation but I believe it still applies.

 

https://www.ovirt.org/develop/release-management/features/gluster/select-network-for-gluster.html

 

I would check with the Red Hat folks to make sure, but my understanding is
that GlusterFS has its own network. I would assume it would have to be
defined beforehand for it to work properly.

 

I hope this helps.

 

Eric Evans

Digital Data Services LLC.

304.660.9080



 

From: Gianluca Cecchi  
Sent: Friday, March 27, 2020 12:23 PM
To: eev...@digitaldatatechs.com
Cc: users 
Subject: [ovirt-users] Re: Doubts related to single HCI and storage network

 

On Fri, Mar 27, 2020 at 3:41 PM <eev...@digitaldatatechs.com> wrote:

You need to have a separate gluster network as per documentation.

 

Eric Evans

 

 

Yes Eric, indeed.

And as you can see from my post, the storage network was specified during
setup.

And indeed it is the one (even if only with one host) used by Gluster.

But apparently oVirt didn't set up a storage network for it as I expected.

So the 3 questions are:

 

1) is this behavior expected?

2) if so, wouldn't it be useful to integrate storage network configuration
into the setup phase when specified by the user?

3) what is the expected flow now to configure the storage network? Is
downtime expected to do so?

 

Thanks

Gianluca

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VJFIRPTLQ3HRLNQASUMXJLB6FO6LPNK3/


[ovirt-users] Re: Speed Issues

2020-03-27 Thread Alex McWhirter

On 2020-03-27 05:28, Christian Reiss wrote:

Hey Alex,

you too, thanks for writing.
I'm on 64MB as per the oVirt default. We tried no sharding, 128MB
sharding, and 64MB sharding (always with copying the disk). There was no
increase or decrease in disk speed either way.

Besides losing HA capabilities, what other caveats?

-Chris.

On 24/03/2020 19:25, Alex McWhirter wrote:
Red hat also recommends a shard size of 512mb, it's actually the only 
shard size they support. Also check the chunk size on the LVM thin 
pools running the bricks, should be at least 2mb. Note that changing 
the shard size only applies to new VM disks after the change. Changing 
the chunk size requires making a new brick.


libgfapi brings a huge performance boost, in my opinion its almost a 
necessity unless you have a ton of extra disk speed / network 
throughput. Just be aware of the caveats.


--
 Christian Reiss - em...@christian-reiss.de /"\  ASCII Ribbon
   supp...@alpha-labs.net   \ /Campaign
 X   against HTML
 WEB alpha-labs.net / \   in eMails

 GPG Retrieval https://gpg.christian-reiss.de
 GPG ID ABCD43C5, 0x44E29126ABCD43C5
 GPG fingerprint = 9549 F537 2596 86BA 733C  A4ED 44E2 9126 ABCD 43C5

 "It's better to reign in hell than to serve in heaven.",
  John Milton, Paradise lost.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KL6HLEIRQ6GCNP5YK7TY4UY52JOLFIC3/


You don't lose HA, you just lose live migration between separate 
data centers or between gluster volumes. Live migration between nodes in 
the same DC / gluster volume still works fine. Some people have snapshot 
issues, I don't, but plan for problems just in case.


A shard size of 512MB will only affect new VMs, or new VM disks to be exact. 
The LVM chunk size defaults to 2MB on CentOS 7.6+, but it should be a 
multiple of your RAID stripe size. Stripe size should be fairly large; 
we use 512KB stripe sizes on the bricks and 2MB chunk sizes on LVM.
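As a sanity check, the stripe/chunk alignment described above can be verified with a little arithmetic. The numbers below mirror the example figures in this thread (512KB stripe unit, 2MB chunk); the disk count is an illustrative assumption, not a recommendation:

```shell
#!/bin/sh
# Full-stripe width = per-disk stripe unit x number of data-bearing disks.
# The thin-pool chunk size should be a multiple of the stripe unit
# (ideally of the full stripe) so writes stay aligned with the RAID layout.
stripe_unit_kb=512   # per-disk stripe unit, as used on the bricks above
data_disks=4         # example: data-bearing disks in the RAID set
chunk_kb=2048        # 2MB LVM thin-pool chunk size

full_stripe_kb=$((stripe_unit_kb * data_disks))
echo "full stripe: ${full_stripe_kb}KB"   # prints: full stripe: 2048KB

if [ $((chunk_kb % stripe_unit_kb)) -eq 0 ]; then
    echo "chunk size is a multiple of the stripe unit"
fi
```

Swap in your own controller's stripe unit and disk count before drawing conclusions.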


With that and about 90 disks we can saturate 10GbE; then we added 
some SSD cache drives to LVM on the bricks, which helped a lot with 
random I/O.
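For reference, the shard and chunk settings discussed above can be inspected and changed with commands along these lines. This is a sketch: the volume name and VG/thin-pool names are placeholders, and a shard-size change only affects disks created afterwards, as noted above:

```shell
# Set a 512MB shard size on a Gluster volume ("vmstore" is a placeholder
# volume name); only applies to VM disks created after the change.
gluster volume set vmstore features.shard-block-size 512MB
gluster volume get vmstore features.shard-block-size

# Inspect the chunk size of the thin pool backing a brick
# ("gluster_vg/gluster_thinpool" is a placeholder); per the advice
# above it should be at least 2MB. Changing it means rebuilding the brick.
lvs -o lv_name,chunk_size gluster_vg/gluster_thinpool
```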

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BNTWPEG5EZDAE22XOF5XHXHDJB3J65AP/


[ovirt-users] Re: Speed Issues

2020-03-27 Thread Strahil Nikolov
On March 27, 2020 2:49:13 PM GMT+02:00, Jorick Astrego  
wrote:
>
>On 3/24/20 7:25 PM, Alex McWhirter wrote:
>> Red hat also recommends a shard size of 512mb, it's actually the only
>> shard size they support. Also check the chunk size on the LVM thin
>> pools running the bricks, should be at least 2mb. Note that changing
>> the shard size only applies to new VM disks after the change.
>Changing
>> the chunk size requires making a new brick.
>>
>Regarding the chunk size, red hat tells me it depends on RAID or JBOD
>
>https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/Brick_Configuration
>
>chunksize
>An important parameter to be specified while creating a thin
>   pool is the chunk size,which is the unit of allocation. For good
>   performance, the chunk size for the thin pool and the parameters
>   of the underlying hardware RAID storage should be chosen so that
>they work well together.
>
>And regarding the shard size, you can fix that with storage live
>migration right? Use two volumes and domains and move them so they will
>adopt the new shard size...
>
>Am I correct that when you change the sharding on a running volume, it
>only applies for new disks? Or does it also apply to extensions to a
>current disk?
>
>
>
>
>
>
>
>Met vriendelijke groet, With kind regards,
>
>Jorick Astrego
>
>Netbulae Virtualization Experts 
>
>
>
>Tel: 053 20 30 270  i...@netbulae.eu  Staalsteden 4-3A  KvK 08198180
>Fax: 053 20 30 271  www.netbulae.eu  7547 TA Enschede  BTW NL821234584B01
>
>

A shard size change is valid only for new images, but this can be fixed either via 
storage migration between volumes or via creating a new disk and migrating within 
the OS (if possible).

Still, MTU is important and you can use
'ping -s <size> -c 1 -M do <destination>' to test.

Keep in mind that VLANs also take some data in the packet (I think around 8 
bytes). Today I have set MTU 9100 on some servers in order to guarantee that 
the app will be able to transfer 9000 bytes of data, but this depends on the 
switches between the nodes and the NICs of the servers.

You can use  tracepath to detect if there  is a switch that doesn't support 
Jumbo Frames.
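The MTU arithmetic behind that ping test works out as follows (a sketch; the storage hostname is a placeholder, and the 4-byte 802.1Q tag figure is the usual single-tag case):

```shell
#!/bin/sh
# Largest ICMP payload that fits an MTU without fragmentation:
# 20 bytes IPv4 header + 8 bytes ICMP header = 28 bytes of overhead.
# An 802.1Q VLAN tag adds 4 more bytes on the wire (8 for QinQ), which
# is why setting the NIC MTU slightly above 9000 (e.g. 9100) leaves headroom.
mtu=9000
payload=$((mtu - 28))
echo "probe payload for MTU ${mtu}: ${payload} bytes"   # prints: ... 8972 bytes

# Probe the path with fragmentation forbidden (-M do); placeholder host:
#   ping -c 1 -M do -s "$payload" ovirtst.mydomain.storage
# tracepath will also reveal any hop advertising a smaller MTU:
#   tracepath ovirtst.mydomain.storage
```

If the ping fails with "message too long", some switch or NIC on the path is not passing jumbo frames.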

Actually setting up ctdb with NFS Ganesha is quite easy. You will be able to 
keep all the 'goodies' from oVirt (snapshots, live migration, etc.) while 
getting higher performance via NFS Ganesha, which acts like a gateway for the 
clients (while itself accessing all servers simultaneously), so it is better 
situated outside the Gluster servers.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V2XENQHC5XZUDXV3OKNGOHSD4CDQVPFU/


[ovirt-users] Re: Speed Issues

2020-03-27 Thread Strahil Nikolov
On March 27, 2020 11:26:25 AM GMT+02:00, Christian Reiss 
 wrote:
>Hey Jayme,
>
>thanks for replying; sorry for the delay.
>If I am understanding this right, there is no real official way to 
>enable libgfapi. If you somehow manage to get it running then you will 
>lose HA capabilities, which is something we like on our production
>servers.
>
>The most recent post I could find on the matter 
>(https://www.mail-archive.com/users@ovirt.org/msg59664.html) reads like 
>it's worth a try for hobbyists, but for production servers I am a 
>little bit scared.
>
>Do you maybe have any document or other source that does work with
>4.3.x 
>versions and inspires confidence? :-)
>
>-Chris
>
>On 24/03/2020 19:49, Jayme wrote:
>> I strongly believe that FUSE mount is the real reason for poor 
>> performance in HCI and these minor gluster and other tweaks won't 
>> satisfy most seeking i/o performance. Enabling libgfapi is probably
>the 
>> best option. Redhat has recently closed bug reports related to
>libgfapi 
>> citing won't fix and one comment suggests that libgfapi was not
>showing 
>> good enough performance to bother with which appears to contradict
>what 
>> many oVirt users are seeing. It's confusing to me why libgfapi as a 
>> default option is not being given any priority.
>> 
>> https://bugzilla.redhat.com/show_bug.cgi?id=1465810
>> 
>> "We do not plan to enable libgfapi for oVirt/RHV. We did not find
>enough 
>> performance improvement justification for it"

Hey All,
Direct libvirt access via libgfapi causes loss of some features, but it is 
not the only option.
You can always use NFS Ganesha, which uses libgfapi to reach the gluster 
servers, while providing access via NFS.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GJQ5Y4ZT4WJRJIMYY7G3WNOURFBX2QGB/


[ovirt-users] Re: Doubts related to single HCI and storage network

2020-03-27 Thread Gianluca Cecchi
On Fri, Mar 27, 2020 at 3:41 PM  wrote:

> You need to have a separate gluster network as per documentation.
>
>
>
> Eric Evans
>
>
>

Yes Eric, indeed.
And as you see from my post, the storage network has been specified during
setup.
And indeed it is the one (even if only with one host) used by Gluster.
But apparently oVirt didn't set up a storage network for it as I expected.
So the 3 questions are:

1) is this behavior expected?
2) if so, do you think it would be useful to integrate storage network
configuration into the setup phase when specified by the user?
3) what is the expected flow now to configure the storage network? Is
downtime expected to do so?

Thanks
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5RU7KNBTHQNK3XNYQLUXIQWMIBNZ4ZIL/


[ovirt-users] oVirt 4.4.0 Beta release is now available for testing

2020-03-27 Thread Sandro Bonazzola
oVirt 4.4.0 Beta release is now available for testing

The oVirt Project is excited to announce the availability of the beta
release of oVirt 4.4.0 for testing, as of March 27th, 2020

This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics on top of oVirt 4.3.

Important notes before you try it

Please note this is a Beta release.

The oVirt Project makes no guarantees as to its suitability or usefulness.

This pre-release must not be used in production.

In particular, please note that upgrades from 4.3, as well as upgrades from
this beta to the final 4.4 release, are not supported.

Some of the features included in oVirt 4.4.0 Beta require content that will
be available in CentOS Linux 8.2, which is currently included in Red Hat
Enterprise Linux 8.2 beta. If you want to have a better experience you can
test oVirt 4.4.0 Beta on Red Hat Enterprise Linux 8.2 beta.

Known Issues

   -

   ovirt-imageio development is still in progress. In this beta you can’t
   upload images to data domains. You can still copy iso images into the
   deprecated ISO domain for installing VMs.

Installation instructions

For the engine: either use appliance or:

- Install CentOS Linux 8 minimal from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86_64-dvd1.iso

- dnf install
https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm

- dnf update (reboot if needed)

- dnf module enable -y javapackages-tools pki-deps 389-ds

- dnf install ovirt-engine

- engine-setup

For the nodes:

Either use oVirt Node ISO or:

- Install CentOS Linux 8 from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86_64-dvd1.iso
; select minimal installation

- dnf install
https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm

- dnf update (reboot if needed)

- Attach the host to engine and let it be deployed.

What’s new in oVirt 4.4.0 Beta?

   -

   Hypervisors based on CentOS Linux 8 (rebuilt from award winning RHEL8),
   for both oVirt Node and standalone CentOS Linux hosts
   -

   Easier network management and configuration flexibility with
   NetworkManager
   -

   VMs based on a more modern Q35 chipset with legacy seabios and UEFI
   firmware
   -

   Support for direct passthrough of local host disks to VMs
   -

   Live migration improvements for High Performance guests.
   -

   New Windows Guest tools installer based on WiX framework now moved to
   VirtioWin project
   -

   Dropped support for cluster level prior to 4.2
   -

   Dropped SDK3 support
   -

   4K disks support
   -

   Exporting a VM to a data domain
   -

   Editing of floating disks
   -

   Integrating ansible-runner into engine, which allows a more detailed
   monitoring of playbooks executed from engine
   -

   Adding/reinstalling hosts are now completely based on Ansible
   -

   The OpenStack Neutron Agent cannot be configured by oVirt anymore, it
   should be configured by TripleO instead


This release is available now on x86_64 architecture for:

* Red Hat Enterprise Linux 8.1 or newer

* CentOS Linux (or similar) 8.1 or newer

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:

* Red Hat Enterprise Linux 8.1 or newer

* CentOS Linux (or similar) 8.1 or newer

* oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)

See the release notes [1] for installation instructions and a list of new
features and bugs fixed.

If you manage more than one oVirt instance, OKD or RDO, we also recommend
trying ManageIQ.

In such a case, please be sure to take the qc2 image and not the ova image.

Notes:

- oVirt Appliance is already available for CentOS Linux 8

- oVirt Node NG is already available for CentOS Linux 8

Additional Resources:

* Read more about the oVirt 4.4.0 release highlights:
http://www.ovirt.org/release/4.4.0/

* Get more oVirt project updates on Twitter: https://twitter.com/ovirt

* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/


[1] http://www.ovirt.org/release/4.4.0/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: Doubts related to single HCI and storage network

2020-03-27 Thread eevans
You need to have a separate gluster network as per documentation.

 

Eric Evans

Digital Data Services LLC.

304.660.9080



 

From: Gianluca Cecchi  
Sent: Friday, March 27, 2020 10:06 AM
To: users 
Subject: [ovirt-users] Doubts related to single HCI and storage network

 

Hello,

I deployed HCI 4.3.9 with gluster and a single node from the cockpit-based 
interface.

During install I specified the storage network, using

 

1) For the mgmt network and hostname of the hypervisor

172.16.0.30 ovirt.mydomain

 

2) for the storage network (even if not used in single host... but in case of 
future addition..)

10.50.50.11 ovirtst.mydomain.storage

 

all went well, and the system runs quite ok: I was able to deploy an OCP 4.3.8 
cluster with 3 workers and 3 masters... apart from erratic "vm paused" 
messages for which I'm going to send a dedicated mail...

 

I see in engine.log warning messages of this kind:

 

2020-03-27 00:32:08,655+01 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler2) [15cbd52e] Could not associate brick 
'ovirtst.mydomain.storage:/gluster_bricks/engine/engine' of volume 
'40ad3b5b-4cc1-495a-815b-3c7e3436b15b' with correct network as no gluster 
network found in cluster '9cecfa02-6c6c-11ea-8a94-00163e0acd5c'

 

I would have expected the setup to create a gluster network, as it was part of 
the initial configuration. Could this be a subject for an RFE?

What can I do to fix this warning?

Thanks,

Gianluca

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A4FZWICZYCAXSTUZSSMFEGMUG6A5K6YM/


[ovirt-users] Doubts related to single HCI and storage network

2020-03-27 Thread Gianluca Cecchi
Hello,
I deployed HCI 4.3.9 with gluster and a single node from the cockpit-based
interface.
During install I specified the storage network, using

1) For the mgmt network and hostname of the hypervisor
172.16.0.30 ovirt.mydomain

2) for the storage network (even if not used in single host... but in case
of future addition..)
10.50.50.11 ovirtst.mydomain.storage

all went well, and the system runs quite ok: I was able to deploy an OCP 4.3.8
cluster with 3 workers and 3 masters... apart from erratic "vm paused"
messages for which I'm going to send a dedicated mail...

I see in engine.log warning messages of this kind:

2020-03-27 00:32:08,655+01 WARN
 [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [15cbd52e] Could not associate brick
'ovirtst.mydomain.storage:/gluster_bricks/engine/engine' of volume
'40ad3b5b-4cc1-495a-815b-3c7e3436b15b' with correct network as no gluster
network found in cluster '9cecfa02-6c6c-11ea-8a94-00163e0acd5c'

I would have expected the setup to create a gluster network, as it was part
of the initial configuration. Could this be a subject for an RFE?
What can I do to fix this warning?
Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K7UEGWDCMYWQ4XJYSIJFP47PGZ744GQP/


[ovirt-users] Re: Speed Issues

2020-03-27 Thread Jorick Astrego

On 3/24/20 7:25 PM, Alex McWhirter wrote:
> Red hat also recommends a shard size of 512mb, it's actually the only
> shard size they support. Also check the chunk size on the LVM thin
> pools running the bricks, should be at least 2mb. Note that changing
> the shard size only applies to new VM disks after the change. Changing
> the chunk size requires making a new brick.
>
Regarding the chunk size, red hat tells me it depends on RAID or JBOD

https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/Brick_Configuration

chunksize
An important parameter to be specified while creating a thin
pool is the chunk size,which is the unit of allocation. For good
performance, the chunk size for the thin pool and the parameters
of the underlying hardware RAID storage should be chosen so that
they work well together.

And regarding the shard size, you can fix that with storage live
migration right? Use two volumes and domains and move them so they will
adopt the new shard size...

Am I correct that when you change the sharding on a running volume, it
only applies for new disks? Or does it also apply to extensions to a
current disk?







Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts 



Tel: 053 20 30 270  i...@netbulae.euStaalsteden 4-3A
KvK 08198180
Fax: 053 20 30 271  www.netbulae.eu 7547 TA Enschede
BTW NL821234584B01



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UWTZEA54MHSX2AY233DYZZA2KICRPUAM/


[ovirt-users] Re: Speed Issues

2020-03-27 Thread Jayme
Christian,

I've been following along with interest, as I've also been trying
everything I can to improve gluster performance in my HCI cluster. My issue
is mostly latency related and my workloads are typically small file
operations which have been especially challenging.

Couple of things

1. About the MTU, did you also enable jumbo frames at switch level (if
applicable)? I have jumbo frames enabled but honestly didn't see much of an
impact from doing so.

2. About libgfapi. It's actually quite simple to enable it (at least if you
want to do some testing). It can be enabled on the hosted engine using
engine-config, i.e. *engine-config -s LibgfApiSupported=true* -- from my
experience you can do this while VMs are running and they won't pick up the
new config until powered off/restarted. So you are able to test it out on
one VM. Again, as some others have mentioned, this is not a default
option in oVirt because there are known bugs with the libgfapi
implementation. Some others have worked around these bugs in various ways
but like you, I am not willing to do so in a production environment. Still,
I think it's very much worth doing some tests on a VM with libgfapi enabled
compared to default fuse mount.
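A minimal sketch of that test procedure follows. The engine-config key is as given above; the restart and verification steps are the usual engine-config workflow, but double-check them against your oVirt version before trying this outside a lab:

```shell
# On the engine host: enable libgfapi support, verify, restart the engine.
engine-config -s LibgfApiSupported=true
engine-config -g LibgfApiSupported        # read back the current value
systemctl restart ovirt-engine

# Running VMs keep their FUSE mounts; power a single test VM off and on
# again so it picks up the new setting, then compare I/O inside the guest
# against an untouched VM still on the FUSE mount.
```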



On Fri, Mar 27, 2020 at 7:44 AM Christian Reiss 
wrote:

> Hey,
>
> thanks for writing. If I go for don't-choose-local my speed drops
> dramatically (halving). Speed between the hosts is okay (tested) but for
> some odd reason the MTU is at 1500 still. I was sure I set it to
> jumbo/9k. Oh well.
>
> Not during runtime. I can hear the gluster scream if the network dies
> for a second :)
>
> -Chris.
>
> On 24/03/2020 18:33, Darrell Budic wrote:
>  >
>  > cluster.choose-local: false
>  > cluster.read-hash-mode: 3
>  >
>  > if you have separate servers or nodes which are not HCI, to allow it
>  > to spread reads over multiple nodes.
> --
>   Christian Reiss - em...@christian-reiss.de /"\  ASCII Ribbon
> supp...@alpha-labs.net   \ /Campaign
>   X   against HTML
>   WEB alpha-labs.net / \   in eMails
>
>   GPG Retrieval https://gpg.christian-reiss.de
>   GPG ID ABCD43C5, 0x44E29126ABCD43C5
>   GPG fingerprint = 9549 F537 2596 86BA 733C  A4ED 44E2 9126 ABCD 43C5
>
>   "It's better to reign in hell than to serve in heaven.",
>John Milton, Paradise lost.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QYS7RIHXYAYW7XTPFVZBUHNGPFQMYA7H/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5JBBWOM3KGQ3FPY2OCW7ZBD4EGFEGDTR/


[ovirt-users] Re: Speed Issues

2020-03-27 Thread Jorick Astrego

On 3/27/20 11:01 AM, Christian Reiss wrote:
> Hey Strahil,
>
> as always: thanks!
>
> On 24/03/2020 12:23, Strahil Nikolov wrote:
>
>> performance.write-behind-window-size: 64MB (shard  size)
>
> This one doubled my speed from 200mb to 400mb!!
>
> I think this is where the meat is at.
>
> -Chris.

Won't this increase the risk of data loss? We have everything on dual
power feeds etc., so the risk of having all or 2/3 of the gluster
nodes fail at the same time is very minimal.

But still when that happens? And with a shard size of 512MB this would
be performance.write-behind-window-size: 512MB?

Always tweaking ;-)
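If one did want to try matching the window to a 512MB shard, the change itself is a single volume option. A sketch only ("vmstore" is a placeholder volume name), and weigh the data-loss window raised above before enabling this on production volumes:

```shell
# Raise the write-behind window on a volume to match a 512MB shard size.
gluster volume set vmstore performance.write-behind-window-size 512MB
gluster volume get vmstore performance.write-behind-window-size

# To step back to the Gluster default (1MB) if results disappoint:
gluster volume reset vmstore performance.write-behind-window-size
```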






Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts 



Tel: 053 20 30 270  i...@netbulae.euStaalsteden 4-3A
KvK 08198180
Fax: 053 20 30 271  www.netbulae.eu 7547 TA Enschede
BTW NL821234584B01



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2LCGZG5RP5AYWRYHM3XDFOJY6ZVY7CUL/


[ovirt-users] Re: can't run VM

2020-03-27 Thread eevans
Do you have the ovirtmgmt as the network card in the system?

Eric Evans
Digital Data Services LLC.
304.660.9080


-Original Message-
From: Strahil Nikolov  
Sent: Friday, March 27, 2020 12:18 AM
To: users@ovirt.org; garcialiang.a...@gmail.com
Subject: [ovirt-users] Re: can't run VM

On March 26, 2020 10:34:03 PM GMT+02:00, garcialiang.a...@gmail.com wrote:
>Hi,
>I created VM on ovirt-engine. But I can't run this VM. The message is :
>
>2020-03-26 21:28:02,745+01 ERROR
>[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>(default task-147) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run 
>VM VirtMachine due to a failed validation: [Cannot run VM. There is no 
>host that satisfies current scheduling constraints. See below for 
>details:, The host x did not satisfy internal filter Network 
>because display network ovirtmgmt was missing.] (User:
>admin@internal-authz).
>
>Could you help me ?
>
>Thanks
>Anne
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org Privacy 
>Statement: https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/JA7JKYDS7
>X6F3RMB6AVH5IB3JYXLAF74/

It seems that your VM has no interface on the ovirtmgmt network, but that 
network is defined as mandatory.
Either add the network to the VM, or change the network not to be mandatory.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: 
https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5J2PZNDAKQXMFDH566J7YKMFRHTTFG7X/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AKJHWP6UIUJMGQZ3GX6R6PTN7RMI2KTB/


[ovirt-users] Re: How to debug a failed Run Once launch

2020-03-27 Thread eevans
Can you give the exact error message or a screenshot? Check the engine.log as 
well for error messages.

 

Eric Evans

Digital Data Services LLC.

304.660.9080



 

From: Shareef Jalloq  
Sent: Friday, March 27, 2020 6:30 AM
To: eev...@digitaldatatechs.com
Cc: users@ovirt.org
Subject: [ovirt-users] Re: How to debug a failed Run Once launch

 

So just to confirm all the options when I'm setting up the VM.  This is Windows 
Server 2019 so I'm selecting:

 

 - General:Operating System = Windows 2019 x64

 - System:Memory Size = 16GB

 - System:Total Virtual CPUs = 8

 - BootOptions:First Device = CD-ROM

 - BootOptions:Second Device = Hard Disk

 

Then in the Run Once config I'm attaching both the floppy and CD as well as 
updating the predefined boot sequence.

 

When I installed the virtio-win package there are multiple VFD files in 
/usr/share/virtio-win/.  I'm assuming the win_amd64.vfd version is for 
non-server Windows installs so I picked virtio-win_servers_amd64.vfd.

 

Assuming that's all ok, it just seems to be a strange permissions issue...

 

 

On Fri, Mar 27, 2020 at 8:59 AM Shareef Jalloq  wrote:

Yep.

 

On Thu, Mar 26, 2020 at 7:25 PM  wrote:

The Windows Server 2019 needs to be the first boot device on the run once. Is 
it set that way in the boot options?



 

Eric Evans

Digital Data Services LLC.

304.660.9080



 

From: Shareef Jalloq  
Sent: Thursday, March 26, 2020 2:25 PM
To: users@ovirt.org  
Subject: [ovirt-users] How to debug a failed Run Once launch

 

Hi,

 

I'm trying to create a Windows Server 2019 VM and having found the virtio-win 
package that needed to be installed am facing the next hurdle.

 

I've followed the documentation and I'm using the Run Once option with the 
following boot options:

 

Attach Floppy: virtio-win_servers_amd64.vfd

Attach CD: Win 2019 ISO

CD-ROM at top of Predefined Boot Sequence

 

Clicking OK starts the VM but it immediately fails with a Failed Launching pop 
up.

 

How do I go about debugging this?

 

Shareef.

 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WFH2LE4F3RM3MC4FQBAB3ZNR3TN5J2LL/


[ovirt-users] Re: Speed Issues

2020-03-27 Thread Christian Reiss

Hey,

thanks for writing. If I go for don't-choose-local my speed drops 
dramatically (halving). Speed between the hosts is okay (tested) but for 
some odd reason the MTU is at 1500 still. I was sure I set it to 
jumbo/9k. Oh well.


Not during runtime. I can hear the gluster scream if the network dies 
for a second :)


-Chris.

On 24/03/2020 18:33, Darrell Budic wrote:
>
> cluster.choose-local: false
> cluster.read-hash-mode: 3
>
> if you have separate servers or nodes which are not HCI, to allow it
> to spread reads over multiple nodes.
--
 Christian Reiss - em...@christian-reiss.de /"\  ASCII Ribbon
   supp...@alpha-labs.net   \ /Campaign
 X   against HTML
 WEB alpha-labs.net / \   in eMails

 GPG Retrieval https://gpg.christian-reiss.de
 GPG ID ABCD43C5, 0x44E29126ABCD43C5
 GPG fingerprint = 9549 F537 2596 86BA 733C  A4ED 44E2 9126 ABCD 43C5

 "It's better to reign in hell than to serve in heaven.",
  John Milton, Paradise lost.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QYS7RIHXYAYW7XTPFVZBUHNGPFQMYA7H/


[ovirt-users] Re: How to debug a failed Run Once launch

2020-03-27 Thread Shareef Jalloq
So just to confirm all the options when I'm setting up the VM.  This is
Windows Server 2019 so I'm selecting:

 - General:Operating System = Windows 2019 x64
 - System:Memory Size = 16GB
 - System:Total Virtual CPUs = 8
 - BootOptions:First Device = CD-ROM
 - BootOptions:Second Device = Hard Disk

Then in the Run Once config I'm attaching both the floppy and CD as well as
updating the predefined boot sequence.

When I installed the virtio-win package there are multiple VFD files in
/usr/share/virtio-win/.  I'm assuming the win_amd64.vfd version is for
non-server Windows installs so I picked virtio-win_servers_amd64.vfd.

Assuming that's all ok, it just seems to be a strange permissions issue...


On Fri, Mar 27, 2020 at 8:59 AM Shareef Jalloq  wrote:

> Yep.
>
> On Thu, Mar 26, 2020 at 7:25 PM  wrote:
>
>> The Windows Server 2019 needs to be the first boot device on the run
>> once. Is it set that way in the boot options?
>>
>>
>>
>> Eric Evans
>>
>> Digital Data Services LLC.
>>
>> 304.660.9080
>>
>>
>>
>> *From:* Shareef Jalloq 
>> *Sent:* Thursday, March 26, 2020 2:25 PM
>> *To:* users@ovirt.org
>> *Subject:* [ovirt-users] How to debug a failed Run Once launch
>>
>>
>>
>> Hi,
>>
>>
>>
>> I'm trying to create a Windows Server 2019 VM and having found the
>> virtio-win package that needed to be installed am facing the next hurdle.
>>
>>
>>
>> I've followed the documentation and I'm using the Run Once option with
>> the following boot options:
>>
>>
>>
>> Attach Floppy: virtio-win_servers_amd64.vfd
>>
>> Attach CD: Win 2019 ISO
>>
>> CD-ROM at top of Predefined Boot Sequence
>>
>>
>>
>> Clicking OK starts the VM but it immediately fails with a Failed
>> Launching pop up.
>>
>>
>>
>> How do I go about debugging this?
>>
>>
>>
>> Shareef.
>>
>>
>>


[ovirt-users] Re: vm console problem

2020-03-27 Thread David David
Here is the debug output from opening console.vv with remote-viewer:

2020-03-27 14:09 GMT+04:00, Milan Zamazal :
> David David  writes:
>
>> Yes, I have; console.vv is attached.
>
> It looks the same as mine.
>
> There is a difference in our logs, you have
>
>   Possible auth 19
>
> while I have
>
>   Possible auth 2
>
> So I still suspect a wrong authentication method is used, but I don't
> have any idea why.
>
> Regards,
> Milan
>
>> 2020-03-26 21:38 GMT+04:00, Milan Zamazal :
>>> David David  writes:
>>>
 I copied all certs except "cacrl" from the qemu server to my desktop
 into /etc/pki/
>>>
>>> This is not needed, the CA certificate is included in console.vv and no
>>> other certificate should be needed.
>>>
 but remote-viewer still didn't work
>>>
>>> The log looks like remote-viewer is attempting certificate
>>> authentication rather than password authentication.  Do you have
>>> password in console.vv?  It should look like:
>>>
>>>   [virt-viewer]
>>>   type=vnc
>>>   host=192.168.122.2
>>>   port=5900
>>>   password=fxLazJu6BUmL
>>>   # Password is valid for 120 seconds.
>>>   ...
>>>
>>> Regards,
>>> Milan
>>>
 2020-03-26 2:22 GMT+04:00, Nir Soffer :
> On Wed, Mar 25, 2020 at 12:45 PM David David 
> wrote:
>>
>> ovirt 4.3.8.2-1.el7
>> gtk-vnc2-1.0.0-1.fc31.x86_64
>> remote-viewer version 8.0-3.fc31
>>
>> I can't open a VM console with remote-viewer.
>> The VM uses the VNC console protocol.
>> When I click the console button to connect to a VM, the remote-viewer
>> console disappears immediately.
>>
>> remote-viewer debug in attachment
>
> You have an issue with the certificates:
>
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.238:
> ../src/vncconnection.c Set credential 2 libvirt
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> ../src/vncconnection.c Searching for certs in /etc/pki
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> ../src/vncconnection.c Searching for certs in /root/.pki
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> ../src/vncconnection.c Failed to find certificate CA/cacert.pem
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> ../src/vncconnection.c No CA certificate provided, using GNUTLS global
> trust
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> ../src/vncconnection.c Failed to find certificate CA/cacrl.pem
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> ../src/vncconnection.c Failed to find certificate
> libvirt/private/clientkey.pem
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> ../src/vncconnection.c Failed to find certificate
> libvirt/clientcert.pem
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> ../src/vncconnection.c Waiting for missing credentials
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> ../src/vncconnection.c Got all credentials
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> ../src/vncconnection.c No CA certificate provided; trying the system
> trust store instead
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
> ../src/vncconnection.c Using the system trust store and CRL
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
> ../src/vncconnection.c No client cert or key provided
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
> ../src/vncconnection.c No CA revocation list provided
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.241:
> ../src/vncconnection.c Handshake was blocking
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.243:
> ../src/vncconnection.c Handshake was blocking
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.251:
> ../src/vncconnection.c Handshake was blocking
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.298:
> ../src/vncconnection.c Handshake done
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.298:
> ../src/vncconnection.c Validating
> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.301:
> ../src/vncconnection.c Error: The certificate is not trusted
>
> Adding people that may know more about this.
>
> Nir
>
>
>>>
>>>
>
>
(remote-viewer:47852): virt-viewer-DEBUG: 14:16:35.872: Opening display to 
/home/david/Downloads/console.vv
(remote-viewer:47852): virt-viewer-DEBUG: 14:16:35.873: Guest (null) has a vnc 
display
(remote-viewer:47852): gtk-vnc-DEBUG: 14:16:35.873: ../src/vncconnection.c Init 
VncConnection=0x5578f6b75bf0
(remote-viewer:47852): gtk-vnc-DEBUG: 14:16:35.873: ../src/vncdisplaykeymap.c 
Using X11 backend
(remote-viewer:47852): gtk-vnc-DEBUG: 14:16:35.873: ../src/vncdisplaykeymap.c 
XKB keyboard map name '(unnamed)'
(remote-viewer:47852): gtk-vnc-DEBUG: 14:16:35.874: ../src/vncdisplaykeymap.c 
Server vendor is 'Fedora Project'
(remote-viewer:47852): gtk-vnc-DEBUG: 14:16:35.874: ../src/vncdisplaykeymap.c 
Found extension 'Generic Event Extension'
(remote-viewer:47852): gtk-vnc-DEBUG: 14:16:35.874: 

[ovirt-users] Re: vm console problem

2020-03-27 Thread Milan Zamazal
David David  writes:

> Yes, I have; console.vv is attached.

It looks the same as mine.

There is a difference in our logs, you have

  Possible auth 19

while I have

  Possible auth 2

So I still suspect a wrong authentication method is used, but I don't
have any idea why.

Regards,
Milan
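For context on those two numbers: they are RFB (VNC) security types from the protocol's registry, which would explain why the two clients behave so differently. A minimal lookup as a sketch:

```shell
# RFB (VNC) security type numbers, per the protocol registry.
# Type 2 is classic VNC password authentication; type 19 is VeNCrypt,
# the TLS/x509-capable extension -- which would explain why one client
# goes hunting for certificates while the other simply uses the
# password from console.vv.
rfb_auth_name() {
  case "$1" in
    1)  echo "None" ;;
    2)  echo "VNC Authentication (password)" ;;
    19) echo "VeNCrypt (TLS/x509)" ;;
    *)  echo "unknown" ;;
  esac
}

rfb_auth_name 2    # VNC Authentication (password)
rfb_auth_name 19   # VeNCrypt (TLS/x509)
```

So "Possible auth 19" means the server is offering the TLS/certificate path, while "Possible auth 2" is plain password auth.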

> 2020-03-26 21:38 GMT+04:00, Milan Zamazal :
>> David David  writes:
>>
>>> I copied all certs except "cacrl" from the qemu server to my desktop
>>> into /etc/pki/
>>
>> This is not needed, the CA certificate is included in console.vv and no
>> other certificate should be needed.
>>
>>> but remote-viewer still didn't work
>>
>> The log looks like remote-viewer is attempting certificate
>> authentication rather than password authentication.  Do you have
>> password in console.vv?  It should look like:
>>
>>   [virt-viewer]
>>   type=vnc
>>   host=192.168.122.2
>>   port=5900
>>   password=fxLazJu6BUmL
>>   # Password is valid for 120 seconds.
>>   ...
>>
>> Regards,
>> Milan
>>
>>> 2020-03-26 2:22 GMT+04:00, Nir Soffer :
 On Wed, Mar 25, 2020 at 12:45 PM David David  wrote:
>
> ovirt 4.3.8.2-1.el7
> gtk-vnc2-1.0.0-1.fc31.x86_64
> remote-viewer version 8.0-3.fc31
>
> I can't open a VM console with remote-viewer.
> The VM uses the VNC console protocol.
> When I click the console button to connect to a VM, the remote-viewer
> console disappears immediately.
>
> remote-viewer debug in attachment

 You have an issue with the certificates:

 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.238:
 ../src/vncconnection.c Set credential 2 libvirt
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
 ../src/vncconnection.c Searching for certs in /etc/pki
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
 ../src/vncconnection.c Searching for certs in /root/.pki
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
 ../src/vncconnection.c Failed to find certificate CA/cacert.pem
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
 ../src/vncconnection.c No CA certificate provided, using GNUTLS global
 trust
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
 ../src/vncconnection.c Failed to find certificate CA/cacrl.pem
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
 ../src/vncconnection.c Failed to find certificate
 libvirt/private/clientkey.pem
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
 ../src/vncconnection.c Failed to find certificate
 libvirt/clientcert.pem
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
 ../src/vncconnection.c Waiting for missing credentials
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
 ../src/vncconnection.c Got all credentials
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
 ../src/vncconnection.c No CA certificate provided; trying the system
 trust store instead
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
 ../src/vncconnection.c Using the system trust store and CRL
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
 ../src/vncconnection.c No client cert or key provided
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
 ../src/vncconnection.c No CA revocation list provided
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.241:
 ../src/vncconnection.c Handshake was blocking
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.243:
 ../src/vncconnection.c Handshake was blocking
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.251:
 ../src/vncconnection.c Handshake was blocking
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.298:
 ../src/vncconnection.c Handshake done
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.298:
 ../src/vncconnection.c Validating
 (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.301:
 ../src/vncconnection.c Error: The certificate is not trusted

 Adding people that may know more about this.

 Nir


>>
>>


[ovirt-users] Re: Speed Issues

2020-03-27 Thread Christian Reiss

Hey Strahil,

as always: thanks!

On 24/03/2020 12:23, Strahil Nikolov wrote:

Hey Chris,

What type is your VM?


CentOS7.


Try the 'High Performance' VM type (there is good RH documentation on that
topic).


I was googly-eying that as well. Will try that tonight :)


1. Check the VM disk scheduler. Use 'noop/none' (depending on whether multiqueue
is enabled) to allow the hypervisor to aggregate the I/O requests from multiple VMs.
Next, set the 'noop/none' disk scheduler on the hosts - these two are optimal
for SSDs and NVMe disks (if I recall correctly you are using SSDs).


Yeah the gluster disks do have noop already.
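For anyone following along, a quick way to confirm and change this at runtime; the device name below is a placeholder for your actual brick disk:

```shell
# Show the active scheduler for a block device -- the bracketed entry
# is the one currently in use, e.g. "[none] mq-deadline kyber bfq"
cat /sys/block/sda/queue/scheduler

# Switch it at runtime (non-persistent; use a udev rule or tuned
# profile to make it stick across reboots)
echo none > /sys/block/sda/queue/scheduler
```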


2. Disable C-states on the host and guest (there are a lot of articles about
that).


Not sure it's a CPU bottleneck in any capacity, but I'll dig into this.


3. Enable MTU 9000 on the hypervisor (gluster node).


Already done.


4. You can try setting/unsetting the tunables in the db-workload group and run
benchmarks with a real workload.


Will also check!


5. Some users reported that enabling TCP offload on the hosts gave a huge
improvement in gluster performance - you can try that.
Of course there are mixed feelings, as others report that disabling it improves
performance. I guess it is workload specific.





performance.write-behind-window-size: 64MB (shard  size)


This one doubled my speed from 200 MB/s to 400 MB/s!

I think this is where the meat is at.
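For reference, the option Chris is describing is set per volume. A sketch using the vmstore volume name from earlier in the thread (adjust to your own volume):

```shell
# Set the write-behind window to match the shard size (64MB here),
# then verify the value took effect; running VMs may need a power
# cycle to pick up the change.
gluster volume set vmstore performance.write-behind-window-size 64MB
gluster volume get vmstore performance.write-behind-window-size
```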

-Chris.


--
 Christian Reiss - em...@christian-reiss.de /"\  ASCII Ribbon
   supp...@alpha-labs.net   \ /Campaign
 X   against HTML
 WEB alpha-labs.net / \   in eMails

 GPG Retrieval https://gpg.christian-reiss.de
 GPG ID ABCD43C5, 0x44E29126ABCD43C5
 GPG fingerprint = 9549 F537 2596 86BA 733C  A4ED 44E2 9126 ABCD 43C5

 "It's better to reign in hell than to serve in heaven.",
  John Milton, Paradise lost.


[ovirt-users] Re: Speed Issues

2020-03-27 Thread Christian Reiss

Hey Alex,

you too, thanks for writing.
I'm on 64 MB as per the oVirt default. We tried no sharding, 128 MB
sharding, and 64 MB sharding (copying the disk each time). There was no
increase or decrease in disk speed either way.


Besides losing HA capabilities, what other caveats are there?

-Chris.

On 24/03/2020 19:25, Alex McWhirter wrote:
Red Hat also recommends a shard size of 512 MB; it's actually the only
shard size they support. Also check the chunk size on the LVM thin pools
running the bricks - it should be at least 2 MB. Note that changing the shard
size only applies to new VM disks created after the change. Changing the chunk
size requires making a new brick.


libgfapi brings a huge performance boost; in my opinion it's almost a
necessity unless you have a ton of extra disk speed / network
throughput. Just be aware of the caveats.
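To check the two values Alex mentions before changing anything; the volume name is an example from earlier in the thread:

```shell
# Current shard size on the volume (only affects disks created
# after the value is changed)
gluster volume get vmstore features.shard-block-size

# Chunk size of the LVM thin pools backing the bricks
lvs -o lv_name,chunk_size
```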




[ovirt-users] Re: Speed Issues

2020-03-27 Thread Christian Reiss

Hey Jayme,

thanks for replying; sorry for the delay.
If I am understanding this right, there is no real official way to
enable libgfapi, and if you somehow manage to get it running you will
lose HA capabilities, which is something we like on our production servers.


The most recent post I could find on the matter
(https://www.mail-archive.com/users@ovirt.org/msg59664.html) reads like
it's worth a try for hobbyists, but for production servers I am a
little bit scared.


Do you maybe have a document or other source that works with 4.3.x
versions and inspires confidence? :-)


-Chris

On 24/03/2020 19:49, Jayme wrote:
I strongly believe that the FUSE mount is the real reason for poor
performance in HCI, and these minor gluster and other tweaks won't
satisfy most people seeking I/O performance. Enabling libgfapi is probably
the best option. Red Hat has recently closed bug reports related to libgfapi
citing won't-fix, and one comment suggests that libgfapi was not showing
enough of a performance improvement to bother with, which appears to
contradict what many oVirt users are seeing. It's confusing to me why
libgfapi as a default option is not being given any priority.


https://bugzilla.redhat.com/show_bug.cgi?id=1465810

"We do not plan to enable libgfapi for oVirt/RHV. We did not find enough 
performance improvement justification for it"




[ovirt-users] Re: Speed Issues

2020-03-27 Thread Christian Reiss

Hey,

thanks for writing. Sorry about the delay.

On 25/03/2020 00:25, Nir Soffer wrote:

> These settings mean:
>
>> performance.strict-o-direct: on
>> network.remote-dio: enable
>
> That you are using direct I/O both on the client and server side.
I changed them to off, to no avail. Yields the same results.
>> Writing inside the /gluster_bricks yields those 2GB/sec writes, Reading
>> the same.
>
> How did you test this?
I ran

dd if=/dev/zero of=testfile oflag=direct bs=1M status=progress
(with varying block sizes) on

- the mounted gluster brick (/gluster_bricks...)
- the mounted gluster volume (/rhev.../mount/...)
- inside a running VM

I also switched it around and read an image file from the gluster volume 
with the same speeds.
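As an aside, fio tends to give more repeatable direct-I/O numbers than dd for this kind of comparison, since it reports bandwidth and latency per run. A sketch; the file path and sizes are examples to be adapted to the mount points above:

```shell
# Sequential direct-I/O write test, bypassing the page cache
# (equivalent intent to dd's oflag=direct)
fio --name=seqwrite --rw=write --bs=1M --size=4G --direct=1 \
    --filename=/rhev/data-center/mnt/glusterSD/testfile --end_fsync=1

# Same file, sequential direct-I/O reads
fio --name=seqread --rw=read --bs=1M --size=4G --direct=1 \
    --filename=/rhev/data-center/mnt/glusterSD/testfile
```

Running the same job file against the brick path, the gluster mount, and inside a VM makes the three layers directly comparable.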

> Did you test reading from the storage on the server side using direct
> I/O? if not,
> you test accessing server buffer cache, which is pretty fast.
Which is where oflag comes in. I can confirm that skipping it results
in really, really fast I/O until the buffer is full. With oflag=direct I
still see ~2 GB/s on the RAID, but only 200 MB/s on the gluster volume.

>> Reading inside the /rhev/data-center/mnt/glusterSD/ dir reads go down to
>> 366mb/sec while writes plummet to to 200mb/sec.
>
> This use direct I/O.
Even with direct I/O turned on (it is currently off, and both settings
yield the same results), this is way too slow for direct I/O.


> Please share the commands/configuration files used to perform the tests.
>
> Adding storage folks that can help with analyzing this.
I am happy to oblige and supply any required logs or profiling
information if you'd be so kind as to tell me which ones, precisely.


Stay healthy!




[ovirt-users] Re: How to debug a failed Run Once launch

2020-03-27 Thread Shareef Jalloq
Yep.

On Thu, Mar 26, 2020 at 7:25 PM  wrote:

> The Windows Server 2019 needs to be the first boot device on the run once.
> Is it set that way in the boot options?
>
>
>
> Eric Evans
>
> Digital Data Services LLC.
>
> 304.660.9080
>
>
>
> *From:* Shareef Jalloq 
> *Sent:* Thursday, March 26, 2020 2:25 PM
> *To:* users@ovirt.org
> *Subject:* [ovirt-users] How to debug a failed Run Once launch
>
>
>
> Hi,
>
>
>
> I'm trying to create a Windows Server 2019 VM and, having found the
> virtio-win package that needed to be installed, am facing the next hurdle.
>
>
>
> I've followed the documentation and I'm using the Run Once option with the
> following boot options:
>
>
>
> Attach Floppy: virtio-win_servers_amd64.vfd
>
> Attach CD: Win 2019 ISO
>
> CD-ROM at top of Predefined Boot Sequence
>
>
>
> Clicking OK starts the VM but it immediately fails with a Failed Launching
> pop up.
>
>
>
> How do I go about debugging this?
>
>
>
> Shareef.
>
>
>