[ovirt-users] error while setup

2019-04-24 Thread W3SERVICES
PLAY [gluster_servers] *
TASK [Run a shell script] **
changed: [localhost.localdomain] => (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h localhost.localdomain)
PLAY

[ovirt-users] Re: Template Disk Corruption

2019-04-24 Thread Alex McWhirter
It happens for every template, when you make a desktop VM out of it and then delete that VM. If you make a server VM there are no issues. On 2019-04-24 09:30, Benny Zlotnik wrote: Does it happen all the time? For every template you create? Or is it for a specific template? On Wed, Apr 24, 2019 at 12:59 PM

[ovirt-users] Re: Upgrade from 4.3.2 to 4.3.3 fails on database schema update

2019-04-24 Thread eshwayri
Thank you; that worked. Upgrade completed successfully.

[ovirt-users] Adding network to VM - What stupid thing have I missed?

2019-04-24 Thread eshwayri
When creating a new VM, it looks like I connect its NIC(s) under the "Instantiate VM network interfaces by picking a vNIC profile." setting. The problem I am seeing is that the drop-down only has "Empty" and "br-kvm-prod" (my production bridge). I should have two more. Under Networks and

[ovirt-users] New disk creation very slow after upgrade to 4.3.3

2019-04-24 Thread Steffen Luitz
This is on a 3-node hyperconverged environment with GlusterFS. After upgrading to oVirt 4.3.3 (from 4.3.2), creating a new disk takes a very long time (hours for a 100 GByte disk, making it essentially impossible to create a new disk image). In the UI the default is "preallocated", but changing it to

[ovirt-users] Arbiter brick disk performance

2019-04-24 Thread Leo David
Hello Everyone, I need to look into adding some enterprise-grade SAS disks (both SSD and spinning), and since the prices are not too low, I would like to benefit from replica 3 arbitrated volumes. Therefore, I intend to buy some smaller disks to use as arbiter bricks. My question is, what

[ovirt-users] [ANN] oVirt 4.3.3 second async update is now available

2019-04-24 Thread Sandro Bonazzola
The oVirt Team has just released a new version of the following packages:
- ovirt-engine-4.3.3.6
The async release addresses the following bugs:
- Bug 1701205 - Creating a new VM over the not defaulted cluster fails with "CPU Profile doesn't

[ovirt-users] Re: Arbiter brick disk performance

2019-04-24 Thread Strahil
I think 2 small SSDs (RAID 1 via mdadm) can do the job better, as SSDs have lower latencies. You can use them both for the OS (minimum needed is 60 GB) and the rest will be plenty for an arbiter. By the way, if you plan on using Gluster snapshots, use thin LVM for the brick. Best Regards, Strahil
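[A minimal sketch of that layout, assuming the two SSDs appear as /dev/sdX and /dev/sdY; device names and sizes are examples, not from the thread:]

    # Mirror the two SSDs with mdadm (RAID 1)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY
    # Thin LVM on top, as suggested for Gluster snapshot support
    pvcreate /dev/md0
    vgcreate vg_arbiter /dev/md0
    lvcreate -L 100G --thinpool tp_arbiter vg_arbiter
    lvcreate -V 100G --thin -n lv_arbiter vg_arbiter/tp_arbiter
    mkfs.xfs /dev/vg_arbiter/lv_arbiter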

[ovirt-users] Re: Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS

2019-04-24 Thread Strahil
Fix those disconnected nodes and run find against a node that has successfully mounted the volume. Best Regards, Strahil Nikolov. On Apr 24, 2019 15:31, Andreas Elvers wrote: > > The file handle is stale so find will display: > > "find: >

[ovirt-users] Re: Template Disk Corruption

2019-04-24 Thread Benny Zlotnik
Does it happen all the time? For every template you create? Or is it for a specific template? On Wed, Apr 24, 2019 at 12:59 PM Alex McWhirter wrote: > > oVirt is 4.2.7.5 > VDSM is 4.20.43 > > Not sure which logs are applicable, I don't see any obvious errors in > vdsm.log or engine.log. After

[ovirt-users] Re: Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS

2019-04-24 Thread Andreas Elvers
After rebooting the node that was not able to mount the gluster volume, things improved eventually. SPM went away and restarted for the datacenter, and suddenly node03 was able to mount the gluster volume. In between I was down to 1/3 active bricks, which results in a read-only GlusterFS. I was

[ovirt-users] Re: Arbiter brick disk performance

2019-04-24 Thread Leo David
Thank you very much Strahil, very helpful, as always. So I would equip the 3rd server and allocate one small (120 - 240 GB) consumer-grade SSD for each of the gluster volumes, and at volume creation specify the small SSDs as the 3rd brick. Does it make sense? Thank you! On Wed, Apr 24, 2019,
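[For reference, the corresponding create command would look roughly like this; hostnames and brick paths are hypothetical. The last brick listed becomes the arbiter:]

    gluster volume create data replica 3 arbiter 1 \
      node01:/gluster_bricks/data/brick \
      node02:/gluster_bricks/data/brick \
      node03:/gluster_bricks/data_arbiter/brick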

[ovirt-users] Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS

2019-04-24 Thread Andreas Elvers
Hi, I am currently upgrading my oVirt setup from 4.2.8 to 4.3.3.1. The setup consists of:
Datacenter/Cluster Default: [fully upgraded to 4.3.3.1] 2 nodes (node04, node05) - NFS storage domain with self-hosted engine
Datacenter Luise: Cluster1: 3 nodes (node01, node02, node03) - Node NG with

[ovirt-users] Template Disk Corruption

2019-04-24 Thread Alex McWhirter
1. Create a server template from a server VM (so it's a full copy of the disk).
2. From the template create a VM, overriding server to desktop, so that it becomes a qcow2 overlay on the template's raw disk.
3. Boot the VM.
4. Shut down the VM.
5. Delete the VM.
The template disk is now corrupt; any new machines made
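[To illustrate the layering described above, the backing chain can be inspected with qemu-img; the image paths below are hypothetical:]

    # Show the qcow2 overlay and the raw template disk it points to
    qemu-img info --backing-chain /path/to/desktop-vm-disk.qcow2
    # Check the overlay's internal consistency (qcow2 only; raw images do not support checks)
    qemu-img check /path/to/desktop-vm-disk.qcow2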

[ovirt-users] Re: Template Disk Corruption

2019-04-24 Thread Benny Zlotnik
Can you provide more info (logs, versions)? On Wed, Apr 24, 2019 at 11:04 AM Alex McWhirter wrote: > > 1. Create server template from server VM (so it's a full copy of the > disk) > > 2. From template create a VM, override server to desktop, so that it > become a qcow2 overlay to the template

[ovirt-users] Re: Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS

2019-04-24 Thread Andreas Elvers
Restarting improved things a little bit. Bricks on node03 are still shown as down, but "gluster volume status" is looking better.
Saiph:~ andreas$ ssh node01 gluster volume status vmstore
Status of volume: vmstore
Gluster process TCP Port RDMA Port Online Pid

[ovirt-users] Re: Template Disk Corruption

2019-04-24 Thread Alex McWhirter
oVirt is 4.2.7.5, VDSM is 4.20.43. Not sure which logs are applicable; I don't see any obvious errors in vdsm.log or engine.log. After you delete the desktop VM and create another based on the template, the new VM still starts, but it reports disk read errors and fails to boot. On 2019-04-24

[ovirt-users] Re: Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS

2019-04-24 Thread Andreas Elvers
"systemctl restart glusterd" on node03 did not help. Still getting: node03#: ls -l /rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore ls: cannot access /rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore: Transport endpoint is not connected Engine still

[ovirt-users] Re: Prevent 2 different VMs from running on the same host

2019-04-24 Thread Jorick Astrego
Hi, Yes, use affinity groups for this:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3-beta/html/virtual_machine_management_guide/sect-affinity_groups
*The VM Affinity Rule* When you create an affinity group, you select the virtual machines that belong to
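[As a sketch, not from the thread, the same can be done through the engine REST API; the engine URL, credentials, and cluster UUID below are placeholders:]

    # Create an enforcing anti-affinity group in a cluster
    curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
      -X POST "https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_UUID/affinitygroups" \
      -d '<affinity_group><name>keep-apart</name><positive>false</positive><enforcing>true</enforcing></affinity_group>'

[With positive=false and enforcing=true, the scheduler refuses to place the group's VMs on the same host.]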

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-24 Thread wodel youchi
Hi, I am not sure if I understood your question, but here is a statement from the install guide of RHHI (Deploying RHHI): "You cannot create a volume that spans more than 3 nodes, or expand an existing volume so that it spans across more than 3 nodes at a time." Page 11, 2.7 Scaling.

[ovirt-users] Prevent 2 different VMs from running on the same host

2019-04-24 Thread Paulo Silva
Hi, I have a cluster of 6 hosts using oVirt 4.3, and I want to make sure that 2 VMs are always started on different hosts. Is it possible to prevent 2 different VMs from running on the same physical host without manually specifying a different set of hosts where each VM can start? Thanks

[ovirt-users] Re: Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS

2019-04-24 Thread Strahil Nikolov
Try to run a find from a working server (for example node02):
find /rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore -exec stat {} \;
Also, check if all peers see each other. Best Regards, Strahil Nikolov. On Wednesday, April 24, 2019, 3:27:41 GMT-4, Andreas Elvers
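[The peer check maps to standard gluster commands; volume name as elsewhere in the thread:]

    # Verify that every peer sees the others as connected
    gluster peer status
    # Confirm that the brick processes for the volume are online
    gluster volume status vmstore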

[ovirt-users] Unable to use MAC address starting with reserved value 0xFE

2019-04-24 Thread Ricardo Alonso
Is there a way to use a MAC address starting with FE? The machine has a license requirement tied to the MAC address, and when I try to start it, it fails with the message: VM is down with error. Exit message: unsupported configuration: Unable to use MAC address starting with reserved value 0xFE -

[ovirt-users] Re: Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS

2019-04-24 Thread Andreas Elvers
The file handle is stale, so find will display:
"find: '/rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore': Transport endpoint is not connected"
"stat /rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore" will output stat: cannot stat

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-24 Thread Adrian Quintero
Strahil, this is the issue I am seeing now: [image: image.png] This is through the UI when I try to create a new brick. So my concern is: if I modify the filters on the OS, what impact will that have after server reboots? Thanks, On Mon, Apr 22, 2019 at 11:39 PM Strahil wrote: > I have edited my
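[For context: the filters in question live in /etc/lvm/lvm.conf and persist across reboots. On oVirt 4.2+ hosts, vdsm-tool can report and configure the filter; the filter line shown is an example of the format, not the poster's actual config:]

    # Analyze the host and show the LVM filter vdsm recommends
    vdsm-tool config-lvm-filter
    # Example of a resulting line in /etc/lvm/lvm.conf:
    # filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-.*$|", "r|.*|" ]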