[ovirt-users] Re: 10GB disk created on glusterfs storage has only 4096B in vm

2020-07-30 Thread shadow emy
Good that it is OK for you now.
As Gianluca told you, the command to see all the gluster volume settings is:
gluster volume get vol_name all

The previous command you used, gluster volume info vol_name, will only list the
settings modified from their defaults, not all the settings; its output shows only
the volume's "Options Reconfigured".

Regarding if this is an important bug that affects gluster 7.6(or other 
versions) and should be disabled by default, i dont know for sure.
I am just using ovirt as a hyperconvergence tool.
 Maybe someone from the ovirt development team can answer that.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EFVT6AMOI4IQJKR6SURWNEMWVY622Y6T/


[ovirt-users] Re: oVirt 4.3 -> 4.4 Upgrade Path Questions

2020-07-08 Thread shadow emy
Just a heads-up: 4.4.1 was released today, which fixes a large number of upgrade bugs.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NDGETPMEYF6JBZH35BXOH2YRS36JOCQX/


[ovirt-users] Re: Ovirt 4.3.10 Glusterfs SSD slow performance over 10GE

2020-07-07 Thread shadow emy
I found the problem.
The 3.x kernel in CentOS 7.8 is really too old and does not know how to properly
handle new SSD disks or RAID controllers with the latest BIOS updates applied.

Booting the latest Arch Linux ISO image with kernel 5.7.6, or CentOS 8.2 with
kernel 4.18, brought the performance up to the right values.
I ran multiple dd tests on the above images using block sizes of 10, 100 and
1000M and had a constant write speed of 1.1 GB/s. This is the expected value for
2 SSDs in RAID 0.
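
For reference, the kind of test I ran was roughly this (a sketch; the output file
name is a placeholder, and oflag=direct is there to bypass the page cache so the
number reflects the disks rather than RAM):

dd if=/dev/zero of=ddtest bs=1000M count=8 oflag=direct status=progress
rm -f ddtest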

I had also enabled cache settings on the Dell PERC H710 RAID controller: write
cache set to "Write Back", disk cache set to "Enabled", and read cache set to
"Read Ahead". For those who think "Write Back" is a problem and the data might
get corrupted, this should be OK now with a recent filesystem such as XFS or ext4,
which can recover in case of power loss. To make the data safer, I also have a
RAID cache battery and UPS redundancy.

Now I know I must run oVirt 4.4 with CentOS 8.2 for good performance.
I saw that upgrading from 4.3 to 4.4 is not an easy task; there are multiple
failures and it is not quite straightforward (I also have the hosted engine on the
shared Gluster storage, which makes this upgrade even more difficult), but
eventually I think I can get it running.

Thanks,
Emy
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZOFENYMPKXC6Z6MHOFFAUPPQCUFDNKHO/


[ovirt-users] Re: Lots of problems with deploying the hosted-engine (ovirt 4.4 | CentOS 8.2.2004)

2020-07-07 Thread shadow emy
Yes, I also had a lot of problems installing oVirt 4.4. I think it was not
tested enough.
I am upgrading from oVirt 4.3 to 4.4 using shared GlusterFS storage, which
makes things more difficult.

Regarding your error, I believe it is something with the oVirt 4.4 rpm
repository (sometimes it times out and sometimes it doesn't).
First, as you said, check your disk space before running the hosted-engine
deploy. If the setup fails there is no Ansible disk cleanup task for /var/tmp;
the cleanup task only runs at the end of the Ansible playbook when the deploy
succeeds.
What I tried with success was to clear the rpm package metadata just before
starting the deploy: run "dnf clean all", then "dnf update".
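
Roughly, the sequence I used was (a sketch):

dnf clean all            # drop possibly stale repository metadata
dnf update               # refresh metadata and confirm the repositories respond
hosted-engine --deploy   # then start the deployment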
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DDSE6MI7XCB6T75ARSRFXY3FCE62TECU/


[ovirt-users] Re: oVirt-node 4.4.0 - Hosted engine deployment fails when host is unable to download updates

2020-07-07 Thread shadow emy
I forgot to mention: after every failed deploy on a new host using oVirt 4.4
you can run
/usr/share/ovirt-hosted-engine/scripts/ovirt-hosted-engine-cleanup to free up
disk space and remove configs that did not finish setting up.

Thanks,
Emy
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6WZSXVGSV26MS73DRSFARKMVNC4WABXI/


[ovirt-users] Re: Lots of problems with deploying the hosted-engine (ovirt 4.4 | CentOS 8.2.2004)

2020-07-07 Thread shadow emy
I am using the command line "hosted-engine --deploy" for the install, not Cockpit.
I had problems with rpm metadata and the deploy failed, but yes, in your case it
might be IPv6 problems.
I am using IPv4; I have never tried IPv6 on oVirt.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OUWKGGVZP2UN7NZPRTNXJ6WJ3ONK36QO/


[ovirt-users] Re: oVirt-node 4.4.0 - Hosted engine deployment fails when host is unable to download updates

2020-07-07 Thread shadow emy
Hello,

I experienced the same error.

There can be two problems:
1. Sometimes the oVirt 4.4 repository does not respond, either because it is down
or because the internet connection to it does not work correctly (firewall,
routing, etc.).
My solution for this was to clear the rpm package cache using "dnf clean all"
and afterwards do a "dnf update" to get the packages back into the cache and to
check that the connections to the repository are OK.
Then retry the deployment.

2. The disk space might be full and engine-setup can't download the needed
packages.
Multiple failed setup runs of hosted-engine --deploy can fill up the disk space.
Before a rerun, delete everything in the /var/tmp/ directory (see the quick
sketch below).
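
A minimal sketch of what I do before a rerun (assuming nothing else on the host
still needs the files under /var/tmp):

rm -rf /var/tmp/*        # remove leftovers from previous failed runs
df -h /var               # double-check there is enough free space
hosted-engine --deploy   # then retry the deployment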

For issue 1, the oVirt development team could implement a retry mechanism on the
failing Ansible task (maybe max 3 retries?), instead of just failed_when, in case
of a uri/package update error.

Thanks,
Emy
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T2FBY3SFS7WK5J5CVAMNXTSMXNVODXUK/


[ovirt-users] Re: Weird problem starting VMs in oVirt-4.4

2020-07-10 Thread shadow emy
I had the same problem when booting VMs in oVirt 4.4.0.
The legacy BIOS could not detect the disk to boot from, and yes, as suspected it
was a storage problem with Gluster.
After upgrading to oVirt 4.4.1 and running "Optimize for Virt Store" again I
don't see this boot problem anymore, but maybe it was something else? I don't
know exactly what was fixed.


Thanks,
Emy
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZSL44L5NTRUEMJVURANE65BBH2PE23FV/


[ovirt-users] Re: Ovirt 4.3.10 Glusterfs SSD slow performance over 10GE

2020-06-29 Thread shadow emy
Thank you for the information provided.

Yep, the MTU is working OK with Jumbo Frames on all gluster nodes.
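
One way to verify it end to end is a ping with the do-not-fragment flag and a
payload of 8972 bytes (9000 minus 28 bytes of IP/ICMP headers; the hostname is
just a placeholder):

ping -M do -s 8972 -c 3 gluster-node2.example.local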

In the next few days, if I have time, I will try to compare oVirt 4.4 with
Gluster 7.x versus oVirt 4.4 with NFS to check performance.
I might even try Ceph with oVirt 4.4.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YNJ3KF73OVQKIEUMY4FAOEIZWEZYHS4I/


[ovirt-users] Re: Ovirt 4.3.10 Glusterfs SSD slow performance over 10GE

2020-07-07 Thread shadow emy
Ohh yes, it is important to know ahead of time. Not so nice if they drop drivers.
Fortunately, for now my PERC H710 (LSI MegaRAID SAS 2208) is still supported by
the megaraid_sas Linux module in RHEL 8.

The upgrade to oVirt 4.4 is really difficult. I had to take downtime for it to
work correctly.
After you deploy a restore from an old oVirt 4.3 backup, you can't switch the
cluster compatibility_version from 4.3 to 4.4 using the web interface; it won't
let you and you will get lots of errors.
I had to hack into the database and change the cluster compatibility, CPU type
and CPU flags for it to work correctly. On some VMs I also had to change the
cpu_name in the database.

The problem is that the 4.3 CPU profiles (they were changed in oVirt 4.4) are not
supported in oVirt 4.4. Because of this, all your 4.3 hosts will be in a
NonResponsive state on the 4.4 hosted-engine.


If you have Gluster like me, it is even more difficult. Hosts failed to activate
many times because the storage domains were down.

But I finally managed to upgrade it, though it was hard.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZBVNIA5GBFIHBNB56CRW6NHLY6TBICHR/


[ovirt-users] Re: Ovirt 4.3.10 Glusterfs SSD slow performance over 10GE

2020-06-28 Thread shadow emy
> Hello ,

Hello, and thank you for the reply. Below are the answers to your questions.
> 
> Let me ask some questions:
> 1. What is the scheduler for your PV ?


On the RAID controller device where the SSD disks are in RAID 0 (device sda) it
is set to "deadline". But on the LVM logical volume dm-7, which is the logical
block device for the "data" volume, it is set to none (I think this is OK).


[root@host1 ~]# ls -al /dev/mapper/gluster_vg_sda3-gluster_lv_data
lrwxrwxrwx. 1 root root 7 Jun 28 14:14 /dev/mapper/gluster_vg_sda3-gluster_lv_data -> ../dm-7
[root@host1 ~]# cat /sys/block/dm-7/queue/scheduler
none
[root@host1 ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq



> 2. Have you aligned your PV during the setup with 'pvcreate --dataalignment
> alignment_value device'?


I did not make any alignment other than the default. Below are the partitions on
/dev/sda.
Can I enable partition alignment now, and if yes, how?

sfdisk -d /dev/sda
# partition table of /dev/sda
unit: sectors

/dev/sda1 : start= 2048, size=   487424, Id=83, bootable
/dev/sda2 : start=   489472, size= 95731712, Id=8e
/dev/sda3 : start= 96221184, size=3808675840, Id=83
/dev/sda4 : start=0, size=0, Id= 0
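
As far as I know the data alignment is fixed when the PV is created, so it can't
be changed in place, but checking the current alignment is easy (just a sketch;
the device matches my layout above):

pvs -o pv_name,pe_start --units s /dev/sda3
# sda3 itself starts at sector 96221184, which is a multiple of 2048,
# i.e. the partition is already on a 1 MiB boundary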



> 3. What is your tuned profile ? Do you use rhgs-random-io from
> the ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/red...
> ?

My active tuned profile is virtual-host:

Current active profile: virtual-host

No, I don't use any of the rhgs-random-io profiles.
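
For reference, checking or switching the profile is just (a sketch; the rhgs-*
profiles only show up if the corresponding tuned profile packages are installed):

tuned-adm active                 # prints the current active profile
tuned-adm list                   # lists the profiles available on the host
tuned-adm profile virtual-host   # switches to / re-applies a profile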

> 4. What is the output of "xfs_info /path/to/your/gluster/brick" ?

xfs_info /gluster_bricks/data
meta-data=/dev/mapper/gluster_vg_sda3-gluster_lv_data isize=512    agcount=32, agsize=6553600 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=209715200, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=102400, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

> 5. Are you using Jumbo Frames ? Does your infra support them?
> Usually MTU of 9k is standard, but some switches and NICs support up to 16k.
> 

Unfortunately I cannot set the MTU to 9000 and enable Jumbo Frames on specific
ports of these Cisco SG350X switches. The switches don't support enabling Jumbo
Frames on a single port, only on all ports.
I have other devices connected to the switches on the remaining 48 ports that
run at 1 Gb/s.

> All the options for "optimize for virt" are located
> at /var/lib/glusterd/groups/virt on each gluster node.

I have already looked at that file previously, but not all the volume settings
that are set by "Optimize for Virt Store" are stored there.
For example, "Optimize for Virt Store" sets network.remote-dio to disable, while
in glusterd/groups/virt it is set to enable. Likewise, cluster.granular-entry-heal:
enable is not present there, but it is set by "Optimize for Virt Store".

> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> On Sunday, June 28, 2020, 22:13:09 GMT+3, jury cat 
>  wrote: 
> 
> 
> 
> 
> 
> Hello all,
> 
> I am using Ovirt 4.3.10 on Centos 7.8 with glusterfs 6.9 .
> My Gluster setup is of 3 hosts in replica 3 (2 hosts + 1 arbiter).
> All 3 hosts are Dell R720s with a PERC H710 Mini RAID controller (which has a
> maximum throughput of 6 Gb/s) and with 2 x 1TB Samsung SSDs in RAID 0. The
> volume is partitioned using LVM thin provisioning and formatted as XFS.
> The hosts have separate 10GE network cards for storage traffic.
> The Gluster network is connected to these 10GbE network cards and is mounted
> using FUSE GlusterFS (NFS is disabled). The migration network is also activated
> on the same storage network.
> 
>  
> The problem is that the 10GbE network is not used at its full potential by
> Gluster.
> If I do live migration of VMs I can see speeds of 7-9 Gbit/s.
> The same network tests using iperf3 reported 9.9 Gbit/s, which rules out the
> network setup as a bottleneck (I will not paste all the iperf3 tests here for now).
> I did not enable all the volume options from "Optimize for Virt Store",
> because of the bug that cannot set the volume option cluster.granular-entry-heal
> to enable (this was fixed in vdsm-4.40, but that only works on CentOS 8 with
> ovirt 4.4).
> I would be happy to know what all these "Optimize for Virt Store" options are,
> so I can set them manually.
> 
> 
> The speed on the disk inside the host using dd is between 700 MB/s and 1 GB/s.
> 
> 
> [root@host1 ~]# dd if=/dev/zero of=test bs=100M count=40 count=80 status=progress
> 8074035200 bytes (8.1 GB) copied, 11.059372 s, 730 MB/s
> 80+0 records in
> 80+0 records out
> 8388608000 bytes (8.4 GB) copied, 

[ovirt-users] Update OVF disks fails on each gluster 7.6 volumes using ovirt 4.4.1.1

2020-07-29 Thread shadow emy


Failed to update OVF disks 1798e945-5be9-466e-b52d-f7f0a3bb2043, OVF data isn't 
updated on those OVF stores (Data Center Default, Storage Domain 
hosted_storage).

I did not see any errors on the SPM host (host3) in /var/log/vdsm/vdsm.log.


In /var/log/vdsm/supervdsm.log I see:



MainProcess|jsonrpc/6::DEBUG::2020-07-29 
19:05:05,845::supervdsm_server::93::SuperVdsm.ServerCallback::(wrapper) call 
webhookAdd with 
('http://ovirt-engine.domain.local:80/ovirt-engine/services/glusterevents', 
None) {}
MainProcess|jsonrpc/6::DEBUG::2020-07-29 
19:05:05,845::commands::153::common.commands::(start) /usr/bin/taskset 
--cpu-list 0-39 /sbin/gluster-eventsapi webhook-add 
http://ovirt-engine.domain.local:80/ovirt-engine/services/glusterevents (cwd 
None)
MainProcess|jsonrpc/6::DEBUG::2020-07-29 
19:05:06,599::commands::98::common.commands::(run) FAILED: <err> = b'Webhook 
already exists\n'; <rc> = 5
MainProcess|jsonrpc/6::ERROR::2020-07-29 
19:05:06,600::supervdsm_server::97::SuperVdsm.ServerCallback::(wrapper) Error 
in webhookAdd
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/gluster/events.py", line 42, in 
webhookAdd
commands.run(command)
  File "/usr/lib/python3.6/site-packages/vdsm/common/commands.py", line 101, in 
run
raise cmdutils.Error(args, p.returncode, out, err)
vdsm.common.cmdutils.Error: Command ['/sbin/gluster-eventsapi', 'webhook-add', 
'http://ovirt-engine.domain.local:80/ovirt-engine/services/glusterevents'] 
failed with rc=5 out=b'' err=b'Webhook already exists\n'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 95, in 
wrapper
res = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/gluster/events.py", line 44, in 
webhookAdd
raise ge.GlusterWebhookAddException(rc=e.rc, err=e.err)
vdsm.gluster.exception.GlusterWebhookAddException: Failed to add webhook: rc=5 
out=() err=b'Webhook already exists\n'
MainProcess|jsonrpc/6::DEBUG::2020-07-29 
19:05:10,818::supervdsm_server::93::SuperVdsm.ServerCallback::(wrapper) call 
tasksList with ([],) {}
MainProcess|jsonrpc/6::DEBUG::2020-07-29 
19:05:10,819::commands::153::common.commands::(start) /usr/bin/taskset 
--cpu-list 0-39 /usr/sbin/gluster --mode=script volume status all tasks --xml 
(cwd None)



SPM message logs :

Jul 29 18:58:22 host3 journal[460535]: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Failed 
extracting VM OVF from the OVF_STORE volume, falling back to initial vm.conf
Jul 29 18:58:31 host3 journal[460535]: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Failed 
extracting VM OVF from the OVF_STORE volume, falling back to initial vm.conf
Jul 29 18:58:42 host3 journal[460535]: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Failed 
extracting VM OVF from the OVF_STORE volume, falling back to initial vm.conf
Jul 29 18:58:52 host3 journal[460535]: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Failed 
extracting VM OVF from the OVF_STORE volume, falling back to initial vm.conf
Jul 29 18:59:03 host3 journal[460535]: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Failed 
extracting VM OVF from the OVF_STORE volume, falling back to initial vm.conf



Hosted-Engine /var/log/ovirt-engine/engine.log

2020-07-29 18:52:40,901+03 WARN  
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector] (default task-153) 
[1ec2d268-8d00-42ad-9660-0a05362a878b] The message key 
'UpdateOvfStoreForStorageDomain' is missing from 'bundles/ExecutionMessages'
2020-07-29 18:52:40,933+03 INFO  
[org.ovirt.engine.core.bll.storage.domain.UpdateOvfStoreForStorageDomainCommand]
 (default task-153) [1ec2d268-8d00-42ad-9660-0a05362a878b] Lock Acquired to 
object 
'EngineLock:{exclusiveLocks='[affd38b2-457d-4c9f-9802-1f5fadd7cd34=STORAGE]', 
sharedLocks=''}'
2020-07-29 18:52:40,989+03 INFO  
[org.ovirt.engine.core.bll.storage.domain.UpdateOvfStoreForStorageDomainCommand]
 (default task-153) [1ec2d268-8d00-42ad-9660-0a05362a878b] Running command: 
UpdateOvfStoreForStorageDomainCommand internal: false. Entities affected :  ID: 
affd38b2-457d-4c9f-9802-1f5fadd7cd34 Type: StorageAction group 
MANIPULATE_STORAGE_DOMAIN with role type ADMIN
2020-07-29 18:52:41,002+03 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (default task-153) [71d7de10] Before acquiring and wait lock 
'EngineLock:{exclusiveLocks='[15ea58fc-9435-11e9-b093-00163e11a571=OVF_UPDATE]',
 sharedLocks=''}'
2020-07-29 18:52:41,003+03 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (default task-153) [71d7de10] Lock-wait acquired to object 
'EngineLock:{exclusiveLocks='[15ea58fc-9435-11e9-b093-00163e11a571=OVF_UPDATE]',
 sharedLocks=''}'

[ovirt-users] Re: 10GB disk created on glusterfs storage has only 4096B in vm

2020-07-29 Thread shadow emy
I had a similar problem: after I migrated a disk from one Storage Domain to a
second Storage Domain, the disk size was different and I could not start the VM
anymore.
The oVirt error was: "Unable to get volume size for domain".
Somehow Gluster has some errors when I migrate disks, and the disks end up with
different sizes.
What I did was disable performance.stat-prefetch for the gluster volume (I read
it has some bugs when enabled in gluster 7.x).
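
For reference, that is a single volume-set call (the volume name is a placeholder,
and it can be switched back on the same way):

gluster volume set data performance.stat-prefetch off
gluster volume get data performance.stat-prefetch    # verify the new value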
Also try to look at the Disk Snapshots and see if you have a snapshot that does
not exist.
In my case I had this snapshot problem after migration, and I had to delete the
snapshot from the database and set the main image disk ID as the default in the VM.

Maybe in your case it is different.
 
Emy
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WO7S2ZFTSRPRY22NX6TCGEYE3BL3YKZX/


[ovirt-users] Re: Engine update error from 4.4.2 to 4.4.3

2020-11-11 Thread shadow emy
Hello

I updated only the engine setup packages first, using the command below, and
could then proceed with the update.

dnf update ovirt-engine-setup ovirt-engine-setup-plugin-websocket-proxy 
ovirt-engine-dwh-setup ovirt-engine-dwh-grafana-integration-setup

engine-setup


"  yum update ovirt\*setup\* "   --  did not work and had the same error as you 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OIURDUJ67AJOWFZH2JXZCGA2IME3NNKD/


[ovirt-users] Re: Upgrade Problem oVirt engine 4.4.1.8-1.el8 -> 4.4.3

2020-11-11 Thread shadow emy
The way I resolved this was to update only specific packages first.
So I skip the "yum update ovirt\*setup\*" step and use the command below, which
updates the engine-setup packages.

dnf update ovirt-engine-setup ovirt-engine-setup-plugin-websocket-proxy 
ovirt-engine-dwh-setup ovirt-engine-dwh-grafana-integration-setup


After the engine-setup packages are updated, engine-setup runs fine and I can
proceed with the update.
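
So, as a sketch, the whole sequence that worked for me is:

# update only the setup packages, not the whole engine stack
dnf update ovirt-engine-setup ovirt-engine-setup-plugin-websocket-proxy \
    ovirt-engine-dwh-setup ovirt-engine-dwh-grafana-integration-setup

# run the actual upgrade
engine-setup

# only afterwards update the remaining packages on the engine VM
dnf update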


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TBI4D4LNNZYX52N5GNZUIZNX2CPVGYV7/


[ovirt-users] Re: Cluster with Hosted Engine update BIOS Setting Ovirt 4.4.3

2020-11-11 Thread shadow emy
I had the same problem after the update.

What I did was set the Chipset/Firmware Type to "I440FX Chipset with BIOS",
and it works.
The previous setting was "Q35 Chipset with BIOS", which did not work with 4.4.3.

I also cannot change the Cluster Compatibility to 4.5, and if I try to open a
SPICE/VNC console to any VM using Remote Viewer via the Console tab in the GUI,
it does not open any connection.
Maybe I need to change the Cluster Compatibility for this to work, but I don't
know how that works.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RQ7MYY466ULCM4ULPRVOCFY4LMTMFYNL/


[ovirt-users] Re: Cluster compatibility version 4.5 on oVirt 4.4

2020-11-11 Thread shadow emy

Just to confirm, I face a similar problem.
Yes, I saw that warning too: "Upgrade Cluster Compatibility Level" to upgrade
the cluster to version 4.5.
Though when I try to do that, there are a lot of errors.
 
In GUI :

Error while executing action: Cannot change Cluster Compatibility Version to 
higher version when there are active Hosts with lower version.
-Please move Host host1, host2, host3  with lower version to maintenance first.

In engine.log :

WARN  [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-163) 
[2c681b74-8666-4f2f-b2e0-6b20e98f417e] Validation of action 'UpdateCluster' 
failed for user admin@internal-authz. Reasons: 
VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,$host host1, host2, 
host3,CLUSTER_CANNOT_UPDATE_COMPATIBILITY_VERSION_WITH_LOWER_HOSTS

I did not find any documentation for 4.5 cluster compatibility, so I would also
like to understand why that option is present there.
Will it be used when oVirt 4.5.x is released?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3UGQ6HPT2HTEGEP6GZUZ737SXR4K7TTJ/


[ovirt-users] Re: Engine update error from 4.4.2 to 4.4.3

2020-11-11 Thread shadow emy
Yes, after that I ran engine-setup, and then I was able to run "yum update" on
the hosted-engine VM without errors.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EM5FQ7CSRVRUDTMM5GGIOJ6PQDH6VI2J/