[ovirt-users] Problem with oVirt 4.4

2020-06-01 Thread minnie . du
Hello,

We have run into a problem while testing oVirt 4.4.

Our VM is on NFS storage. While testing the snapshot function of oVirt 4.4, we 
created snapshot 1 and then snapshot 2, but after clicking the delete button for 
snapshot 1, snapshot 1 failed to be deleted and the state of the corresponding 
disk became illegal. Removing a snapshot in this state requires a lot of risky 
work in the background, which leaves us unable to free up the snapshot space. 
Long-term backups will cause the target VM to accumulate a large number of 
unremovable snapshots, taking up a large amount of production storage. 
So we need your help.
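
As a first diagnostic step (not a fix), the snapshot's volume chain can be 
inspected directly on the NFS mount; a hedged sketch, with the mount path and 
UUIDs as placeholders taken from the disk's image/volume IDs:

    # On a host that has the NFS storage domain mounted
    cd /rhev/data-center/mnt/nfs.example.com:_export_data/SD_UUID/images/DISK_UUID
    qemu-img info --backing-chain VOLUME_UUID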


[ovirt-users] Problems creating bricks for new Gluster storage 4.4

2020-06-01 Thread Jaret Garcia via Users

Hi guys, I'm trying to deploy a new oVirt environment based on version 4.4, so I 
set up the following: 

1 engine server (standalone), version 4.4.0.3 

3 host servers (32 GB RAM, 2 CPUs, 8-core Xeon 5160, 500 GB OS drive, one 8 TB 
storage drive, and one 256 GB SSD; I'm trying to create a brick with an SSD 
cache, with no success) 

Detail of versions running on servers 
[ 
OS Version: 
RHEL - 8 - 1.1911.0.9.el8 
OS Description: 
oVirt Node 4.4.0 
Kernel Version: 
4.18.0 - 147.8.1.el8_1.x86_64 
KVM Version: 
4.1.0 - 23.el8.1 
LIBVIRT Version: 
libvirt-5.6.0-10.el8 
VDSM Version: 
vdsm-4.40.16-1.el8 
SPICE Version: 
0.14.2 - 1.el8 
GlusterFS Version: 
glusterfs-7.5-1.el8 
CEPH Version: 
librbd1-12.2.7-9.el8 
Open vSwitch Version: 
openvswitch-2.11.1-5.el8 
Nmstate Version: 
nmstate-0.2.10-1.el8 
] 

The engine server is working fine and all 3 host servers are already part of the 
default cluster as hypervisors. But when I try to create a brick on any of the 
servers, using the 8 TB HDD with the SSD as cache, the process fails; in the GUI 
it just says "failed to create brick on host". 

However, in the engine log as well as in the brick-setup log I see: "err" : " Physical 
volume \"/dev/sdc\" still in use\n", "failed" : true (sdc in this case is the 
8 TB HDD). 
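
That error usually means leftover LVM metadata from an earlier attempt or 
install still claims the disk; a hedged cleanup sketch (the last three commands 
are destructive, so run them only after double-checking that /dev/sdc holds 
nothing you need):

    # See what still claims /dev/sdc
    lsblk /dev/sdc
    pvs -o pv_name,vg_name /dev/sdc

    # If an old VG still references it, remove it first (VG_NAME is a placeholder),
    # then clear any leftover LVM/filesystem signatures
    vgremove VG_NAME
    pvremove /dev/sdc
    wipefs -a /dev/sdc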

Both log files are attached. 

Thanks in advance, 


Jaret Garcia 

PACKET 
Next Generation IT & Telecom 
Tel. +52 (55) 59898707 
Of. +52 (55) 47441200 Ext. 1210 
email: jaret.gar...@packet.mx 

www.packet.mx 
2020-06-01 16:44:23,209-05 INFO  [org.ovirt.engine.core.bll.gluster.CreateBrickCommand] (default task-65) [aa50e9bf-ab1c-42da-9e8c-760262a33b70] Lock Acquired to object 'EngineLock:{exclusiveLocks='[9b58ebc6-fe0d-4b9c-ad0a-a8f3999ada03=HOST_STORAGE_DEVICES]', sharedLocks=''}'
2020-06-01 16:44:23,225-05 INFO  [org.ovirt.engine.core.bll.gluster.CreateBrickCommand] (default task-65) [aa50e9bf-ab1c-42da-9e8c-760262a33b70] Running command: CreateBrickCommand internal: false. Entities affected :  ID: a45f92bf-a0ea-4398-aef8-2e0d73096a9d Type: VDSAction group MANIPULATE_HOST with role type ADMIN
2020-06-01 16:44:23,625-05 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (default task-68) [b4dbd2c2-a926-448f-bacc-dcd1ddb3070b] START, GlusterServersListVDSCommand(HostName = hypervisor1.ovirt2.packet.mx, VdsIdVDSCommandParametersBase:{hostId='a45f92bf-a0ea-4398-aef8-2e0d73096a9d'}), log id: 51cde49
2020-06-01 16:44:23,913-05 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (default task-68) [b4dbd2c2-a926-448f-bacc-dcd1ddb3070b] FINISH, GlusterServersListVDSCommand, return: [172.25.209.11/24:CONNECTED, hypervisor3.ovirt2.packet.mx:CONNECTED, hypervisor2.ovirt2.packet.mx:CONNECTED], log id: 51cde49
2020-06-01 16:44:25,599-05 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-70) [] START, GlusterServersListVDSCommand(HostName = hypervisor1.ovirt2.packet.mx, VdsIdVDSCommandParametersBase:{hostId='a45f92bf-a0ea-4398-aef8-2e0d73096a9d'}), log id: 20779981
2020-06-01 16:44:25,886-05 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-70) [] FINISH, GlusterServersListVDSCommand, return: [172.25.209.11/24:CONNECTED, hypervisor3.ovirt2.packet.mx:CONNECTED, hypervisor2.ovirt2.packet.mx:CONNECTED], log id: 20779981
2020-06-01 16:44:25,890-05 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-70) [] START, GlusterVolumesListVDSCommand(HostName = hypervisor1.ovirt2.packet.mx, GlusterVolumesListVDSParameters:{hostId='a45f92bf-a0ea-4398-aef8-2e0d73096a9d'}), log id: 3c53c735
2020-06-01 16:44:26,042-05 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-70) [] FINISH, GlusterVolumesListVDSCommand, return: {}, log id: 3c53c735
2020-06-01 16:44:28,820-05 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (default task-68) [6f22b600-c3c5-4d29-a973-a3fc77eaba79] START, GlusterServersListVDSCommand(HostName = hypervisor1.ovirt2.packet.mx, VdsIdVDSCommandParametersBase:{hostId='a45f92bf-a0ea-4398-aef8-2e0d73096a9d'}), log id: 398d9c3d
2020-06-01 16:44:29,102-05 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (default task-68) [6f22b600-c3c5-4d29-a973-a3fc77eaba79] FINISH, GlusterServersListVDSCommand, return: [172.25.209.11/24:CONNECTED, hypervisor3.ovirt2.packet.mx:CONNECTED, hypervisor2.ovirt2.packet.mx:CONNECTED], log id: 398d9c3d
2020-06-01 16:44:32,283-05 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-65) [aa50e9bf-ab1c-42da-9e8c-760262a33b70] 

[ovirt-users] Re: Mixing OS versions

2020-06-01 Thread Stack Korora
On 2020-06-01 16:31, Sandro Bonazzola wrote:
>
>
> On Mon, Jun 1, 2020 at 17:52 Stack Korora <stackkor...@disroot.org> wrote:
>
> Greetings,
> We've been using Scientific Linux 7 quite successfully with oVirt for
> years now. However, since there will not be an SL8 we are transitioning
> new servers to CentOS 8. I would like to add a new oVirt hypervisor
> node.
>
> How bad of an idea is it to have an 8 system when the rest are 7, even
> though the version of oVirt will be the same?
>
>
> Please note the oVirt version can't be the same on el7 and el8 because
> hosts on el8 are supported only by oVirt 4.4 and oVirt 4.4 is not
> available on el7.
> You can upgrade the engine to 4.4 and then add el8 hosts while still
> keeping el7 hosts until you finish the upgrade.


Thank you for the clarification! I appreciate it.




[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-06-01 Thread Benny Zlotnik
Sorry for the late reply, but you may have hit this bug [1]; I forgot about it.
When you live migrate a VM in post-copy mode, vdsm stops monitoring the
VM's jobs, which is what this bug describes.
The root cause is an issue in libvirt, so it depends on which libvirt
version you have.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1774230
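
Since vdsm stops monitoring the job, one hedged way to check whether libvirt
itself still sees it is to query the host directly (VM name and disk target
are placeholders):

    # Read-only queries on the host running the VM
    virsh -r domblklist VM_NAME            # map disk targets (e.g. vda) to image paths
    virsh -r blockjob VM_NAME vda --info   # show any active block job and its progress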

On Fri, May 29, 2020 at 3:54 PM David Sekne  wrote:
>
> Hello,
>
> I tried the live migration as well and it didn't help (it failed).
>
> The VM disks were in an illegal state, so I ended up restoring the VM from 
> backup (it was the least complex solution for my case).
>
> Thank you both for the help.
>
> Regards,
>
> On Thu, May 28, 2020 at 5:01 PM Strahil Nikolov  wrote:
>>
>> I used to have a similar issue, and when I live migrated (from one host to 
>> another) it automatically completed.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On 27 May 2020 at 17:39:36 GMT+03:00, Benny Zlotnik 
>> wrote:
>> >Sorry, by overloaded I meant in terms of I/O, because this is an
>> >active layer merge, the active layer
>> >(aabf3788-8e47-4f8b-84ad-a7eb311659fa) is merged into the base image
>> >(a78c7505-a949-43f3-b3d0-9d17bdb41af5), before the VM switches to use
>> >it as the active layer. So if there is constantly additional data
>> >written to the current active layer, vdsm may have trouble finishing
>> >the synchronization
>> >
>> >
>> >On Wed, May 27, 2020 at 4:55 PM David Sekne 
>> >wrote:
>> >>
>> >> Hello,
>> >>
>> >> Yes, no problem. XML is attached (I omitted the hostname and IP).
>> >>
>> >> The server is quite big (8 CPUs / 32 GB RAM / 1 TB disk) yet not
>> >> overloaded. We have multiple servers with the same specs with no
>> >> issues.
>> >>
>> >> Regards,
>> >>
>> >> On Wed, May 27, 2020 at 2:28 PM Benny Zlotnik 
>> >wrote:
>> >>>
>> >>> Can you share the VM's XML?
>> >>> It can be obtained with `virsh -r dumpxml <vm-name>`.
>> >>> Is the VM overloaded? I suspect it has trouble converging
>> >>>
>> >>> taskcleaner only cleans up the database; I don't think it will help
>> >>> here.
>> >>>


[ovirt-users] Re: Mixing OS versions

2020-06-01 Thread Sandro Bonazzola
On Mon, Jun 1, 2020 at 17:52 Stack Korora <stackkor...@disroot.org> wrote:

> Greetings,
> We've been using Scientific Linux 7 quite successfully with oVirt for
> years now. However, since there will not be an SL8 we are transitioning
> new servers to CentOS 8. I would like to add a new oVirt hypervisor node.
>
> How bad of an idea is it to have an 8 system when the rest are 7, even
> though the version of oVirt will be the same?
>

Please note the oVirt version can't be the same on el7 and el8 because
hosts on el8 are supported only by oVirt 4.4 and oVirt 4.4 is not available
on el7.
You can upgrade the engine to 4.4 and then add el8 hosts while still
keeping el7 hosts until you finish the upgrade.
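
For reference, the documented flow for that is roughly a full backup on the old
machine and a restore on a freshly installed el8 engine; a sketch, with file
names as placeholders:

    # On the old el7 engine (4.3): take a full backup
    engine-backup --mode=backup --scope=all --file=engine.bck --log=backup.log

    # On the new el8 machine, after installing the ovirt-engine 4.4 packages
    engine-backup --mode=restore --file=engine.bck --log=restore.log \
        --provision-all-databases --restore-permissions
    engine-setup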

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*


[ovirt-users] Mixing OS versions

2020-06-01 Thread Stack Korora
Greetings,
We've been using Scientific Linux 7 quite successfully with oVirt for
years now. However, since there will not be an SL8 we are transitioning
new servers to CentOS 8. I would like to add a new oVirt hypervisor node.

How bad of an idea is it to have an 8 system when the rest are 7, even
though the version of oVirt will be the same?

Thanks!


[ovirt-users] Re: Mount options

2020-06-01 Thread Eyal Shenitzky
Hi Tommaso,

In order to update any attribute of a storage domain that relates to its
connection to oVirt, you must put the storage domain into maintenance.
Note that you can move the disks to a different storage domain while the VMs
are running, and so avoid powering off the VMs.
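
If you do take the maintenance route, the domain can also be deactivated
through the REST API; a hedged sketch, with the engine URL, credentials,
DC_ID and SD_ID as placeholders:

    # Put the storage domain into maintenance before editing its connection details
    curl -ks -u 'admin@internal:PASSWORD' -X POST \
        -H 'Content-Type: application/xml' -d '<action/>' \
        'https://engine.example.com/ovirt-engine/api/datacenters/DC_ID/storagedomains/SD_ID/deactivate'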


On Mon, 1 Jun 2020 at 15:37, Tommaso - Shellrent via Users 
wrote:

> Hi to all.
>
> Is there a way to change the mount options of a running storage
> domain with Gluster, without putting everything into maintenance and
> shutting down the VMs on it?
>
> Regards,
> --
> --
> [image: Shellrent - Il primo hosting italiano Security First]
> *Tommaso De Marchi*
> *COO - Chief Operating Officer*
> Shellrent Srl
> Via dell'Edilizia, 19 - 36100 Vicenza
> Tel. 0444321155 <+390444321155> | Fax 04441492177


-- 
Regards,
Eyal Shenitzky


[ovirt-users] Re: ovirt imageio problem...

2020-06-01 Thread Vojtech Juranek
> I tried the same, performing the upload and ignoring the error; the ISO was in
> a paused state,

Have you imported the certificates? If you click the "Test Connection" button in 
the upload dialog, does it work?

If so, please file a bug and attach the vdsm logs and the imageio logs (/var/log/
ovirt-imageio/daemon.log), both from the engine and from the host.
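
If the browser does not yet trust the engine CA, "Test Connection" will fail;
the CA certificate can be downloaded from the engine and imported into the
browser. A sketch, with the engine FQDN as a placeholder:

    # Fetch the engine CA certificate for import into the browser's trust store
    curl -k -o ca.pem \
        'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'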

> I clicked to delete it. The result? The ISO loaded! How is it
> possible?

This looks like a bug; if you click delete, it should delete the image.



[ovirt-users] Re: ovirt imageio problem...

2020-06-01 Thread Vojtech Juranek
On Saturday, 30 May 2020 at 19:59:40 CEST, matteo fedeli wrote:
> Hi! I installed CentOS 8 and the oVirt packages following these steps:
> 
> systemctl enable --now cockpit.socket
> yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
> yum module -y enable javapackages-tools
> yum module -y enable pki-deps
> yum module -y enable postgresql:12
> yum -y install glibc-locale-source glibc-langpack-en
> localedef -v -c -i en_US -f UTF-8 en_US.UTF-8
> yum update
> yum install ovirt-engine
> engine-setup (by keeping all default)
> 
> Is it possible that the ovirt-imageio-proxy service is not installed? 

Yes: ovirt-imageio-proxy was replaced by the imageio daemon and is not installed 
any more. The daemon now takes over all the proxy responsibilities.
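
A hedged way to verify this on a 4.4 setup (the ports, 54323 on the engine and
54322 on hosts, are the defaults as far as I recall):

    # Check the unified imageio daemon on the engine (and similarly on each host)
    systemctl status ovirt-imageio
    ss -tlnp | grep -E ':5432[23]'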



[ovirt-users] Mount options

2020-06-01 Thread Tommaso - Shellrent via Users

Hi to all.

Is there a way to change the mount options of a running storage 
domain with Gluster, without putting everything into maintenance and 
shutting down the VMs on it?


Regards,

--
--  
Shellrent - Il primo hosting italiano Security First

*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177



[ovirt-users] Re: Issues deploying 4.4 with HE on new EPYC hosts

2020-06-01 Thread Michal Skrivanek
I believe your problem is similar to Mark's: you need qemu-kvm's virtualized 
tsx-ctrl, which was added to qemu 4.2 upstream, and (I think) also the 
corresponding kernel-4.18.0-147.4.1.el8_1.
Can you confirm your versions?
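
For example, on the host:

    rpm -q qemu-kvm kernel
    uname -r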

All of these issues should be addressed in 8.2; we're just still waiting for 
that :/


> On 28 May 2020, at 23:52, Gianluca Cecchi  wrote:
> 
> On Thu, May 28, 2020 at 3:09 PM Gianluca Cecchi wrote:
> 
> [snip] 
> 
> 
> In the meantime, for the cluster type, I was able to change it to "Intel 
> Cascadelake Server Family" from the web admin GUI, and now I have to try these 
> steps and see if the engine starts automatically without manual operations:
> 
> 1) set global maintenance
> 2) shutdown engine
> 3) exit maintenance
> 4) see if the engine vm starts without the cpu flag
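
For reference, those four steps map to the hosted-engine CLI roughly as follows
(a sketch, run on one of the hosted-engine hosts):

    hosted-engine --set-maintenance --mode=global   # 1) set global maintenance
    hosted-engine --vm-shutdown                     # 2) shut down the engine VM
    hosted-engine --set-maintenance --mode=none     # 3) exit maintenance
    hosted-engine --vm-status                       # 4) watch the engine VM come back up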
> 
> 
> I confirm that point 4) was successful and the engine VM was able to autostart 
> after changing the cluster type.
> I'm also able to connect to its console from the web admin GUI.
> 
> The command line generated now is:
> 
> qemu 29450 1 43 23:38 ?00:03:09 /usr/libexec/qemu-kvm -name 
> guest=HostedEngine,debug-threads=on -S -object 
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-10-HostedEngine/master-key.aes
>  -machine pc-q35-rhel8.1.0,accel=kvm,usb=off,dump-guest-core=off -cpu 
> Cascadelake-Server,hle=off,rtm=off,arch-capabilities=on -m 
> size=16777216k,slots=16,maxmem=67108864k -overcommit mem-lock=off -smp 
> 2,maxcpus=32,sockets=16,cores=2,threads=1 -object iothread,id=iothread1 -numa 
> node,nodeid=0,cpus=0-31,mem=16384 -uuid b572d924-b278-41c7-a9da-52c4f590aac1 
> -smbios 
> type=1,manufacturer=oVirt,product=RHEL,version=8-1.1911.0.9.el8,serial=d584e962-5461-4fa5-affa-db413e17590c,uuid=b572d924-b278-41c7-a9da-52c4f590aac1,family=oVirt
>  -no-user-config -nodefaults -device sga -chardev 
> socket,id=charmonitor,fd=40,server,nowait -mon 
> chardev=charmonitor,id=monitor,mode=control -rtc 
> base=2020-05-28T21:38:21,driftfix=slew -global kvm-pit.lost_tick_policy=delay 
> -no-hpet -no-reboot -global ICH9-LPC.disable_s3=1 -global 
> ICH9-LPC.disable_s4=1 -boot strict=on -device 
> pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2
>  -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 
> -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 
> -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 
> -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 
> -device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 
> -device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 
> -device pcie-root-port,port=0x17,chassis=8,id=pci.8,bus=pcie.0,addr=0x2.0x7 
> -device 
> pcie-root-port,port=0x18,chassis=9,id=pci.9,bus=pcie.0,multifunction=on,addr=0x3
>  -device 
> pcie-root-port,port=0x19,chassis=10,id=pci.10,bus=pcie.0,addr=0x3.0x1 -device 
> pcie-root-port,port=0x1a,chassis=11,id=pci.11,bus=pcie.0,addr=0x3.0x2 -device 
> pcie-root-port,port=0x1b,chassis=12,id=pci.12,bus=pcie.0,addr=0x3.0x3 -device 
> pcie-root-port,port=0x1c,chassis=13,id=pci.13,bus=pcie.0,addr=0x3.0x4 -device 
> pcie-root-port,port=0x1d,chassis=14,id=pci.14,bus=pcie.0,addr=0x3.0x5 -device 
> pcie-root-port,port=0x1e,chassis=15,id=pci.15,bus=pcie.0,addr=0x3.0x6 -device 
> pcie-root-port,port=0x1f,chassis=16,id=pci.16,bus=pcie.0,addr=0x3.0x7 -device 
> pcie-root-port,port=0x20,chassis=17,id=pci.17,bus=pcie.0,addr=0x4 -device 
> pcie-pci-bridge,id=pci.18,bus=pci.1,addr=0x0 -device 
> qemu-xhci,p2=8,p3=8,id=ua-b630a65c-8156-4542-b8e8-98b4d2c48f67,bus=pci.4,addr=0x0
>  -device 
> virtio-scsi-pci,iothread=iothread1,id=ua-b7696ce2-fd8c-4856-8c38-197fc520271b,bus=pci.5,addr=0x0
>  -device 
> virtio-serial-pci,id=ua-608f9599-30b2-4ee6-a0d3-d5fb588583ad,max_ports=16,bus=pci.3,addr=0x0
>  -drive if=none,id=drive-ua-fa671f6c-dc42-4c59-a66d-ccfa3d5d422b,readonly=on 
> -device 
> ide-cd,bus=ide.2,drive=drive-ua-fa671f6c-dc42-4c59-a66d-ccfa3d5d422b,id=ua-fa671f6c-dc42-4c59-a66d-ccfa3d5d422b,werror=report,rerror=report
>  -drive 
> file=/var/run/vdsm/storage/3df8f6d4-d572-4d2b-9ab2-8abc456a396f/df02bff9-2c4b-4e14-a0a3-591a84ccaed9/bf435645-2999-4fb2-8d0e-5becab5cf389,format=raw,if=none,id=drive-ua-df02bff9-2c4b-4e14-a0a3-591a84ccaed9,cache=none,aio=threads
>  -device 
> virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.6,addr=0x0,drive=drive-ua-df02bff9-2c4b-4e14-a0a3-591a84ccaed9,id=ua-df02bff9-2c4b-4e14-a0a3-591a84ccaed9,bootindex=1,write-cache=on,serial=df02bff9-2c4b-4e14-a0a3-591a84ccaed9,werror=stop,rerror=stop
>  -netdev 
> tap,fds=43:44,id=hostua-b29ca99f-a53e-4de7-8655-b65ef4ba5dc4,vhost=on,vhostfds=45:46
>  -device 
> virtio-net-pci,mq=on,vectors=6,host_mtu=1500,netdev=hostua-b29ca99f-a53e-4de7-8655-b65ef4ba5dc4,id=ua-b29ca99f-a53e-4de7-8655-b65ef4ba5dc4,mac=00:16:3e:0a:96:80,bus=pci.2,addr=0x0
>  -chardev 

[ovirt-users] Re: Issues deploying 4.4 with HE on new EPYC hosts

2020-06-01 Thread Michal Skrivanek


> On 28 May 2020, at 16:26, Mark R  wrote:
> 
> I should have also mentioned that I found an existing report for the issue 
> I'm having on RedHat's Bugzilla:  
> https://bugzilla.redhat.com/show_bug.cgi?id=1783180

And that is the bug behind the problem.
In el8 this has been fixed by 
https://bugzilla.redhat.com/show_bug.cgi?id=1797092 in
kernel-4.18.0-147.8.el8. That is only in 8.2 (or probably CentOS Stream, but we 
don't have a CI or build system set up for it).
It's probably easier to just update only the kernel from CentOS Stream, and it 
may work well enough.
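
If you go that route, a rough sketch (package file names are hypothetical,
matching the fixed kernel named above; fetch them from a CentOS Stream mirror
first):

    # Install just the newer kernel packages and reboot into them
    dnf install ./kernel-4.18.0-147.8.el8.x86_64.rpm \
                ./kernel-core-4.18.0-147.8.el8.x86_64.rpm \
                ./kernel-modules-4.18.0-147.8.el8.x86_64.rpm
    systemctl reboot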

We still use virt-ssbd in the Secure type because it's supposed to give you a 
mitigation in any case. You can always choose the insecure EPYC variant, which 
doesn't enable any of that, but you have to choose it explicitly.

Thanks,
michal

> 
> Mark


[ovirt-users] Re: AutoStart VMs (was Re: Re: oVirt 4.4.0 Release is now generally available)

2020-06-01 Thread Yedidyah Bar David
On Thu, May 28, 2020 at 5:01 AM Derek Atkins  wrote:
>
> Hi,
>
> On Wed, May 27, 2020 5:38 pm, Gianluca Cecchi wrote:
> [snip]
> > But you hated Python, didn't you? ;-)
>
> I do.  Can't stand it.  Doesn't mean I can't read it and/or write it, but
> I have to hold my nose doing it.  Syntactic white space?  Eww.  But Python
> is already installed and used and, apparently, supported..  And when I
> looked at the examples I found that 90% of what I needed to do was already
> implemented, so it turned out to be much easier than expected.

Actually there are SDKs for other languages:

https://gerrit.ovirt.org/#/admin/projects/?filter=sdk

https://github.com/oVirt?q=sdk

JS is empty, but the others are more-or-less alive. Python is indeed
the most "invested", at least in terms of the number of example scripts,
but IIUC all of them are generated, so they should be complete. I didn't
try to use any of them myself, though, other than Python.
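
For anyone avoiding both Python and Ansible: the SDKs all wrap the same REST
API, so a start-and-wait flow can be scripted with plain curl as well. A hedged
sketch; the engine URL, credentials, VM name and VM_ID are placeholders:

    # Find the VM by name and note its <vm id="..."> in the XML response
    curl -ks -u 'admin@internal:PASSWORD' \
        'https://engine.example.com/ovirt-engine/api/vms?search=name%3Dmyvm'

    # Start it by POSTing an empty <action/> to its start endpoint
    curl -ks -u 'admin@internal:PASSWORD' -X POST \
        -H 'Content-Type: application/xml' -d '<action/>' \
        'https://engine.example.com/ovirt-engine/api/vms/VM_ID/start'

    # Poll until the status element reports "up"
    curl -ks -u 'admin@internal:PASSWORD' \
        'https://engine.example.com/ovirt-engine/api/vms/VM_ID' \
        | grep -o '<status>[a-z_]*</status>'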

>
> > I downloaded your files, even if I'm far from knowing python
>
> It's pretty much a direct translation of my bash script around
> ovirt-shell.  It does have one feature that the old code didn't, which is
> the ability to wait for oVirt to declare that a VM is actually "up".
> > try the ansible playbook that gives you more flexibility in my opinion
>
> I've never even installed ansible, let alone tried to use it.  I don't
> need flexibility, I need the job to get done.  But I'll take a look when I
> get the chance.  Thanks!
>
> > Gianluca
>
> -derek
>
> PS: you (meaning whomever is "in charge") are welcome to add my script(s) to
> the examples repo if you feel other people would benefit from seeing it
> there.

You are most welcome to push it yourself:

https://www.ovirt.org/develop/dev-process/working-with-gerrit.html

Thanks!

Best regards,
-- 
Didi