[ovirt-users] Re: oVirt 4.4.0 Beta release refresh is now available for testing

2020-04-20 Thread Joop
On 17-4-2020 18:29, Sandro Bonazzola wrote:
>
>
>   oVirt 4.4.0 Beta release refresh is now available for testing
>
>
> The oVirt Project is excited to announce the availability of the beta
> release of oVirt 4.4.0 refresh (beta 4) for testing, as of April 17th,
> 2020
>
>
> This release unleashes an altogether more powerful and flexible open
> source virtualization solution that encompasses hundreds of individual
> changes and a wide range of enhancements across the engine, storage,
> network, user interface, and analytics on top of oVirt 4.3.
>
>
I successfully installed an HCI setup with GlusterFS using this beta4. I
encountered the same problem as Gianluca, but his workaround worked for
me too.

Keep up the good work,

Joop

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSDDTEKPSJAQBVSDVQULK4WWJMA4KBIO/


[ovirt-users] Re: oVirt 4.4.0 Beta release refresh is now available for testing

2020-04-18 Thread Gianluca Cecchi
On Fri, Apr 17, 2020 at 6:40 PM Sandro Bonazzola 
wrote:

> oVirt 4.4.0 Beta release refresh is now available for testing
>
> The oVirt Project is excited to announce the availability of the beta
> release of oVirt 4.4.0 refresh (beta 4) for testing, as of April 17th, 2020
>
> This release unleashes an altogether more powerful and flexible open
> source virtualization solution that encompasses hundreds of individual
> changes and a wide range of enhancements across the engine, storage,
> network, user interface, and analytics on top of oVirt 4.3.
>
>
>
Hello,
I have tried the oVirt Node NG ISO to configure a single-host HCI
environment.
In the single-node wizard, at the beginning, in the window where you have
to enter the FQDN of the storage node, I see a JavaScript exception (a red
"Ooops!" in the top right of the Cockpit page signals it), both in Google
Chrome and in Firefox.
In practice you cannot type anything, and in Chrome's JS console I see:

app.js:43 Uncaught TypeError: Cannot read property 'checked' of null
at n.value (app.js:43)
at changeCallBack (app.js:43)
at Object.b (app.js:27)
at w (app.js:27)
at app.js:27
at S (app.js:27)
at T (app.js:27)
at Array.forEach ()
at _ (app.js:27)
at N (app.js:27)

A sort of workaround is to type one character (nothing appears), then
click the IPv6-related checkbox just above the input line, at which point
the first character shows up; repeating this, alternating between the
checkbox and one character at a time, makes each newly typed character
appear. You cannot type the full FQDN in one flow, because only the last
character appears once you click the checkbox...
This nevertheless let me type the whole storage hostname and go ahead
with the installation.
I got as far as the final step, "Finish Deployment", but when the host exits
global maintenance and the engine VM starts, the VM panics. I realized that
I missed the CPU pass-through flag for the hypervisor VM, so I'm going to
retry.
In the meantime, can anyone verify the JavaScript exception above?

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IKBLU764LCRHHEXCXL4MYGPIUXJEANXX/


[ovirt-users] Re: oVirt 4.4.0 Beta release refresh is now available for testing

2020-04-13 Thread eevans
Correct me if I am wrong, but isn't oVirt an HA cluster? I assume Pacemaker 
would be for specific apps like httpd or a database, correct? Gluster is more of 
a distributed file system. Given the performance complaints about Gluster, I'm 
just wondering whether there wouldn't be a better solution.

Just thinking out loud. 

Eric Evans
Digital Data Services LLC.
304.660.9080


-Original Message-
From: Uwe Laverenz  
Sent: Monday, April 13, 2020 1:57 PM
To: users@ovirt.org
Subject: [ovirt-users] Re: oVirt 4.4.0 Beta release refresh is now available 
for testing

Hi Eric,
On 13.04.20 at 18:15, eev...@digitaldatatechs.com wrote:
> I have a question for the developers: Why use gluster? Why not 
> Pacemaker or something with better performance stats?
> 
> Just curious.
> 
> Eric Evans

If I'm not mistaken, these two serve different purposes: Gluster(FS) is 
distributed storage software, while Pacemaker does resource management for HA 
cluster systems.

regards,
Uwe
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZQ5F2ZGBE2462XLXPJMMVKKRZLMUSZIF/


[ovirt-users] Re: oVirt 4.4.0 Beta release refresh is now available for testing

2020-04-13 Thread Uwe Laverenz

Hi Eric,
On 13.04.20 at 18:15, eev...@digitaldatatechs.com wrote:
I have a question for the developers: Why use gluster? Why not Pacemaker 
or something with better performance stats?


Just curious.

Eric Evans


If I'm not mistaken, these two serve different purposes: Gluster(FS) is 
distributed storage software, while Pacemaker does resource management for 
HA cluster systems.


regards,
Uwe
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/33JKX6SNGJZRO4D5HNTHDUBZEURLCDAX/


[ovirt-users] Re: oVirt 4.4.0 Beta release refresh is now available for testing

2020-04-13 Thread eevans
I have a question for the developers: Why use gluster? Why not Pacemaker or 
something with better performance stats?

Just curious.

 

Eric Evans

Digital Data Services LLC.

304.660.9080



 

From: Joop  
Sent: Monday, April 13, 2020 9:43 AM
To: Satheesaran Sundaramoorthi ; users@ovirt.org
Subject: [ovirt-users] Re: oVirt 4.4.0 Beta release refresh is now available 
for testing

 

On 13-4-2020 09:45, Satheesaran Sundaramoorthi wrote:

 

 

On Thu, Apr 9, 2020 at 3:37 PM Sandro Bonazzola <sbona...@redhat.com> wrote:

 

 

Does the hyperconverged installation now work with glusterfs?
I tested alpha/beta1/2 and the latter won't get past the storage step. 

 

Not yet. There was the issue with glusterfs storage domain and storage domain 
blocksize probe check.

This is addressed with a new ioprocess-1.4.1 package for the bug[1].

 

We are currently testing with this fix and it works well. But we are hitting 
yet another issue, where Hosted Engine deployment fails at the very last 
stage. A Gluster developer is looking into it as I respond here.

Soon we will have a working build, and I will let you know once these issues 
are settled.

 

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1820283 

 

Thanks,

sas

 

 

Thanks for the message. Looking through the logs I saw references to the 
blocksize, but since that check is contained in the Ansible scripts which get 
downloaded to the host, I gave up finding the source versions to see if I 
could get it going beyond that failing step. 

Anyway, thanks for informing me,

Regards,

Joop


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A7K5XCJZ4XFQ5ICQP3R7EZJACWBJ7EHL/


[ovirt-users] Re: oVirt 4.4.0 Beta release refresh is now available for testing

2020-04-13 Thread Joop
On 13-4-2020 09:45, Satheesaran Sundaramoorthi wrote:
>
>
> On Thu, Apr 9, 2020 at 3:37 PM Sandro Bonazzola wrote:
>
>
>
> Does the hyperconverged installation now work with glusterfs?
> I tested alpha/beta1/2 and the latter won't get past the
> storage step.
>
>
> Not yet. There was the issue with glusterfs storage domain and storage
> domain blocksize probe check.
> This is addressed with a new ioprocess-1.4.1 package for the bug[1].
>
> We are currently testing with this fix and it works well. But we are
> hitting yet another issue, where Hosted Engine deployment fails at the
> very last stage. A Gluster developer is looking into it as I respond
> here. Soon we will have a working build, and I will let you know once
> these issues are settled.
>
> [1] - https://bugzilla.redhat.com/show_bug.cgi?id=1820283
>
> Thanks,
> sas
>
>
Thanks for the message. Looking through the logs I saw references to the
blocksize, but since that check is contained in the Ansible scripts which
get downloaded to the host, I gave up finding the source versions to see
if I could get it going beyond that failing step.
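The blocksize check mentioned above is oVirt probing the minimum I/O block size of the storage before creating a storage domain (bug 1820283 tracks the Gluster case). A simplified Python sketch of the idea (the real check is done by the ioprocess package using O_DIRECT I/O; `statvfs` here is only an illustrative stand-in, and the temporary directory stands in for the real mount point):

```python
import os
import tempfile

def probe_block_size(path):
    """Return the filesystem's reported block size for a path.

    Illustrative stand-in: the real oVirt probe (ioprocess) performs
    O_DIRECT I/O to discover the minimum block size the storage accepts.
    """
    return os.statvfs(path).f_bsize

# A throwaway directory stands in for the Gluster mount point.
with tempfile.TemporaryDirectory() as mount_point:
    print(f"reported block size: {probe_block_size(mount_point)} bytes")
```

Storage reporting a 4K block size is what this deployment step tripped over before the ioprocess-1.4.1 fix.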

Anyway, thanks for informing me,

Regards,

Joop
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/23FN2OJPKM3DTBHHGM6J7IVEFO6Y2XYR/


[ovirt-users] Re: oVirt 4.4.0 Beta release refresh is now available for testing

2020-04-09 Thread Martin Perina
On Thu, Apr 9, 2020 at 10:49 AM Sandro Bonazzola 
wrote:

> oVirt 4.4.0 Beta release refresh is now available for testing
>
> The oVirt Project is excited to announce the availability of the beta
> release of oVirt 4.4.0 refresh for testing, as of April 9th, 2020
>
> This release unleashes an altogether more powerful and flexible open
> source virtualization solution that encompasses hundreds of individual
> changes and a wide range of enhancements across the engine, storage,
> network, user interface, and analytics on top of oVirt 4.3.
>
> Important notes before you try it
>
> Please note this is a Beta release.
>
> The oVirt Project makes no guarantees as to its suitability or usefulness.
>
> This pre-release must not be used in production.
>
> In particular, please note that upgrades from 4.3, and future upgrades from
> this beta to the final 4.4 release, are not supported.
>
> Some of the features included in oVirt 4.4.0 Beta require content that
> will be available in CentOS Linux 8.2 but can’t be tested on RHEL 8.2 beta
> yet due to some incompatibility in openvswitch package shipped in CentOS
> Virt SIG which requires to rebuild openvswitch on top of CentOS 8.2.
>
> Known Issues
>
> - ovirt-imageio development is still in progress. In this beta you can’t
>   upload images to data domains using the engine web application. You can
>   still copy ISO images into the deprecated ISO domain for installing VMs.
>   Upload and download to/from data domains is fully functional via the
>   REST API and SDK.
>   For uploading and downloading via the SDK, please see:
>   - https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py
>   - https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_disk.py
>   Both scripts are standalone command-line tools; try --help for more info.
>
>
> Installation instructions
>
> For the engine: either use appliance or:
>
> - Install CentOS Linux 8 minimal from
> http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86_64-dvd1.iso
>
> - dnf install
> https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
>
> - dnf update (reboot if needed)
>
> - dnf module enable -y javapackages-tools pki-deps 389-ds
>

This is not correct, we should use:

  dnf module enable -y javapackages-tools pki-deps postgresql:12

> - dnf install ovirt-engine
>
> - engine-setup
>
> For the nodes:
>
> Either use oVirt Node ISO or:
>
> - Install CentOS Linux 8 from
> http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86_64-dvd1.iso
> ; select minimal installation
>
> - dnf install
> https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
>
> - dnf update (reboot if needed)
>
> - Attach the host to engine and let it be deployed.
>
> What’s new in oVirt 4.4.0 Beta?
>
>-
>
>Hypervisors based on CentOS Linux 8 (rebuilt from award winning
>RHEL8), for both oVirt Node and standalone CentOS Linux hosts
>-
>
>Easier network management and configuration flexibility with
>NetworkManager
>-
>
>VMs based on a more modern Q35 chipset with legacy seabios and UEFI
>firmware
>-
>
>Support for direct passthrough of local host disks to VMs
>-
>
>Live migration improvements for High Performance guests.
>-
>
>New Windows Guest tools installer based on WiX framework now moved to
>VirtioWin project
>-
>
>Dropped support for cluster level prior to 4.2
>-
>
>Dropped SDK3 support
>-
>
>4K disk support only for file-based storage; iSCSI/FC storage does not
>support 4K disks yet.
>-
>
>Exporting a VM to a data domain
>-
>
>Editing of floating disks
>-
>
>Integrating ansible-runner into engine, which allows a more detailed
>monitoring of playbooks executed from engine
>-
>
>Adding/reinstalling hosts are now completely based on Ansible
>-
>
>The OpenStack Neutron Agent cannot be configured by oVirt anymore, it
>should be configured by TripleO instead
>
>
> This release is available now on x86_64 architecture for:
>
> * Red Hat Enterprise Linux 8.1
>
> * CentOS Linux (or similar) 8.1
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
> for:
>
> * Red Hat Enterprise Linux 8.1
>
> * CentOS Linux (or similar) 8.1
>
> * oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)
>
> See the release notes [1] for installation instructions and a list of new
> features and bugs fixed.
>
> If you manage more than one oVirt instance, OKD, or RDO, we also recommend
> trying ManageIQ.
>
> In such a case, please be sure to take the qc2 image and not the ova
> image.
>
> Notes:
>
> - oVirt Appliance is already available for CentOS Linux 8
>
> - oVirt Node NG is already available for CentOS Linux 8
>
> Additional Resources:
>
> * Read more about the oVirt 4.4.0 release highlights:
> 

[ovirt-users] Re: oVirt 4.4.0 Beta release refresh is now available for testing

2020-04-07 Thread Dominik Holler
On Mon, Apr 6, 2020 at 9:48 AM Sandro Bonazzola  wrote:

>
>
> On Sun, Apr 5, 2020 at 7:32 PM Strahil Nikolov <
> hunter86...@yahoo.com> wrote:
>
>>
>> Hey Sandro,
>>
>> Can you clarify which CPUs will not be supported  in 4.4 ?
>>
>
> I can give the list of supported CPU according to ovirt-engine code:
>
> select fn_db_add_config_value('ServerCPUList',
> '1:Intel Nehalem Family:vmx,nx,model_Nehalem:Nehalem:x86_64; '
> || '2:Secure Intel Nehalem
> Family:vmx,spec_ctrl,ssbd,md_clear,model_Nehalem:Nehalem,+spec-ctrl,+ssbd,+md-clear:x86_64;
> '
> || '3:Intel Westmere Family:aes,vmx,nx,model_Westmere:Westmere:x86_64; '
> || '4:Secure Intel Westmere
> Family:aes,vmx,spec_ctrl,ssbd,md_clear,model_Westmere:Westmere,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64;
> '
> || '5:Intel SandyBridge
> Family:vmx,nx,model_SandyBridge:SandyBridge:x86_64; '
> || '6:Secure Intel SandyBridge
> Family:vmx,spec_ctrl,ssbd,md_clear,model_SandyBridge:SandyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64;
> '
> || '7:Intel IvyBridge Family:vmx,nx,model_IvyBridge:IvyBridge:x86_64; '
> || '8:Secure Intel IvyBridge
> Family:vmx,spec_ctrl,ssbd,md_clear,model_IvyBridge:IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64;
> '
> || '9:Intel Haswell Family:vmx,nx,model_Haswell:Haswell:x86_64; '
> || '10:Secure Intel Haswell
> Family:vmx,spec_ctrl,ssbd,md_clear,model_Haswell:Haswell,+spec-ctrl,+ssbd,+md-clear:x86_64;
> '
> || '11:Intel Broadwell Family:vmx,nx,model_Broadwell:Broadwell:x86_64; '
> || '12:Secure Intel Broadwell
> Family:vmx,spec_ctrl,ssbd,md_clear,model_Broadwell:Broadwell,+spec-ctrl,+ssbd,+md-clear:x86_64;
> '
> || '13:Intel Skylake Client
> Family:vmx,nx,model_Skylake-Client:Skylake-Client:x86_64; '
> || '14:Secure Intel Skylake Client
> Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Client:Skylake-Client,+spec-ctrl,+ssbd,+md-clear:x86_64;
> '
> || '15:Intel Skylake Server
> Family:vmx,nx,model_Skylake-Server:Skylake-Server:x86_64; '
> || '16:Secure Intel Skylake Server
> Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Server:Skylake-Server,+spec-ctrl,+ssbd,+md-clear:x86_64;
> '
> || '17:Intel Cascadelake Server
> Family:vmx,model_Cascadelake-Server:Cascadelake-Server,-hle,-rtm,+arch-capabilities:x86_64;
> '
> || '18:Secure Intel Cascadelake Server
> Family:vmx,md-clear,mds-no,model_Cascadelake-Server:Cascadelake-Server,+md-clear,+mds-no,-hle,-rtm,+tsx-ctrl,+arch-capabilities:x86_64;
> '
> || '1:AMD Opteron G4:svm,nx,model_Opteron_G4:Opteron_G4:x86_64; '
> || '2:AMD Opteron G5:svm,nx,model_Opteron_G5:Opteron_G5:x86_64; '
> || '3:AMD EPYC:svm,nx,model_EPYC:EPYC:x86_64; '
> || '4:Secure AMD
> EPYC:svm,nx,ibpb,ssbd,model_EPYC:EPYC,+ibpb,+virt-ssbd:x86_64; '
> || '1:IBM POWER8:powernv,model_POWER8:POWER8:ppc64; '
> || '2:IBM POWER9:powernv,model_POWER9:POWER9:ppc64; '
> || '1:IBM z114, z196:sie,model_z196-base:z196-base:s390x; '
> || '2:IBM zBC12, zEC12:sie,model_zEC12-base:zEC12-base:s390x; '
> || '3:IBM z13s, z13:sie,model_z13-base:z13-base:s390x; '
> || '4:IBM z14:sie,model_z14-base:z14-base:s390x;',
> '4.4');
>
>
>
>> Also, does oVirt 4.4 support teaming or it is still staying with bonding.
>> Network Manager was mentioned, but it's not very clear.
>>
>
> +Dominik Holler  can you please reply to this?
>
>

oVirt stays with bonding.
Reasons why oVirt should support teaming are gathered in Bug 1351510 - [RFE]
Support using Team devices instead of bond devices:
https://bugzilla.redhat.com/show_bug.cgi?id=1351510



> What is the version of gluster bundled  with 4.4 ?
>>
>
> Latest Gluster 7 shipped by CentOS Storage SIG, right now it is 7.4 (
> https://docs.gluster.org/en/latest/release-notes/7.4/)
>
>
>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QPOMSOKTZSWZDWMZ4ZXBL3653YIGSHJ5/


[ovirt-users] Re: oVirt 4.4.0 Beta release refresh is now available for testing

2020-04-07 Thread Sandro Bonazzola
On Mon, Apr 6, 2020 at 7:53 PM Strahil Nikolov <
hunter86...@yahoo.com> wrote:

> On April 6, 2020 10:47:33 AM GMT+03:00, Sandro Bonazzola <
> sbona...@redhat.com> wrote:
> >On Sun, Apr 5, 2020 at 7:32 PM Strahil Nikolov <
> >hunter86...@yahoo.com> wrote:
> >
> >>
> >> Hey Sandro,
> >>
> >> Can you clarify which CPUs will not be supported  in 4.4 ?
> >>
> >
> >I can give the list of supported CPU according to ovirt-engine code:
> >
> >select fn_db_add_config_value('ServerCPUList',
> >'1:Intel Nehalem Family:vmx,nx,model_Nehalem:Nehalem:x86_64; '
> >|| '2:Secure Intel Nehalem
>
> >Family:vmx,spec_ctrl,ssbd,md_clear,model_Nehalem:Nehalem,+spec-ctrl,+ssbd,+md-clear:x86_64;
> >'
> >|| '3:Intel Westmere Family:aes,vmx,nx,model_Westmere:Westmere:x86_64;
> >'
> >|| '4:Secure Intel Westmere
>
> >Family:aes,vmx,spec_ctrl,ssbd,md_clear,model_Westmere:Westmere,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64;
> >'
> >|| '5:Intel SandyBridge
> >Family:vmx,nx,model_SandyBridge:SandyBridge:x86_64;
> >'
> >|| '6:Secure Intel SandyBridge
>
> >Family:vmx,spec_ctrl,ssbd,md_clear,model_SandyBridge:SandyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64;
> >'
> >|| '7:Intel IvyBridge Family:vmx,nx,model_IvyBridge:IvyBridge:x86_64; '
> >|| '8:Secure Intel IvyBridge
>
> >Family:vmx,spec_ctrl,ssbd,md_clear,model_IvyBridge:IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64;
> >'
> >|| '9:Intel Haswell Family:vmx,nx,model_Haswell:Haswell:x86_64; '
> >|| '10:Secure Intel Haswell
>
> >Family:vmx,spec_ctrl,ssbd,md_clear,model_Haswell:Haswell,+spec-ctrl,+ssbd,+md-clear:x86_64;
> >'
> >|| '11:Intel Broadwell Family:vmx,nx,model_Broadwell:Broadwell:x86_64;
> >'
> >|| '12:Secure Intel Broadwell
>
> >Family:vmx,spec_ctrl,ssbd,md_clear,model_Broadwell:Broadwell,+spec-ctrl,+ssbd,+md-clear:x86_64;
> >'
> >|| '13:Intel Skylake Client
> >Family:vmx,nx,model_Skylake-Client:Skylake-Client:x86_64; '
> >|| '14:Secure Intel Skylake Client
>
> >Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Client:Skylake-Client,+spec-ctrl,+ssbd,+md-clear:x86_64;
> >'
> >|| '15:Intel Skylake Server
> >Family:vmx,nx,model_Skylake-Server:Skylake-Server:x86_64; '
> >|| '16:Secure Intel Skylake Server
>
> >Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Server:Skylake-Server,+spec-ctrl,+ssbd,+md-clear:x86_64;
> >'
> >|| '17:Intel Cascadelake Server
>
> >Family:vmx,model_Cascadelake-Server:Cascadelake-Server,-hle,-rtm,+arch-capabilities:x86_64;
> >'
> >|| '18:Secure Intel Cascadelake Server
>
> >Family:vmx,md-clear,mds-no,model_Cascadelake-Server:Cascadelake-Server,+md-clear,+mds-no,-hle,-rtm,+tsx-ctrl,+arch-capabilities:x86_64;
> >'
> >|| '1:AMD Opteron G4:svm,nx,model_Opteron_G4:Opteron_G4:x86_64; '
> >|| '2:AMD Opteron G5:svm,nx,model_Opteron_G5:Opteron_G5:x86_64; '
> >|| '3:AMD EPYC:svm,nx,model_EPYC:EPYC:x86_64; '
> >|| '4:Secure AMD
> >EPYC:svm,nx,ibpb,ssbd,model_EPYC:EPYC,+ibpb,+virt-ssbd:x86_64; '
> >|| '1:IBM POWER8:powernv,model_POWER8:POWER8:ppc64; '
> >|| '2:IBM POWER9:powernv,model_POWER9:POWER9:ppc64; '
> >|| '1:IBM z114, z196:sie,model_z196-base:z196-base:s390x; '
> >|| '2:IBM zBC12, zEC12:sie,model_zEC12-base:zEC12-base:s390x; '
> >|| '3:IBM z13s, z13:sie,model_z13-base:z13-base:s390x; '
> >|| '4:IBM z14:sie,model_z14-base:z14-base:s390x;',
> >'4.4');
> >
> >
> >
> >> Also, does oVirt 4.4 support teaming or it is still staying with
> >bonding.
> >> Network Manager was mentioned, but it's not very clear.
> >>
> >
> >+Dominik Holler  can you please reply to this?
> >
> >
> >> What is the version of gluster bundled  with 4.4 ?
> >>
> >
> >Latest Gluster 7 shipped by CentOS Storage SIG, right now it is 7.4 (
> >https://docs.gluster.org/en/latest/release-notes/7.4/)
> >
> >
> >
> >>
> >> Best Regards,
> >> Strahil Nikolov
> >>
> >>
>
> Thanks  Sandro,
>
> for your  prompt reply.
>
> When is oVirt 4.4  GA expected ?
> Currently oVirt 4.4  beta doesn't support  migration  to 4.4 GA, which is
> the main reason for my hesitation to switch over.
>
> Sadly, my v4.3 is currently having storage issues (can't activate my
> storage domains) and I am considering switching to 4.4 beta or powering
> off the lab. The main question for me would be 'the time left' till GA.
>

The short answer is: 4.4 will GA as soon as it is ready. To elaborate, we
still need to finish the work on ovirt-imageio; once it is in, we'll switch
to the RC phase. If no critical blockers show up, we'll release 4.4.0 GA
shortly after.




>
> Best Regards,
> Strahil Nikolov
>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to 

[ovirt-users] Re: oVirt 4.4.0 Beta release refresh is now available for testing

2020-04-06 Thread Strahil Nikolov
On April 6, 2020 10:47:33 AM GMT+03:00, Sandro Bonazzola  
wrote:
>On Sun, Apr 5, 2020 at 7:32 PM Strahil Nikolov <
>hunter86...@yahoo.com> wrote:
>
>>
>> Hey Sandro,
>>
>> Can you clarify which CPUs will not be supported  in 4.4 ?
>>
>
>I can give the list of supported CPU according to ovirt-engine code:
>
>select fn_db_add_config_value('ServerCPUList',
>'1:Intel Nehalem Family:vmx,nx,model_Nehalem:Nehalem:x86_64; '
>|| '2:Secure Intel Nehalem
>Family:vmx,spec_ctrl,ssbd,md_clear,model_Nehalem:Nehalem,+spec-ctrl,+ssbd,+md-clear:x86_64;
>'
>|| '3:Intel Westmere Family:aes,vmx,nx,model_Westmere:Westmere:x86_64;
>'
>|| '4:Secure Intel Westmere
>Family:aes,vmx,spec_ctrl,ssbd,md_clear,model_Westmere:Westmere,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64;
>'
>|| '5:Intel SandyBridge
>Family:vmx,nx,model_SandyBridge:SandyBridge:x86_64;
>'
>|| '6:Secure Intel SandyBridge
>Family:vmx,spec_ctrl,ssbd,md_clear,model_SandyBridge:SandyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64;
>'
>|| '7:Intel IvyBridge Family:vmx,nx,model_IvyBridge:IvyBridge:x86_64; '
>|| '8:Secure Intel IvyBridge
>Family:vmx,spec_ctrl,ssbd,md_clear,model_IvyBridge:IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64;
>'
>|| '9:Intel Haswell Family:vmx,nx,model_Haswell:Haswell:x86_64; '
>|| '10:Secure Intel Haswell
>Family:vmx,spec_ctrl,ssbd,md_clear,model_Haswell:Haswell,+spec-ctrl,+ssbd,+md-clear:x86_64;
>'
>|| '11:Intel Broadwell Family:vmx,nx,model_Broadwell:Broadwell:x86_64;
>'
>|| '12:Secure Intel Broadwell
>Family:vmx,spec_ctrl,ssbd,md_clear,model_Broadwell:Broadwell,+spec-ctrl,+ssbd,+md-clear:x86_64;
>'
>|| '13:Intel Skylake Client
>Family:vmx,nx,model_Skylake-Client:Skylake-Client:x86_64; '
>|| '14:Secure Intel Skylake Client
>Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Client:Skylake-Client,+spec-ctrl,+ssbd,+md-clear:x86_64;
>'
>|| '15:Intel Skylake Server
>Family:vmx,nx,model_Skylake-Server:Skylake-Server:x86_64; '
>|| '16:Secure Intel Skylake Server
>Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Server:Skylake-Server,+spec-ctrl,+ssbd,+md-clear:x86_64;
>'
>|| '17:Intel Cascadelake Server
>Family:vmx,model_Cascadelake-Server:Cascadelake-Server,-hle,-rtm,+arch-capabilities:x86_64;
>'
>|| '18:Secure Intel Cascadelake Server
>Family:vmx,md-clear,mds-no,model_Cascadelake-Server:Cascadelake-Server,+md-clear,+mds-no,-hle,-rtm,+tsx-ctrl,+arch-capabilities:x86_64;
>'
>|| '1:AMD Opteron G4:svm,nx,model_Opteron_G4:Opteron_G4:x86_64; '
>|| '2:AMD Opteron G5:svm,nx,model_Opteron_G5:Opteron_G5:x86_64; '
>|| '3:AMD EPYC:svm,nx,model_EPYC:EPYC:x86_64; '
>|| '4:Secure AMD
>EPYC:svm,nx,ibpb,ssbd,model_EPYC:EPYC,+ibpb,+virt-ssbd:x86_64; '
>|| '1:IBM POWER8:powernv,model_POWER8:POWER8:ppc64; '
>|| '2:IBM POWER9:powernv,model_POWER9:POWER9:ppc64; '
>|| '1:IBM z114, z196:sie,model_z196-base:z196-base:s390x; '
>|| '2:IBM zBC12, zEC12:sie,model_zEC12-base:zEC12-base:s390x; '
>|| '3:IBM z13s, z13:sie,model_z13-base:z13-base:s390x; '
>|| '4:IBM z14:sie,model_z14-base:z14-base:s390x;',
>'4.4');
>
>
>
>> Also, does oVirt 4.4 support teaming or it is still staying with
>bonding.
>> Network Manager was mentioned, but it's not very clear.
>>
>
>+Dominik Holler  can you please reply to this?
>
>
>> What is the version of gluster bundled  with 4.4 ?
>>
>
>Latest Gluster 7 shipped by CentOS Storage SIG, right now it is 7.4 (
>https://docs.gluster.org/en/latest/release-notes/7.4/)
>
>
>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>

Thanks, Sandro, for your prompt reply.

When is oVirt 4.4 GA expected?
Currently the oVirt 4.4 beta doesn't support migration to 4.4 GA, which is the 
main reason for my hesitation to switch over.

Sadly, my v4.3 is currently having storage issues (can't activate my storage 
domains) and I am considering switching to 4.4 beta or powering off the lab. 
The main question for me would be 'the time left' till GA.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MHEUUYPHGFFYJ4O35RDYIJRTFCGRPWVC/


[ovirt-users] Re: oVirt 4.4.0 Beta release refresh is now available for testing

2020-04-06 Thread Sandro Bonazzola
On Sun, Apr 5, 2020 at 7:32 PM Strahil Nikolov <
hunter86...@yahoo.com> wrote:

>
> Hey Sandro,
>
> Can you clarify which CPUs will not be supported  in 4.4 ?
>

I can give the list of supported CPU according to ovirt-engine code:

select fn_db_add_config_value('ServerCPUList',
'1:Intel Nehalem Family:vmx,nx,model_Nehalem:Nehalem:x86_64; '
|| '2:Secure Intel Nehalem
Family:vmx,spec_ctrl,ssbd,md_clear,model_Nehalem:Nehalem,+spec-ctrl,+ssbd,+md-clear:x86_64;
'
|| '3:Intel Westmere Family:aes,vmx,nx,model_Westmere:Westmere:x86_64; '
|| '4:Secure Intel Westmere
Family:aes,vmx,spec_ctrl,ssbd,md_clear,model_Westmere:Westmere,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64;
'
|| '5:Intel SandyBridge Family:vmx,nx,model_SandyBridge:SandyBridge:x86_64;
'
|| '6:Secure Intel SandyBridge
Family:vmx,spec_ctrl,ssbd,md_clear,model_SandyBridge:SandyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64;
'
|| '7:Intel IvyBridge Family:vmx,nx,model_IvyBridge:IvyBridge:x86_64; '
|| '8:Secure Intel IvyBridge
Family:vmx,spec_ctrl,ssbd,md_clear,model_IvyBridge:IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64;
'
|| '9:Intel Haswell Family:vmx,nx,model_Haswell:Haswell:x86_64; '
|| '10:Secure Intel Haswell
Family:vmx,spec_ctrl,ssbd,md_clear,model_Haswell:Haswell,+spec-ctrl,+ssbd,+md-clear:x86_64;
'
|| '11:Intel Broadwell Family:vmx,nx,model_Broadwell:Broadwell:x86_64; '
|| '12:Secure Intel Broadwell
Family:vmx,spec_ctrl,ssbd,md_clear,model_Broadwell:Broadwell,+spec-ctrl,+ssbd,+md-clear:x86_64;
'
|| '13:Intel Skylake Client
Family:vmx,nx,model_Skylake-Client:Skylake-Client:x86_64; '
|| '14:Secure Intel Skylake Client
Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Client:Skylake-Client,+spec-ctrl,+ssbd,+md-clear:x86_64;
'
|| '15:Intel Skylake Server
Family:vmx,nx,model_Skylake-Server:Skylake-Server:x86_64; '
|| '16:Secure Intel Skylake Server
Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Server:Skylake-Server,+spec-ctrl,+ssbd,+md-clear:x86_64;
'
|| '17:Intel Cascadelake Server
Family:vmx,model_Cascadelake-Server:Cascadelake-Server,-hle,-rtm,+arch-capabilities:x86_64;
'
|| '18:Secure Intel Cascadelake Server
Family:vmx,md-clear,mds-no,model_Cascadelake-Server:Cascadelake-Server,+md-clear,+mds-no,-hle,-rtm,+tsx-ctrl,+arch-capabilities:x86_64;
'
|| '1:AMD Opteron G4:svm,nx,model_Opteron_G4:Opteron_G4:x86_64; '
|| '2:AMD Opteron G5:svm,nx,model_Opteron_G5:Opteron_G5:x86_64; '
|| '3:AMD EPYC:svm,nx,model_EPYC:EPYC:x86_64; '
|| '4:Secure AMD
EPYC:svm,nx,ibpb,ssbd,model_EPYC:EPYC,+ibpb,+virt-ssbd:x86_64; '
|| '1:IBM POWER8:powernv,model_POWER8:POWER8:ppc64; '
|| '2:IBM POWER9:powernv,model_POWER9:POWER9:ppc64; '
|| '1:IBM z114, z196:sie,model_z196-base:z196-base:s390x; '
|| '2:IBM zBC12, zEC12:sie,model_zEC12-base:zEC12-base:s390x; '
|| '3:IBM z13s, z13:sie,model_z13-base:z13-base:s390x; '
|| '4:IBM z14:sie,model_z14-base:z14-base:s390x;',
'4.4');
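Each entry in the `ServerCPUList` value above follows the pattern `level:name:cpu_flags:vdsm_verb:architecture`, with entries separated by semicolons (the field layout is inferred from the value itself, not from a documented schema). A minimal sketch that extracts the supported CPU models:

```python
def parse_server_cpu_list(value):
    """Split an oVirt ServerCPUList config value into per-CPU dicts.

    Field layout (level:name:flags:verb:arch) is inferred from the
    SQL above, not from a documented schema.
    """
    cpus = []
    for entry in value.split(';'):
        entry = entry.strip()
        if not entry:
            continue
        level, name, flags, verb, arch = entry.split(':')
        cpus.append({
            'level': int(level),
            'name': name,
            'flags': flags.split(','),
            'verb': verb,
            'arch': arch,
        })
    return cpus

# Two entries taken verbatim from the value above.
sample = (
    '1:Intel Nehalem Family:vmx,nx,model_Nehalem:Nehalem:x86_64; '
    '3:AMD EPYC:svm,nx,model_EPYC:EPYC:x86_64;'
)
for cpu in parse_server_cpu_list(sample):
    print(cpu['name'], cpu['arch'])
```

Note that the level counter restarts for each CPU vendor/architecture family, as visible in the SQL, so the level alone does not identify an entry.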



> Also, does oVirt 4.4 support teaming or it is still staying with bonding.
> Network Manager was mentioned, but it's not very clear.
>

+Dominik Holler  can you please reply to this?


> What is the version of gluster bundled  with 4.4 ?
>

Latest Gluster 7 shipped by CentOS Storage SIG, right now it is 7.4 (
https://docs.gluster.org/en/latest/release-notes/7.4/)



>
> Best Regards,
> Strahil Nikolov
>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VEMYK54JV2XIT4I2KAN4GY24WU7RUBNR/


[ovirt-users] Re: oVirt 4.4.0 Beta release refresh is now available for testing

2020-04-05 Thread Strahil Nikolov
On April 3, 2020 5:19:35 PM GMT+03:00, Sandro Bonazzola  
wrote:
>oVirt 4.4.0 Beta release refresh is now available for testing
>
>The oVirt Project is excited to announce the availability of the beta
>release of oVirt 4.4.0 refresh for testing, as of April 3rd, 2020
>
>This release unleashes an altogether more powerful and flexible open
>source
>virtualization solution that encompasses hundreds of individual changes
>and
>a wide range of enhancements across the engine, storage, network, user
>interface, and analytics on top of oVirt 4.3.
>
>Important notes before you try it
>
>Please note this is a Beta release.
>
>The oVirt Project makes no guarantees as to its suitability or
>usefulness.
>
>This pre-release must not be used in production.
>
>In particular, please note that upgrades from 4.3, and future upgrades
>from this beta to the final 4.4 release, are not supported.
>
>Some of the features included in oVirt 4.4.0 Beta require content that
>will be available in CentOS Linux 8.2. They can’t be tested on RHEL 8.2
>beta yet due to an incompatibility in the openvswitch package shipped in
>the CentOS Virt SIG, which requires rebuilding openvswitch on top of
>CentOS 8.2.
>
>Known Issues
>
>   -
>
> ovirt-imageio development is still in progress. In this beta you can’t
>upload images to data domains using the engine web application. You can
>still copy ISO images into the deprecated ISO domain for installing VMs.
>Upload and download to/from data domains are fully functional via the
>REST API and SDK.
>   For uploading and downloading via the SDK, please see:
> -
>https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py
> -
>https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_disk.py
>Both scripts are standalone command line tools, try --help for more
>info.
>
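As a concrete illustration, an upload via the standalone SDK script might look like the following. The flag names, URLs, and paths shown are assumptions based on typical SDK example scripts, not confirmed for this beta, so verify them with `--help` first:

```shell
# Print the actual options supported by your SDK version first:
python3 upload_disk.py --help

# Illustrative invocation -- the flags, engine URL, and paths below are
# assumptions; adjust them to match the --help output.
python3 upload_disk.py \
    --engine-url https://engine.example.com \
    --username admin@internal \
    --password-file /root/engine-password \
    --cafile /etc/pki/ovirt-engine/ca.pem \
    --sd-name my-data-domain \
    /var/tmp/disk.qcow2
```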
>
>Installation instructions
>
>For the engine: either use appliance or:
>
>- Install CentOS Linux 8 minimal from
>http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86_64-dvd1.iso
>
>- dnf install
>https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
>
>- dnf update (reboot if needed)
>
>- dnf module enable -y javapackages-tools pki-deps 389-ds
>
>- dnf install ovirt-engine
>
>- engine-setup
>
>For the nodes:
>
>Either use oVirt Node ISO or:
>
>- Install CentOS Linux 8 from
>http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86_64-dvd1.iso
>; select minimal installation
>
>- dnf install
>https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
>
>- dnf update (reboot if needed)
>
>- Attach the host to engine and let it be deployed.
>
>What’s new in oVirt 4.4.0 Beta?
>
>   -
>
>Hypervisors based on CentOS Linux 8 (rebuilt from the award-winning RHEL 8),
>   for both oVirt Node and standalone CentOS Linux hosts
>   -
>
>   Easier network management and configuration flexibility with
>   NetworkManager
>   -
>
>   VMs based on a more modern Q35 chipset with legacy seabios and UEFI
>   firmware
>   -
>
>   Support for direct passthrough of local host disks to VMs
>   -
>
>   Live migration improvements for High Performance guests.
>   -
>
>  New Windows guest tools installer based on the WiX framework, now moved
>   to the VirtioWin project
>   -
>
>   Dropped support for cluster level prior to 4.2
>   -
>
>   Dropped SDK3 support
>   -
>
>  4K disk support only for file-based storage; iSCSI/FC storage does not
>   support 4K disks yet.
>   -
>
>   Exporting a VM to a data domain
>   -
>
>   Editing of floating disks
>   -
>
>   Integrating ansible-runner into the engine, which allows more detailed
>   monitoring of playbooks executed from the engine
>   -
>
>   Adding/reinstalling hosts are now completely based on Ansible
>   -
>
>  The OpenStack Neutron Agent cannot be configured by oVirt anymore; it
>   should be configured by TripleO instead
>
>
>This release is available now on x86_64 architecture for:
>
>* Red Hat Enterprise Linux 8.1
>
>* CentOS Linux (or similar) 8.1
>
>This release supports Hypervisor Hosts on x86_64 and ppc64le
>architectures
>for:
>
>* Red Hat Enterprise Linux 8.1
>
>* CentOS Linux (or similar) 8.1
>
>* oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)
>
>See the release notes [1] for installation instructions and a list of
>new
>features and bugs fixed.
>
>If you manage more than one oVirt instance, OKD or RDO, we also
>recommend trying ManageIQ .
>
>In such a case, please be sure to take the qcow2 image and not the ova
>image.
>
>Notes:
>
>- oVirt Appliance is already available for CentOS Linux 8
>
>- oVirt Node NG is already available for CentOS Linux 8
>
>Additional Resources:
>
>* Read more about the oVirt 4.4.0 release highlights:
>http://www.ovirt.org/release/4.4.0/
>
>* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
>
>* Check out the latest project news on the oVirt blog:
>http://www.ovirt.org/blog/
>
>
>[1]