[ovirt-users] Re: Cannot start ppc64le VM's

2020-06-05 Thread Vinícius Ferrão via Users
Hi Michal

On 5 Jun 2020, at 04:39, Michal Skrivanek <michal.skriva...@redhat.com> wrote:



On 5 Jun 2020, at 08:19, Vinícius Ferrão via Users <users@ovirt.org> wrote:

Hello, I’m trying to run ppc64le VM’s on POWER9 but qemu-kvm fails complaining 
about NUMA issues:

That is not the line you should be looking at; it’s just a harmless warning.
I suppose it’s the other one, about the Spectre fixes.

VM ppc64le.local.versatushpc.com.br 
is down with error. Exit message: internal error: qemu unexpectedly closed the 
monitor: 2020-06-05T06:16:10.716052Z qemu-kvm: warning: CPU(s) not present in 
any NUMA nodes: CPU 4 [core-id: 4], CPU 5 [core-id: 5], CPU 6 [core-id: 6], CPU 
7 [core-id: 7], CPU 8 [core-id: 8], CPU 9 [core-id: 9], CPU 10 [core-id: 10], 
CPU 11 [core-id: 11], CPU 12 [core-id: 12], CPU 13 [core-id: 13], CPU 14 
[core-id: 14], CPU 15 [core-id: 15] 2020-06-05T06:16:10.716067Z qemu-kvm: 
warning: All CPU(s) up to maxcpus should be described in NUMA config, ability 
to start up with partial NUMA mappings is obsoleted and will be removed in 
future 2020-06-05T06:16:11.155924Z qemu-kvm: Requested safe indirect branch 
capability level not supported by kvm, try cap-ibs=fixed-ibs.

Any idea of what’s happening?

I found some links, but I’m not sure if they are related or not:
https://bugzilla.redhat.com/show_bug.cgi?id=1732726
https://bugzilla.redhat.com/show_bug.cgi?id=1592648

Yes, they look relevant if that’s the hardware you have. We do use the 
pseries-rhel7.6.0-sxxm machine type in 4.3 (not in 4.4; upgrading would be the 
preferred solution).
If you don’t care about security you can also modify the machine type per VM 
(or in the engine db for all VMs) to "pseries-rhel7.6.0".
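
A minimal sketch of what that per-VM override could look like, assuming the
4.3-era REST API and its custom_emulated_machine property (verify both against
your engine version before relying on this; ENGINE_FQDN, PASSWORD and VM_ID are
placeholders):

  # Which pseries machine types does this host's qemu actually expose?
  virsh -r domcapabilities | grep -i pseries

  # Override the emulated machine for a single VM through the engine REST API.
  curl -k -u 'admin@internal:PASSWORD' -X PUT \
       -H 'Content-Type: application/xml' \
       -d '<vm><custom_emulated_machine>pseries-rhel7.6.0</custom_emulated_machine></vm>' \
       "https://ENGINE_FQDN/ovirt-engine/api/vms/VM_ID"

The change would only take effect on the next cold start of the VM.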

I’m using an AC922 machine.

In fact I can boot the VMs with pseries-rhel7.6.0 but not with 
pseries-rhel7.6.0-sxxm; how did you make pseries-rhel7.6.0-sxxm work on the 
4.3 release?

# lscpu
Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                128
On-line CPU(s) list:   0-127
Thread(s) per core:    4
Core(s) per socket:    16
Socket(s):             2
NUMA node(s):          6
Model:                 2.2 (pvr 004e 1202)
Model name:            POWER9, altivec supported
CPU max MHz:           3800.
CPU min MHz:           2300.
L1d cache:             32K
L1i cache:             32K
L2 cache:              512K
L3 cache:              10240K
NUMA node0 CPU(s):     0-63
NUMA node8 CPU(s):     64-127
NUMA node252 CPU(s):
NUMA node253 CPU(s):
NUMA node254 CPU(s):
NUMA node255 CPU(s):

Thank you for helping out.


Thanks,
michal

Thanks,


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TH36FTKGIQR2WZ5D7KJUYLY46C5GXO7Z/


[ovirt-users] Re: Q: Which types of tests and tools are used?

2020-06-05 Thread Sandro Bonazzola
On Sun, 31 May 2020 at 21:47, Juergen Novak wrote:

> Hi,
>
> can anybody help me to find some information about test types used in
> the project and tools used?
>
> Particularly interesting would be tools and tests used for the Python
> coding, but also any information about Java would be appreciated.
>
>
> I already scanned the documentation, but I mainly found only information
> about Mocking tools.
>
>
Hi, if you are interested in joining the development or test writing for
the project, I would also suggest joining the devel mailing list.


> Thank you!
>
> /juergen
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OVHS6QUEGSNLZRKXIKDQFR6PKYKL4CBE/
>


-- 

Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA
sbona...@redhat.com

Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SSAZA25BYNPM6ND2NXY2HYXICUS2BP3V/


[ovirt-users] Re: Ovirt 4.4 Migration assistance needed.

2020-06-05 Thread Strahil Nikolov via Users
Hello Tal, Michal,

What do you think about the plan?
Anything I have to be careful about?


Best Regards,
Strahil Nikolov


On Friday, 22 May 2020 at 19:01:42 GMT+3, Sandro Bonazzola wrote:

On Thu, 21 May 2020 at 17:08, Strahil Nikolov via Users wrote:
> Hello All,
> 
> I would like to ask for some assistance with the planning of the upgrade
> to 4.4.
> 
> I have issues with the OVN (it doesn't work at all), thus I would like to
> start fresh with the HE.
> 
> The plan so far (downtime is not an issue):
> 
> 1. Reinstall the nodes one by one and rejoin them in the Gluster TSP
> 2. Wipe the HostedEngine's gluster volume
> 3. Deploy a fresh hosted engine (see the rough command sketch below)
> 4. Import the storage domains (gluster) back to the engine and import the
> VMs
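
A rough, command-level sketch of steps 2 and 3 above; the volume name, brick
paths and hostnames are placeholders, and the flow should be checked against
the oVirt 4.4 hosted-engine documentation before use:

  # Step 2 (sketch): drop the old hosted-engine volume once the old HE VM is gone.
  gluster volume stop engine
  gluster volume delete engine
  rm -rf /gluster_bricks/engine/engine      # on every node; path is an example
  mkdir /gluster_bricks/engine/engine

  # Recreate it as replica 3 across the reinstalled nodes.
  gluster volume create engine replica 3 \
      node1:/gluster_bricks/engine/engine \
      node2:/gluster_bricks/engine/engine \
      node3:/gluster_bricks/engine/engine
  gluster volume start engine

  # Step 3 (sketch): deploy a fresh hosted engine and point the installer at the
  # recreated volume when it asks for storage.
  hosted-engine --deploy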

+Tal Nisan, +Michal Skrivanek, any thoughts on this flow?


 
>  
> Do you see any issues with the plan?
> Any problems expected if the VMs do have snapshots? What about the storage
> domain version?
> 
> Thanks in advance.
> 
> Best Regards,
> Strahil Nikolov
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QNPOH55AAAYOX5GX3EN5H5ZMOZHKYELI/
> 


-- 
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA
sbona...@redhat.com

Red Hat respects your work life balance. Therefore there is no need to answer 
this email out of your office hours.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2J64WKCACCYVJX2QT6E6LUSGMYD24GDN/


[ovirt-users] Re: Gluster error in server log.

2020-06-05 Thread Strahil Nikolov via Users
Have you tried restarting the engine?


Best Regards,
Strahil Nikolov

On Friday, 5 June 2020 at 11:56:37 GMT+3, Krist van Besien wrote:


Hello all,

On my ovirt HC cluster I constantly get the following kinds of errors:

From /var/log/ovirt-engine/engine.log

2020-06-05 10:38:36,652+02 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-15) [] 
Could not associate brick 'on1.ws.kri.st:/gluster_bricks/vmstore/vmstore' of 
volume 'dab47af2-16fc-461d-956e-daab00c8489e' with correct network as no 
gluster network found in cluster 'a8a38ffe-a499-11ea-9471-00163e5ffe63'

Normally you get these errors if you forgot to define and assign a storage 
network. However I do have a storage network, have assigned it to the interface 
used by gluster on each host, and have set it as the “gluster” network in the 
default cluster.
So why am I still getting this error?
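
A few checks that may help narrow this down. One possibility (an assumption,
not a confirmed diagnosis) is a mismatch between the address the brick hostname
resolves to and the addresses on the gluster-role network; another is simply
stale engine state, which is why restarting the engine as suggested above is
worth trying. The hostname and volume name below are taken from the log line:

  # Which address does the brick host resolve to, and does it sit on the
  # interface that carries the gluster-role logical network?
  getent hosts on1.ws.kri.st
  ip -br addr show

  # Were the peers probed over the storage-network names/addresses?
  gluster peer status
  gluster volume info vmstore | grep -i brick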

Krist


Vriendelijke Groet | Best Regards | Freundliche Grüße | Cordialement
Krist van Besien
krist.vanbes...@gmail.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/45QG4KD5SWWJH6Y5UBF3G33BYQKOUXXF/


[ovirt-users] Re: CentOS Stream support

2020-06-05 Thread Strahil Nikolov via Users
Hi Michal,

Thanks for raising that topic.
I personally believe that CentOS Stream will be something between Fedora
and RHEL, and thus it won't be as stable as I would wish.
Yet on the other hand, if this speeds up bug fixing, I am OK with that.

P.S.: I'm still on 4.3, but I was planning to switch to regular CentOS instead
of Stream.


Best Regards,
Strahil Nikolov


On Friday, 5 June 2020 at 11:37:16 GMT+3, Michal Skrivanek wrote:

Hi all,
we would like to ask about the interest in the community in oVirt moving to 
CentOS Stream.
There were some requests before, but it’s hard to see how many people would 
really like to see that.

With CentOS releases lagging behind RHEL for months it’s interesting to 
consider moving to CentOS Stream, as it is much more up to date and allows us 
to fix bugs faster, with fewer workarounds and less overhead for maintaining 
old code. E.g. our current integration tests do not really pass on CentOS 8.1 
and we can’t really do much about that other than wait for more up-to-date 
packages. It would also bring us closer to making oVirt run smoothly on RHEL, 
as RHEL is much closer to Stream than it is to the outdated CentOS.

So... would you like us to support CentOS Stream?
We don’t really have the capacity to run 3 different platforms, so would you 
still want oVirt to support CentOS Stream if it means “less support” for 
regular CentOS?
There are some concerns about Stream being a bit less stable; do you share 
those concerns?

Thank you for your comments,
michal
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDUHBY43TCKRRLKFQYDB4BNSJOEGOY2L/


[ovirt-users] Re: Tasks stuck waiting on another after failed storage migration (yet not visible on SPM)

2020-06-05 Thread David Sekne
Hello,

It looks like this was the problem indeed.

I have the migration policy set to post-copy (I thought this was relevant
only to VM migration and not disk migration) and had
libvirt-4.5.0-23.el7_7.6.x86_64 on the problematic hosts. Restarting VDSM
after the migration indeed resolved the issue.

This issue only appeared during disk moves for me.

I have since updated all of the hosts (libvirt-4.5.0-33.el7_8.1.x86_64) and
have not noticed the issue again.
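
For anyone hitting the same symptom, a minimal sketch of the check and the
workaround described above (package and service names as found on EL7 hosts;
adjust to your environment):

  # Is the host still on one of the affected libvirt 4.5.0 builds? (see BZ#1774230)
  rpm -q libvirt libvirt-daemon-kvm

  # Workaround after a post-copy live migration: restart VDSM on that host so
  # it picks the VM back up and resumes monitoring its block jobs.
  systemctl restart vdsmd

Restarting vdsmd normally leaves running VMs untouched, though the host may
briefly show as non-responsive in the engine, so do one host at a time.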

Thank you again.

Regards,

On Mon, Jun 1, 2020 at 6:53 PM Benny Zlotnik  wrote:

> Sorry for the late reply, but you may have hit this bug [1]; I forgot about
> it.
> The bug happens when you live migrate a VM in post-copy mode: vdsm
> stops monitoring the VM's jobs.
> The root cause is an issue in libvirt, so it depends on which libvirt
> version you have.
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1774230
>
> On Fri, May 29, 2020 at 3:54 PM David Sekne  wrote:
> >
> > Hello,
> >
> > I tried the live migrate as well and it didn't help (it failed).
> >
> > The VM disks were in an illegal state, so I ended up restoring the VM from
> > backup (it was the least complex solution in my case).
> >
> > Thank you both for the help.
> >
> > Regards,
> >
> > On Thu, May 28, 2020 at 5:01 PM Strahil Nikolov 
> wrote:
> >>
> >> I used  to have a similar issue and when I live migrated  (from 1  host
> to another)  it  automatically completed.
> >>
> >> Best Regards,
> >> Strahil Nikolov
> >>
> >> На 27 май 2020 г. 17:39:36 GMT+03:00, Benny Zlotnik <
> bzlot...@redhat.com> написа:
> >> >Sorry, by overloaded I meant in terms of I/O, because this is an
> >> >active layer merge, the active layer
> >> >(aabf3788-8e47-4f8b-84ad-a7eb311659fa) is merged into the base image
> >> >(a78c7505-a949-43f3-b3d0-9d17bdb41af5), before the VM switches to use
> >> >it as the active layer. So if there is constantly additional data
> >> >written to the current active layer, vdsm may have trouble finishing
> >> >the synchronization
> >> >
> >> >
> >> >On Wed, May 27, 2020 at 4:55 PM David Sekne 
> >> >wrote:
> >> >>
> >> >> Hello,
> >> >>
> >> >> Yes, no problem. XML is attached (I ommited the hostname and IP).
> >> >>
> >> >> Server is quite big (8 CPU / 32 Gb RAM / 1 Tb disk) yet not
> >> >overloaded. We have multiple servers with the same specs with no
> >> >issues.
> >> >>
> >> >> Regards,
> >> >>
> >> >> On Wed, May 27, 2020 at 2:28 PM Benny Zlotnik 
> >> >wrote:
> >> >>>
> >> >>> Can you share the VM's xml?
> >> >>> Can be obtained with `virsh -r dumpxml `
> >> >>> Is the VM overloaded? I suspect it has trouble converging
> >> >>>
> >> >>> taskcleaner only cleans up the database, I don't think it will help
> >> >here
> >> >>>
> >> >___
> >> >Users mailing list -- users@ovirt.org
> >> >To unsubscribe send an email to users-le...@ovirt.org
> >> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >> >oVirt Code of Conduct:
> >> >https://www.ovirt.org/community/about/community-guidelines/
> >> >List Archives:
> >> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HX4QZDIKXH7ETWPDNI3SKZ535WHBXE2V/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3RNQF6HLPIPXVCCLLROG65DS7RDAQJCH/


[ovirt-users] Gluster error in server log.

2020-06-05 Thread Krist van Besien
Hello all,

On my ovirt HC cluster I constantly get the following kinds of errors:

From /var/log/ovirt-engine/engine.log

2020-06-05 10:38:36,652+02 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-15) [] 
Could not associate brick 'on1.ws.kri.st:/gluster_bricks/vmstore/vmstore' of 
volume 'dab47af2-16fc-461d-956e-daab00c8489e' with correct network as no 
gluster network found in cluster 'a8a38ffe-a499-11ea-9471-00163e5ffe63'

Normally you get these errors if you forgot to define and assign a storage 
network. However I do have a storage network, have assigned it to the interface 
used by gluster on each host, and have set it as the “gluster” network in the 
default cluster.
So why am I still getting this error?

Krist

Vriendelijke Groet | Best Regards | Freundliche Grüße | Cordialement
Krist van Besien
krist.vanbes...@gmail.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WGKRR6TZCJCZB4GB23JWS4QPIX6Q34T4/


[ovirt-users] Re: [Gluster-users] Re: Single instance scaleup.

2020-06-05 Thread Krist van Besien
Hi all.

I actually did something like that myself.

I started out with a single node HC cluster. I then added another node (and 
plan to add a third). This is what I did:

1) Set up the new node. Make sure that you have all dependencies. (In my case I 
started with a CentOS 8 machine and installed vdsm-gluster and gluster-ansible.)
2) Configure the bricks. For this I just copied hc_wizard_inventory.yml over 
from the first node, edited it to fit the second node, and ran the 
gluster.infra role.
3) Expand the volume. In this case with the following command:
gluster volume add-brick engine replica 2 :/gluster_bricks/engine/engine
4) Now just add the host as a hypervisor using the management console.

I plan on adding a third node. Then I want to have full replica on the engine, 
and replica 2 + arbiter on the vmstore volume.
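
As a rough illustration of that third-node step (hostnames and brick paths are
placeholders; double-check the current layout with gluster volume info first):

  # Grow the engine volume to a full replica 3.
  gluster volume add-brick engine replica 3 \
      node3:/gluster_bricks/engine/engine

  # Grow vmstore to replica 3 with an arbiter (the last brick listed becomes the arbiter).
  gluster volume add-brick vmstore replica 3 arbiter 1 \
      node3:/gluster_bricks/vmstore/vmstore

  # Let self-heal finish populating the new bricks before doing anything else.
  gluster volume heal engine info summary
  gluster volume heal vmstore info summary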

Expanding gluster volumes, migrating from distributed to replicated and even 
replacing bricks etc. is rather easy in Gluster once you know how it works. I 
have even replaced all the servers on a live gluster cluster, without service 
interruption…

Krist

On Jul 18, 2019, 09:58 +0200, Leo David wrote:
> Hi,
> Looks like the only way arround would be to create a brand-new volume as 
> replicated on other disks, and start moving the vms all around the place 
> between volumes ?
> Cheers,
>
> Leo
>
> > On Mon, May 27, 2019 at 1:53 PM Leo David  wrote:
> > > Hi,
> > > Any suggestions ?
> > > Thank you very much !
> > >
> > > Leo
> > >
> > > > On Sun, May 26, 2019 at 4:38 PM Strahil Nikolov  
> > > > wrote:
> > > > > Yeah,
> > > > > it seems different from the docs.
> > > > > I'm adding the gluster-users list, as they are more experienced in 
> > > > > that.
> > > > >
> > > > > @Gluster-users,
> > > > >
> > > > > can you provide some hints on how to add additional replicas to the below 
> > > > > volumes, so they become 'replica 2 arbiter 1' or 'replica 3' type 
> > > > > volumes?
> > > > >
> > > > >
> > > > > Best Regards,
> > > > > Strahil Nikolov
> > > > >
> > > > > On Sunday, 26 May 2019 at 15:16:18 GMT+3, Leo David wrote:
> > > > >
> > > > >
> > > > > Thank you Strahil,
> > > > > The engine and ssd-samsung are distributed...
> > > > > So these are the ones that I need to have replicated across new 
> > > > > nodes.
> > > > > I am not very sure about the procedure to accomplish this.
> > > > > Thanks,
> > > > >
> > > > > Leo
> > > > >
> > > > > On Sun, May 26, 2019, 13:04 Strahil  wrote:
> > > > > > Hi Leo,
> > > > > > As you do not have a distributed volume , you can easily switch to 
> > > > > > replica 2 arbiter 1 or replica 3 volumes.
> > > > > > You can use the following for adding the bricks:
> > > > > > https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Expanding_Volumes.html
> > > > > > Best Regards,
> > > > > > Strahil Nikoliv
> > > > > > On May 26, 2019 10:54, Leo David  wrote:
> > > > > > > Hi Strahil,
> > > > > > > Thank you so much for your input!
> > > > > > >
> > > > > > >  gluster volume info
> > > > > > >
> > > > > > >
> > > > > > > Volume Name: engine
> > > > > > > Type: Distribute
> > > > > > > Volume ID: d7449fc2-cc35-4f80-a776-68e4a3dbd7e1
> > > > > > > Status: Started
> > > > > > > Snapshot Count: 0
> > > > > > > Number of Bricks: 1
> > > > > > > Transport-type: tcp
> > > > > > > Bricks:
> > > > > > > Brick1: 192.168.80.191:/gluster_bricks/engine/engine
> > > > > > > Options Reconfigured:
> > > > > > > nfs.disable: on
> > > > > > > transport.address-family: inet
> > > > > > > storage.owner-uid: 36
> > > > > > > storage.owner-gid: 36
> > > > > > > features.shard: on
> > > > > > > performance.low-prio-threads: 32
> > > > > > > performance.strict-o-direct: off
> > > > > > > network.remote-dio: off
> > > > > > > network.ping-timeout: 30
> > > > > > > user.cifs: off
> > > > > > > performance.quick-read: off
> > > > > > > performance.read-ahead: off
> > > > > > > performance.io-cache: off
> > > > > > > cluster.eager-lock: enable
> > > > > > > Volume Name: ssd-samsung
> > > > > > > Type: Distribute
> > > > > > > Volume ID: 76576cc6-220b-4651-952d-99846178a19e
> > > > > > > Status: Started
> > > > > > > Snapshot Count: 0
> > > > > > > Number of Bricks: 1
> > > > > > > Transport-type: tcp
> > > > > > > Bricks:
> > > > > > > Brick1: 192.168.80.191:/gluster_bricks/sdc/data
> > > > > > > Options Reconfigured:
> > > > > > > cluster.eager-lock: enable
> > > > > > > performance.io-cache: off
> > > > > > > performance.read-ahead: off
> > > > > > > performance.quick-read: off
> > > > > > > user.cifs: off
> > > > > > > network.ping-timeout: 30
> > > > > > > network.remote-dio: off
> > > > > > > performance.strict-o-direct: on
> > > > > > > performance.low-prio-threads: 32
> > > > > > > features.shard: on
> > > > > > > storage.owner-gid: 36
> > > > > > > storage.owner-uid: 36
> > > > > > > transport.address-family: inet
> > > > > > > nfs.disable: on
> > > > > > >
> > > > > > > The other two hosts will be 192.

[ovirt-users] CentOS Stream support

2020-06-05 Thread Michal Skrivanek
Hi all,
we would like to ask about the interest in the community in oVirt moving to 
CentOS Stream.
There were some requests before, but it’s hard to see how many people would 
really like to see that.

With CentOS releases lagging behind RHEL for months it’s interesting to 
consider moving to CentOS Stream, as it is much more up to date and allows us 
to fix bugs faster, with fewer workarounds and less overhead for maintaining 
old code. E.g. our current integration tests do not really pass on CentOS 8.1 
and we can’t really do much about that other than wait for more up-to-date 
packages. It would also bring us closer to making oVirt run smoothly on RHEL, 
as RHEL is much closer to Stream than it is to the outdated CentOS.

So... would you like us to support CentOS Stream?
We don’t really have the capacity to run 3 different platforms, so would you 
still want oVirt to support CentOS Stream if it means “less support” for 
regular CentOS?
There are some concerns about Stream being a bit less stable; do you share 
those concerns?

Thank you for your comments,
michal
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3B5MJKO7BS2DMQL3XOXPNO4BU3YDL52T/


[ovirt-users] Re: Cannot start ppc64le VM's

2020-06-05 Thread Michal Skrivanek


> On 5 Jun 2020, at 08:19, Vinícius Ferrão via Users  wrote:
> 
> Hello, I’m trying to run ppc64le VM’s on POWER9 but qemu-kvm fails 
> complaining about NUMA issues:

That is not the line you should be looking at; it’s just a harmless warning.
I suppose it’s the other one, about the Spectre fixes.
> 
> VM ppc64le.local.versatushpc.com.br is down with error. Exit message: 
> internal error: qemu unexpectedly closed the monitor: 
> 2020-06-05T06:16:10.716052Z qemu-kvm: warning: CPU(s) not present in any NUMA 
> nodes: CPU 4 [core-id: 4], CPU 5 [core-id: 5], CPU 6 [core-id: 6], CPU 7 
> [core-id: 7], CPU 8 [core-id: 8], CPU 9 [core-id: 9], CPU 10 [core-id: 10], 
> CPU 11 [core-id: 11], CPU 12 [core-id: 12], CPU 13 [core-id: 13], CPU 14 
> [core-id: 14], CPU 15 [core-id: 15] 2020-06-05T06:16:10.716067Z qemu-kvm: 
> warning: All CPU(s) up to maxcpus should be described in NUMA config, ability 
> to start up with partial NUMA mappings is obsoleted and will be removed in 
> future 2020-06-05T06:16:11.155924Z qemu-kvm: Requested safe indirect branch 
> capability level not supported by kvm, try cap-ibs=fixed-ibs.
> 
> Any idea of what’s happening?
> 
> I found some links, but I’m not sure if they are related or not:
> https://bugzilla.redhat.com/show_bug.cgi?id=1732726
> https://bugzilla.redhat.com/show_bug.cgi?id=1592648
> 
Yes, they look relevant if that’s the hardware you have. We do use the 
pseries-rhel7.6.0-sxxm machine type in 4.3 (not in 4.4; upgrading would be the 
preferred solution).
If you don’t care about security you can also modify the machine type per VM 
(or in the engine db for all VMs) to "pseries-rhel7.6.0".

Thanks,
michal
> 
> Thanks,
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PVVQDBO2XJYBQN7EUDMM74QZJ2UTLRJ2/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UHZGIBF6QPKO7GWYELULQGRZKYLUMLCK/