[ovirt-users] Re: [EXT] Re: oVirt alternatives

2022-02-05 Thread wk

Has anyone looked at OpenNebula?

https://opennebula.io/

Seems to be libvirt based.

-wk

On 2/5/22 5:03 AM, marcel d'heureuse wrote:

Moin,

We will take a look at Proxmox.
Hyper-V is also EOL if Server 2022 is the standard.


Br
Marcel

On 5 February 2022 at 13:40:30 CET, Thomas Hoberg wrote:


There is unfortunately no formal announcement on the fate of oVirt, but 
with RHGS and RHV having a known end-of-life, oVirt may well shut down in Q2.

So it's time to hunt for an alternative for those of us who came to oVirt 
because we had already rejected vSAN or Nutanix.

Let's post what we find here in this thread.



[ovirt-users] Re: Glusterfs and vm's

2021-05-08 Thread WK


Sent from my iPad

> On May 7, 2021, at 3:06 PM, eev...@digitaldatatechs.com wrote:
> 
> This helps RHEL and CentOS machines utilize glusterfs and actually speeds the 
> VM up.
> I hope this will help someone. If you want the URL for the article, just ask. 

I (and others) would appreciate the URL.

-wk


[ovirt-users] Re: CentOS 8 is dead

2020-12-08 Thread WK


On 12/8/2020 12:20 PM, Michael Watters wrote:

This was one of my fears regarding the IBM acquisition.  I guess we
can't complain too much, it's not like anybody *pays* for CentOS.  :)


yes, but "we" do provide feedback and bug reports from a LOT of 
different environments which directly helps RHEL. That is not an 
insignificant benefit to IBM.


I'm sure IBM will pick up a few paid RHEL licenses with this move, but 
I'm not sure the amount will be material enough to show up on the income 
statement. Experienced admins can easily adapt to Debian/Ubuntu/Suse etc.


In contrast, they lose the projects that started off with CentOS but 
switched to paid RHEL support when they had special needs or the 
production environment dictated that they have a 'real' license with 
support. We have a few customers who did precisely that.



[ovirt-users] Re: Gluster volume slower then raid1 zpool speed

2020-11-25 Thread WK

No, that doesn't look right.

I have a testbed cluster that has a single 1G network (1500 MTU).

It is replica 2 + arbiter on top of 7200 rpm spinning drives formatted 
with XFS.


This cluster runs Gluster 6.10 on Ubuntu 18 on some Dell i5-2xxx boxes 
that were lying around.


It uses the stock 'virt' group tuning, which provides the following:

root@onetest2:~/datastores/101# cat /var/lib/glusterd/groups/virt
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
cluster.eager-lock=enable
cluster.quorum-type=auto
cluster.server-quorum-type=server
cluster.data-self-heal-algorithm=full
cluster.locking-scheme=granular
cluster.shd-max-threads=8
cluster.shd-wait-qlength=10000
features.shard=on
user.cifs=off
cluster.choose-local=off
client.event-threads=4
server.event-threads=4
performance.client-io-threads=on

I show the following results for your test. Note: the cluster is actually 
doing some work, with 3 VMs running monitoring tasks.


The bare metal performance is as follows:

root@onetest2:/# dd if=/dev/zero of=/test12.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.0783 s, 96.9 MB/s
root@onetest2:/# dd if=/dev/zero of=/test12.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.5047 s, 93.3 MB/s

Moving over to the Gluster mount I show the following:

root@onetest2:~/datastores/101# dd if=/dev/zero of=/test12.img bs=1G 
count=1 oflag=dsync

1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.4582 s, 93.7 MB/s
root@onetest2:~/datastores/101# dd if=/dev/zero of=/test12.img bs=1G 
count=1 oflag=dsync

1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.2034 s, 88.0 MB/s

So there is a small performance hit with Gluster, but it is almost 
insignificant given that other things were going on.


I don't know if you are in a VM environment, but if so you could try the 
virt tuning:


gluster volume set VOLUME group virt
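
A minimal sketch for confirming the tuning actually applied, assuming a 
volume name of VOLUME (substitute your own):

gluster volume info VOLUME                 # lists the reconfigured options
gluster volume get VOLUME features.shard   # spot-check a single option value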

Unfortunately, I know little about ZFS so I can't comment on its 
performance, but your gluster results should be closer to the bare metal 
performance.


Also note I am using an arbiter, so that is less work than replica 3. 
With a true replica 3 I would expect the Gluster results to be lower, 
maybe in the 60-70 MB/s range.


-wk


On 11/25/2020 2:29 AM, Harry O wrote:

Unfortunately I didn't get any improvement by upgrading the network.

Bare metal (zfs raid1 zvol):
dd if=/dev/zero of=/gluster_bricks/test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 15.6471 s, 68.6 MB/s

Centos VM on gluster volume:
dd if=/dev/zero of=/test12.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 36.8618 s, 29.1 MB/s

Does this performance look normal?


[ovirt-users] Re: Gluster volume slower then raid1 zpool speed

2020-11-23 Thread WK


On 11/23/2020 5:56 AM, Harry O wrote:

Hi,
Can anyone help me with the performance of my 3-node Gluster on ZFS (it is 
set up with one arbiter)?
The write performance of the single VM I have on it (with the engine) is 50% 
worse than a single bare-metal disk.
I have enabled "Optimize for virt store".
I run a 1 Gbps, 1500 MTU network; could this be the write performance killer?



It usually is.

Remember that the data has to be written to both the local and the other 
nodes (though in the case of the arbiter it's just the metadata).


So 1 Gb/s is going to be slower than local SATA speed.

This is not a Gluster issue. You will find it with all distributed file 
systems.




Is this to be expected from a 2xHDD ZFS RAID 1 on each node, with a 3-node 
arbiter setup?
Maybe I should move to RAID 5 or 6?
Maybe I should add an SSD cache to the RAID 1 ZFS zpools?
What are your thoughts? What should I do to optimize this setup?
I would like to run ZFS with Gluster and I can deal with a little performance 
loss, but not that much.


You don't mention numbers, so we don't know your definition of a 
"little" loss. There IS tuning that can be done in Gluster, but the 1G 
network is going to be the bottleneck in your current setup.


Consider adding Ethernet cards/ports and using bonding (or teamd).

I am a fan of teamd, which ships with the Red Hat and Ubuntu distros. 
It's very easy to set up and manage, and as a bonus you get some high 
availability.


https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/ch-configure_network_teaming

You *will* see an immediate improvement.

MTU 9000 (jumbo frames) can also help a bit.
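
A rough sketch of the teaming plus jumbo-frames idea using nmcli (the 
interface names eno1/eno2, the team name team0, and the address are 
assumptions; adjust for your hardware, and the switch ports must also 
allow MTU 9000):

# create the team interface with the loadbalance runner
nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "loadbalance"}}'
# enslave two physical NICs to the team
nmcli con add type team-slave con-name team0-port1 ifname eno1 master team0
nmcli con add type team-slave con-name team0-port2 ifname eno2 master team0
# optional: jumbo frames on the team interface
nmcli con mod team0 802-3-ethernet.mtu 9000
# static addressing for the storage network, then bring it up
nmcli con mod team0 ipv4.addresses 192.168.1.10/24 ipv4.method manual
nmcli con up team0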

Of course 10G or better networking would be optimal.

-wk






[ovirt-users] Re: How to make oVirt + GlusterFS bulletproof

2020-10-08 Thread WK
Are you using JBOD bricks, or do you have some sort of RAID for each of 
the bricks?


Are you using sharding?
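
If you are not sure, a quick way to check (VOLUME is a placeholder for your 
volume name):

gluster volume get VOLUME features.shard    # prints on/off for sharding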

-wk

On 10/8/2020 6:11 AM, Jarosław Prokopowski wrote:

Hi Jayme, there is a UPS but the outages happened anyway. We also have a 
Raritan KVM, but it is not supported by oVirt.
The setup is 6 hosts - two groups of 3 hosts, each using one replica 3 volume.
BTW what would be the best Gluster volume solution for 6+ hosts?
  


[ovirt-users] Re: CEPH - Opinions and ROI

2020-10-02 Thread WK
Yes, we manage a number of Distributed Storage systems including 
MooseFS, Ceph, DRBD and of course Gluster (since 3.3). Each has a 
specific use.


For small customer-specific VM host clusters, which is the majority of 
what we do, Gluster is by far the safest and easiest to 
deploy/understand for the more junior members of the team. We have never 
lost a VM image on Gluster, which can't be said about the others 
(including CEPH but that disaster was years ago and somewhat 
self-inflicted). The point is that it is hard to shoot yourself in the foot 
with Gluster.


The newer innovations in Gluster, such as sharding and the arbiter node, 
have allowed it to be competitive on the performance/hassle factor.


Our Ceph cluster is on one of the few larger host installations we have 
and is mostly handled by a more senior tech who has lots of experience 
with it. He clearly loves it and doesn't understand why we aren't fans 
but it just seems to be overkill for the typical 3 host VM cluster. The 
rest of us worry about him getting hit by a bus.


For the record, I really like MooseFS, but not for live VMs; we use it 
for archiving, and it is the easiest to maintain as long as you are 
paranoid about the "master" server, which provides the metadata index for 
the chunkserver nodes.


My hope for Gluster is that it is able to continue to improve with some 
of the new ideas such as the thin-arbiter and keep that 
performance/hassle ratio high.


My worry is that IBM/Red Hat makes more money on Ceph consulting than on 
Gluster, and thus contributes to the idea that Gluster is a deprecated 
technology.




On 10/1/2020 7:53 AM, Strahil Nikolov via Users wrote:

CEPH requires at least 4 nodes to be "good".
I know that Gluster is not the "favourite child" for most vendors, yet it is 
still optimal for HCI.

You can check 
https://www.ovirt.org/develop/release-management/features/storage/cinder-integration.html
 for cinder integration.

Best Regards,
Strahil Nikolov






[ovirt-users] Re: CVE-2018-3639 - Important - oVirt - Speculative Store Bypass

2018-05-23 Thread WK



On 5/23/2018 7:57 AM, Sandro Bonazzola wrote:



Please note that to fully mitigate this vulnerability, system 
administrators must apply both hardware “microcode” updates and 
software patches that enable new functionality.
At this time, microprocessor microcode will be delivered by the 
individual manufacturers.





Intel has been promising microcode updates since January, when Spectre 
first appeared, and yet except for the very newest CPUs we haven't seen 
anything. In the case of older CPUs, I wonder if we are ever going to 
see anything, even if Intel has it on their "roadmap".


Can someone shed some light on the vulnerability at this time, given that 
we have no microcode update but all kernel/OS updates applied, which 
supposedly handle the original Meltdown and some Spectre variants?


1) Does the unpatched microcode exploit require "root" permissions?

2) Do the existing libvirt/qemu patches prevent a user "root" or 
"otherwise" in a VM from snooping on other VMs and/or the host?


Sincerely,

-wk



Re: [ovirt-users] oVirt 4.1.9 and Spectre-Meltdown checks

2018-01-26 Thread WK

Updated info:

https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/microcode-update-guidance.pdf

Looks like Intel is now committing to support Sandy/Ivy Bridge.

No mention of Westmere or earlier as of yet  :-(
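
As a sketch, you can check which microcode revision a host is actually 
running and compare it against Intel's guidance PDF above (package names 
are the RHEL/CentOS ones; adjust for other distros):

grep -m1 microcode /proc/cpuinfo    # revision the CPU is running right now
dmesg | grep -i microcode           # early/late microcode loads at boot
rpm -q microcode_ctl                # package that ships Intel microcode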


On 1/26/2018 10:13 AM, WK wrote:


That CPU is an X5690, which is Westmere class. We have a number of 
those doing 'meatball' application loads that don't need the latest 
and greatest CPU.


I do not believe the microcode fix for Westmere is out yet, and it 
may never be.


Intel has, so far, promised fixes for Haswell or better (i.e. CPUs 
from the last 5 years) with a vague mention of other CPUs on a 
'customer need' basis.


Westmere is circa 2010 and came out before Sandy/Ivy Bridge so we 
don't know when or if they will be fixed, but probably only after the 
Sandy/Ivy Bridges get theirs.


-wk




On 1/26/2018 1:50 AM, Gianluca Cecchi wrote:

Hello,
nice to see integration of Spectre-Meltdown info in 4.1.9, both for 
guests and hosts, as detailed in release notes:


I have upgraded my CentOS 7.4 engine VM (outside of oVirt cluster) 
and one oVirt host to 4.1.9.


Now in General -> Software subtab of the host I see:

OS Version: RHEL - 7 - 4.1708.el7.centos
OS Description: CentOS Linux 7 (Core)
Kernel Version: 3.10.0 - 693.17.1.el7.x86_64
Kernel Features: IBRS: 0, PTI: 1, IBPB: 0

Am I supposed to manually set any particular value?

If I run version 0.32 (updated yesterday) 
of spectre-meltdown-checker.sh I got this on my Dell M610 blade with


        Version: 6.4.0
        Release Date: 07/18/2013

[root@ov200 ~]# /home/g.cecchi/spectre-meltdown-checker.sh
Spectre and Meltdown mitigation detection tool v0.32

Checking for vulnerabilities on current system
Kernel is Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 
UTC 2018 x86_64

CPU is Intel(R) Xeon(R) CPU           X5690  @ 3.47GHz

Hardware check
* Hardware support (CPU microcode) for mitigation techniques
  * Indirect Branch Restricted Speculation (IBRS)
    * SPEC_CTRL MSR is available:  NO
    * CPU indicates IBRS capability:  NO
  * Indirect Branch Prediction Barrier (IBPB)
    * PRED_CMD MSR is available:  NO
    * CPU indicates IBPB capability:  NO
  * Single Thread Indirect Branch Predictors (STIBP)
    * SPEC_CTRL MSR is available:  NO
    * CPU indicates STIBP capability:  NO
  * Enhanced IBRS (IBRS_ALL)
    * CPU indicates ARCH_CAPABILITIES MSR availability:  NO
    * ARCH_CAPABILITIES MSR advertises IBRS_ALL capability:  NO
  * CPU explicitly indicates not being vulnerable to Meltdown 
(RDCL_NO):  NO

* CPU vulnerability to the three speculative execution attacks variants
  * Vulnerable to Variant 1:  YES
  * Vulnerable to Variant 2:  YES
  * Vulnerable to Variant 3:  YES

CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
* Checking count of LFENCE opcodes in kernel:  YES
> STATUS:  NOT VULNERABLE  (107 opcodes found, which is >= 70, 
heuristic to be improved when official patches become available)


CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
* Mitigation 1
  * Kernel is compiled with IBRS/IBPB support:  YES
  * Currently enabled features
    * IBRS enabled for Kernel space:  NO  (echo 1 > 
/sys/kernel/debug/x86/ibrs_enabled)
    * IBRS enabled for User space:  NO  (echo 2 > 
/sys/kernel/debug/x86/ibrs_enabled)

    * IBPB enabled:  NO  (echo 1 > /sys/kernel/debug/x86/ibpb_enabled)
* Mitigation 2
  * Kernel compiled with retpoline option:  NO
  * Kernel compiled with a retpoline-aware compiler: NO
  * Retpoline enabled:  NO
> STATUS:  VULNERABLE  (IBRS hardware + kernel support OR kernel with 
retpoline are needed to mitigate the vulnerability)


CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
* Kernel supports Page Table Isolation (PTI):  YES
* PTI enabled and active:  YES
* Running as a Xen PV DomU:  NO
> STATUS:  NOT VULNERABLE  (PTI mitigates the vulnerability)

A false sense of security is worse than no security at all, see 
--disclaimer

[root@ov200 ~]#

So it seems I'm still vulnerable only to Variant 2, but the kernel seems OK:

  * Kernel is compiled with IBRS/IBPB support:  YES

while the BIOS is not, correct?

Is RH EL / CentOS expected to follow the retpoline option too, to 
mitigate Variant 2, as done by Fedora for example?


Eg on my just updated Fedora 27 laptop I get now:

[g.cecchi@ope46 spectre_meltdown]$ sudo ./spectre-meltdown-checker.sh
[sudo] password for g.cecchi:
Spectre and Meltdown mitigation detection tool v0.32

Checking for vulnerabilities on current system
Kernel is Linux 4.14.14-300.fc27.x86_64 #1 SMP Fri Jan 19 13:19:54 
UTC 2018 x86_64

CPU is Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz

Hardware check
* Hardware support (CPU microcode) for mitigation techniques
  * Indirect Branch Restricted Speculation (IBRS)
    * SPEC_CTRL MSR is available:  NO
    * CPU indicates IBRS capability:  NO
  * Indirect Branch Prediction Barrier (IB

Re: [ovirt-users] oVirt 4.1.9 and Spectre-Meltdown checks

2018-01-26 Thread WK
That CPU is an X5690, which is Westmere class. We have a number of those 
doing 'meatball' application loads that don't need the latest and greatest CPU.


I do not believe the microcode fix for Westmere is out yet, and it 
may never be.


Intel has, so far, promised fixes for Haswell or better (i.e. CPUs from 
the last 5 years) with a vague mention of other CPUs on a 'customer need' 
basis.


Westmere is circa 2010 and came out before Sandy/Ivy Bridge so we don't 
know when or if they will be fixed, but probably only after the 
Sandy/Ivy Bridges get theirs.


-wk




On 1/26/2018 1:50 AM, Gianluca Cecchi wrote:

Hello,
nice to see integration of Spectre-Meltdown info in 4.1.9, both for 
guests and hosts, as detailed in release notes:


I have upgraded my CentOS 7.4 engine VM (outside of oVirt cluster) and 
one oVirt host to 4.1.9.


Now in General -> Software subtab of the host I see:

OS Version: RHEL - 7 - 4.1708.el7.centos
OS Description: CentOS Linux 7 (Core)
Kernel Version: 3.10.0 - 693.17.1.el7.x86_64
Kernel Features: IBRS: 0, PTI: 1, IBPB: 0

Am I supposed to manually set any particular value?

If I run version 0.32 (updated yesterday) 
of spectre-meltdown-checker.sh I got this on my Dell M610 blade with


        Version: 6.4.0
        Release Date: 07/18/2013

[root@ov200 ~]# /home/g.cecchi/spectre-meltdown-checker.sh
Spectre and Meltdown mitigation detection tool v0.32

Checking for vulnerabilities on current system
Kernel is Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 
UTC 2018 x86_64

CPU is Intel(R) Xeon(R) CPU           X5690  @ 3.47GHz

Hardware check
* Hardware support (CPU microcode) for mitigation techniques
  * Indirect Branch Restricted Speculation (IBRS)
    * SPEC_CTRL MSR is available:  NO
    * CPU indicates IBRS capability:  NO
  * Indirect Branch Prediction Barrier (IBPB)
    * PRED_CMD MSR is available:  NO
    * CPU indicates IBPB capability:  NO
  * Single Thread Indirect Branch Predictors (STIBP)
    * SPEC_CTRL MSR is available:  NO
    * CPU indicates STIBP capability:  NO
  * Enhanced IBRS (IBRS_ALL)
    * CPU indicates ARCH_CAPABILITIES MSR availability: NO
    * ARCH_CAPABILITIES MSR advertises IBRS_ALL capability:  NO
  * CPU explicitly indicates not being vulnerable to Meltdown 
(RDCL_NO):  NO

* CPU vulnerability to the three speculative execution attacks variants
  * Vulnerable to Variant 1:  YES
  * Vulnerable to Variant 2:  YES
  * Vulnerable to Variant 3:  YES

CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
* Checking count of LFENCE opcodes in kernel:  YES
> STATUS:  NOT VULNERABLE  (107 opcodes found, which is >= 70, 
heuristic to be improved when official patches become available)


CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
* Mitigation 1
  * Kernel is compiled with IBRS/IBPB support:  YES
  * Currently enabled features
    * IBRS enabled for Kernel space:  NO  (echo 1 > 
/sys/kernel/debug/x86/ibrs_enabled)
    * IBRS enabled for User space:  NO  (echo 2 > 
/sys/kernel/debug/x86/ibrs_enabled)

    * IBPB enabled:  NO  (echo 1 > /sys/kernel/debug/x86/ibpb_enabled)
* Mitigation 2
  * Kernel compiled with retpoline option:  NO
  * Kernel compiled with a retpoline-aware compiler:  NO
  * Retpoline enabled:  NO
> STATUS:  VULNERABLE  (IBRS hardware + kernel support OR kernel with 
retpoline are needed to mitigate the vulnerability)


CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
* Kernel supports Page Table Isolation (PTI):  YES
* PTI enabled and active:  YES
* Running as a Xen PV DomU:  NO
> STATUS:  NOT VULNERABLE  (PTI mitigates the vulnerability)

A false sense of security is worse than no security at all, see 
--disclaimer

[root@ov200 ~]#

So it seems I'm still vulnerable only to Variant 2, but the kernel seems OK:

  * Kernel is compiled with IBRS/IBPB support:  YES

while the BIOS is not, correct?

Is RH EL / CentOS expected to follow the retpoline option too, to 
mitigate Variant 2, as done by Fedora for example?


Eg on my just updated Fedora 27 laptop I get now:

[g.cecchi@ope46 spectre_meltdown]$ sudo ./spectre-meltdown-checker.sh
[sudo] password for g.cecchi:
Spectre and Meltdown mitigation detection tool v0.32

Checking for vulnerabilities on current system
Kernel is Linux 4.14.14-300.fc27.x86_64 #1 SMP Fri Jan 19 13:19:54 UTC 
2018 x86_64

CPU is Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz

Hardware check
* Hardware support (CPU microcode) for mitigation techniques
  * Indirect Branch Restricted Speculation (IBRS)
    * SPEC_CTRL MSR is available:  NO
    * CPU indicates IBRS capability:  NO
  * Indirect Branch Prediction Barrier (IBPB)
    * PRED_CMD MSR is available:  NO
    * CPU indicates IBPB capability:  NO
  * Single Thread Indirect Branch Predictors (STIBP)
    * SPEC_CTRL MSR is available:  NO
    * CPU indicates STIBP capability:  NO
  * Enhanced IBRS (IBRS_ALL)
    * CPU indicates ARC

Re: [ovirt-users] Rebuilding my infra..

2018-01-08 Thread WK



On 1/8/2018 12:38 PM, Johan Bernhardsson wrote:


You can't start the hosted engine storage on anything less than 
replica 3 without changing the installer scripts manually.


For the third it can be pretty much anything capable of running as an 
arbiter.





+1. The arb box can be an old SFF Core 2 Duo with a 40-60 GB SSD for most 
cases. The formula is something like 4 KB of arbiter space for every 
"file". For 10-80 VM disk images that would be really minimal.


-wk


Re: [ovirt-users] hyperconverged question

2017-09-01 Thread WK



On 9/1/2017 8:53 AM, Jim Kusznir wrote:
Huh... OK, how do I convert the arbiter to a full replica, then? I was 
misinformed when I created this setup. I thought the arbiter held 
enough metadata that it could validate or repudiate any one replica 
(kinda like the parity drive for a RAID-4 array). I was also under 
the impression that one replica + arbiter is enough to keep the 
array online and functional.


I cannot speak for the oVirt implementation of Rep2+Arbiter as I've not 
used it, but on a standalone libvirt VM host cluster, Arb does exactly 
what you want. You can lose one of the two replicas and stay online; 
the Arb maintains quorum. Of course, if you lose the second replica 
before you have repaired the first failure you have completely lost your 
data, as the Arb doesn't have it. So Rep2+Arb is not as SAFE as Rep3, 
however it can be faster, especially on less-than-10G networks.
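
To the quoted question itself: the usual approach (a sketch only, not 
tested here and worth checking against the Gluster docs for your version; 
the host names and brick paths are placeholders) is to remove the arbiter 
brick and then add a full data brick back:

gluster volume remove-brick VOLUME replica 2 arbhost:/bricks/arb force   # drop the arbiter
gluster volume add-brick VOLUME replica 3 host3:/bricks/data             # add a full data brick
gluster volume heal VOLUME full                                          # populate the new brick
gluster volume heal VOLUME info                                          # watch the heal finish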


When any node fails, Gluster will pause for 42 seconds or so (it's 
configurable) before marking the bad node as bad. Then normal activity 
will resume.
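
That window is Gluster's network.ping-timeout. As a sketch, it can be 
inspected or lowered per volume (very low values are generally discouraged, 
since brief hiccups then get treated as node failures):

gluster volume get VOLUME network.ping-timeout      # default is 42 seconds
gluster volume set VOLUME network.ping-timeout 20   # example: shorten the window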


On most people's systems the 'pause' (I think it's a read-only event) is 
noticeable, but not enough to cause issues. One person has reported 
that his VMs went read-only during that period, but others have not 
reported that.


-wk


Re: [ovirt-users] Ceph

2014-05-03 Thread WK

On 5/1/14, 7:27 AM, Jeremiah Jahn wrote:

Personally, I'd love to use it. I haven't used it because it hasn't
been part of RedHat's enterprise platform, and didn't want to have to
track updates separately, so we went with gluster for all of our
storage needs as opposed to a mix of both gluster and ceph where
appropriate.





What are the oVirt situations where Gluster works better, and conversely, 
what are the uses where Ceph would work better?


-wk