[ovirt-users] Re: HCI - oVirt for CEPH

2021-04-26 Thread Strahil Nikolov via Users
Due to POSIX compliance, oVirt needs a 512-byte physical sector size. If your 
SSD/NVMe uses the newer 4096-byte standard, you will need to use VDO with the 
'--emulate512' flag (or whatever it was named). If you already have a 512-byte 
physical sector size, you can skip VDO entirely.
For the brick mount options, you can use noatime & inode64.
Also, if you use SELinux, use the 'context=' mount option to tell the kernel to 
skip looking up the SELinux label on every file, since the brick always uses 
the same one.
Also, consider setting the SSD I/O scheduler to 'none' (blk-mq/multiqueue 
should be enabled on EL8 hypervisors by default), which reduces reordering of 
your I/O requests and speeds things up on fast storage. NVMe devices use it by 
default.
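A rough sketch of what that looks like in practice (the device name, mount 
point, exact VDO flag spelling and SELinux context value below are assumptions 
to verify on your own hosts, not copied from an actual oVirt deployment):

# Check the physical vs logical sector size of the SSD:
lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sdb

# If it reports 4096, create the VDO volume with 512-byte emulation
# (legacy 'vdo' manager syntax; confirm the flag with 'vdo create --help'):
vdo create --name=vdo_data00 --device=/dev/sdb --emulate512=enabled

# Brick mount entry in /etc/fstab: noatime, inode64 and a fixed SELinux
# context so the kernel skips per-file label lookups (check 'ls -dZ' on an
# existing brick for the right context value):
/dev/mapper/vdo_data00 /gluster_bricks/data00 xfs noatime,inode64,context="system_u:object_r:glusterd_brick_t:s0" 0 0

# Set the I/O scheduler to 'none' on the SSD (persist it with a udev rule):
echo none > /sys/block/sdb/queue/scheduler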

Best Regards,
Strahil Nikolov
 
 
  On Mon, Apr 26, 2021 at 22:43, penguin pages wrote:   

"...Tuning Gluster with VDO bellow is quite difficult and the overhead of using 
VDO could
reduce performance " Yup. hense creation of a dedicated data00 volume from 
the 1TB SSD each server had.  Matched options listed in oVirt.. but still OCP 
would not address the drive as target for deployment. That is when I opened 
ticket with RH and they noted Gluster is not a supported target for OCP. Hense 
then off to check if we could do CEPH HCI.. nope. 

"..I would try with VDO compression and dedup disabled.If your SSD has 512 byte 
physical..& logical size, you can skip VDO at all to check performance."  
Yes.. VDO removed was/ is next test.  But your note about 512 is yes.. Are 
their tuning parameters for Gluster with this?


"...Also FS mount options are very important for XFS"  - What options do 
you use / recommend?  Do you have a link to said tuning manual page where I 
could review and knowing the base HCI volume is  VDO + XFS + Gluster.  But 
second volume for OCP will be just  XFS + Gluster I would assume this may 
change recommendations.

Thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z22N3SJJPXX5X4FRYK6BJJ4UQUBQGUSC/


[ovirt-users] Re: HCI - oVirt for CEPH

2021-04-26 Thread penguin pages


"...Tuning Gluster with VDO bellow is quite difficult and the overhead of using 
VDO could
reduce performance " Yup. hense creation of a dedicated data00 volume from 
the 1TB SSD each server had.  Matched options listed in oVirt.. but still OCP 
would not address the drive as target for deployment. That is when I opened 
ticket with RH and they noted Gluster is not a supported target for OCP. Hense 
then off to check if we could do CEPH HCI.. nope. 

"..I would try with VDO compression and dedup disabled.If your SSD has 512 byte 
physical..& logical size, you can skip VDO at all to check performance."  
Yes.. VDO removed was/ is next test.  But your note about 512 is yes.. Are 
their tuning parameters for Gluster with this?


"...Also FS mount options are very important for XFS"  - What options do 
you use / recommend?  Do you have a link to said tuning manual page where I 
could review and knowing the base HCI volume is  VDO + XFS + Gluster.   But 
second volume for OCP will be just  XFS + Gluster I would assume this may 
change recommendations.

Thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JHRUZV6JS2F25AN4FEPBFBIDGPKCLGQL/


[ovirt-users] Re: HCI - oVirt for CEPH

2021-04-26 Thread Strahil Nikolov via Users
Tuning Gluster with VDO below it is quite difficult, and the overhead of using 
VDO could reduce performance.
I would try with VDO compression and dedup disabled. If your SSD has 512 byte 
physical & logical sector size, you can skip VDO entirely to check performance.
Also, the FS mount options are very important for XFS.
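For example, something along these lines can be used to verify the sector 
sizes and, if you keep VDO for the test, to create it with compression and 
dedup switched off (the device name and exact option spellings are assumptions 
to check against the vdo version on your hosts):

# Physical and logical sector size of the SSD:
blockdev --getpbsz --getss /dev/sdb

# Recreate the VDO layer with the expensive features disabled while testing
# (legacy 'vdo' manager syntax; see 'vdo create --help'):
vdo create --name=vdo_data00 --device=/dev/sdb \
    --compression=disabled --deduplication=disabled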
Best Regards,
Strahil Nikolov
 
 
  On Mon, Apr 26, 2021 at 20:50, penguin pages wrote:   
The problem was that when I select the oVirt HCI storage volumes to deploy to 
(with VDO enabled) - a single 512GB SSD with only one small IDM VM running - 
the IPI OCP 4.7 deployment fails.  RH closed the ticket because the "gluster 
volume is too slow".

I then tried to create a gluster volume without VDO on the other 1TB SSD in 
each server to see if that worked.. but even though I matched all the settings 
/ gluster options oVirt set, IPI OCP would not show the disk as a deployable 
option.  I then figured I would use the GUI to create the bricks instead of 
ansible (trying to be good and stop doing direct shell, building everything as 
ansible playbooks).. but that test is on hold because the servers are being 
re-tasked for another POC for the next two weeks.  So I am rethinking whether 
oVirt HCI on RHEV 4.5 with Gluster is a rat hole that will never work.

Below is the output from the fio test the OCP team asked me to run to show 
gluster was too slow.
##
ansible@LT-A0070501:/mnt/c/GitHub/penguinpages_cluster_devops/cluster_devops$ 
ssh core@172.16.100.184
[core@localhost ~]$ journalctl -b -f -u release-image.service -u 
bootkube.service
-- Logs begin at Sun 2021-04-11 19:18:07 UTC. --
Apr 13 11:50:23 localhost bootkube.sh[1276476]: [#404] failed to fetch 
discovery: Get "https://localhost:6443/api?timeout=32s": x509: certificate has 
expired or is not yet valid: current time 2021-04-13T11:50:23Z is after 
2021-04-12T19:12:30Z
Apr 13 11:50:23 localhost bootkube.sh[1276476]: [#405] failed to fetch 
discovery: Get "https://localhost:6443/api?timeout=32s": x509: certificate has 
expired or is not yet valid: current time 2021-04-13T11:50:23Z is after 
2021-04-12T19:12:30Z

[core@localhost ~]$ su -
Password: 
su: Authentication failure
[core@localhost ~]$ sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf
Trying to pull quay.io/openshift-scale/etcd-perf...
Getting image source signatures
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 skipped: already exists
Copying blob fcc022b71ae4 done
Copying blob a93d706457b7 done
Copying blob 763b3f36c462 done
Writing manifest to image destination
Storing signatures
 Running fio 
---

{
  "fio version" : "fio-3.7",
  "timestamp" : 1618315279,
  "timestamp_ms" : 1618315279798,
  "time" : "Tue Apr 13 12:01:19 2021",
  "global options" : {
    "rw" : "write",
    "ioengine" : "sync",
    "fdatasync" : "1",
    "directory" : "/var/lib/etcd",
    "size" : "22m",
    "bs" : "2300"
  },
  "jobs" : [
    {
      "jobname" : "etcd_perf",
      "groupid" : 0,
      "error" : 0,
      "eta" : 0,
      "elapsed" : 507,
      "job options" : {
        "name" : "etcd_perf"
      },
      "read" : {
        "io_bytes" : 0,
        "io_kbytes" : 0,
        "bw_bytes" : 0,
        "bw" : 0,
        "iops" : 0.00,
        "runtime" : 0,
        "total_ios" : 0,
        "short_ios" : 10029,
        "drop_ios" : 0,
        "slat_ns" : {
          "min" : 0,
          "max" : 0,
          "mean" : 0.00,
          "stddev" : 0.00
        },
        "clat_ns" : {
          "min" : 0,
          "max" : 0,
          "mean" : 0.00,
          "stddev" : 0.00,
          "percentile" : {
            "1.00" : 0,
            "5.00" : 0,
            "10.00" : 0,
            "20.00" : 0,
            "30.00" : 0,
            "40.00" : 0,
            "50.00" : 0,
            "60.00" : 0,
            "70.00" : 0,
            "80.00" : 0,
            "90.00" : 0,
            "95.00" : 0,
            "99.00" : 0,
            "99.50" : 0,
            "99.90" : 0,
            "99.95" : 0,
            "99.99" : 0
          }
        },
        "lat_ns" : {
          "min" : 0,
          "max" : 0,
          "mean" : 0.00,
          "stddev" : 0.00
        },
        "bw_min" : 0,
        "bw_max" : 0,
        "bw_agg" : 0.00,
        "bw_mean" : 0.00,
        "bw_dev" : 0.00,
        "bw_samples" : 0,
        "iops_min" : 0,
        "iops_max" : 0,
        "iops_mean" : 0.00,
        "iops_stddev" : 0.00,
        "iops_samples" : 0
      },
      "write" : {
        "io_bytes" : 23066700,
        "io_kbytes" : 22526,
        "bw_bytes" : 45589,
        "bw" : 44,
        "iops" : 19.821372,
        "runtime" : 505969,
        

[ovirt-users] Re: HCI - oVirt for CEPH

2021-04-26 Thread penguin pages

The problem was that when I select the oVirt HCI storage volumes to deploy to 
(with VDO enabled) - a single 512GB SSD with only one small IDM VM running - 
the IPI OCP 4.7 deployment fails.  RH closed the ticket because the "gluster 
volume is too slow".

I then tried to create a gluster volume without VDO on the other 1TB SSD in 
each server to see if that worked.. but even though I matched all the settings 
/ gluster options oVirt set, IPI OCP would not show the disk as a deployable 
option.  I then figured I would use the GUI to create the bricks instead of 
ansible (trying to be good and stop doing direct shell, building everything as 
ansible playbooks).. but that test is on hold because the servers are being 
re-tasked for another POC for the next two weeks.  So I am rethinking whether 
oVirt HCI on RHEV 4.5 with Gluster is a rat hole that will never work.
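For reference, a quick way to double-check that a manually created volume 
really matches what oVirt configured is to dump every option on both volumes 
and diff them (the volume names below are placeholders, not the actual names 
from this setup):

# Dump all options of the oVirt-created volume and the manual one, then compare:
gluster volume get data_ovirt all > /tmp/opts_ovirt.txt
gluster volume get data00 all > /tmp/opts_data00.txt
diff /tmp/opts_ovirt.txt /tmp/opts_data00.txt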

Below is the output from the fio test the OCP team asked me to run to show 
gluster was too slow.
##
ansible@LT-A0070501:/mnt/c/GitHub/penguinpages_cluster_devops/cluster_devops$ 
ssh core@172.16.100.184
[core@localhost ~]$ journalctl -b -f -u release-image.service -u 
bootkube.service
-- Logs begin at Sun 2021-04-11 19:18:07 UTC. --
Apr 13 11:50:23 localhost bootkube.sh[1276476]: [#404] failed to fetch 
discovery: Get "https://localhost:6443/api?timeout=32s": x509: certificate has 
expired or is not yet valid: current time 2021-04-13T11:50:23Z is after 
2021-04-12T19:12:30Z
Apr 13 11:50:23 localhost bootkube.sh[1276476]: [#405] failed to fetch 
discovery: Get "https://localhost:6443/api?timeout=32s": x509: certificate has 
expired or is not yet valid: current time 2021-04-13T11:50:23Z is after 
2021-04-12T19:12:30Z

[core@localhost ~]$ su -
Password: 
su: Authentication failure
[core@localhost ~]$ sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf
Trying to pull quay.io/openshift-scale/etcd-perf...
Getting image source signatures
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 skipped: already exists
Copying blob fcc022b71ae4 done
Copying blob a93d706457b7 done
Copying blob 763b3f36c462 done
Writing manifest to image destination
Storing signatures
 Running fio 
---

{
  "fio version" : "fio-3.7",
  "timestamp" : 1618315279,
  "timestamp_ms" : 1618315279798,
  "time" : "Tue Apr 13 12:01:19 2021",
  "global options" : {
"rw" : "write",
"ioengine" : "sync",
"fdatasync" : "1",
"directory" : "/var/lib/etcd",
"size" : "22m",
"bs" : "2300"
  },
  "jobs" : [
{
  "jobname" : "etcd_perf",
  "groupid" : 0,
  "error" : 0,
  "eta" : 0,
  "elapsed" : 507,
  "job options" : {
"name" : "etcd_perf"
  },
  "read" : {
"io_bytes" : 0,
"io_kbytes" : 0,
"bw_bytes" : 0,
"bw" : 0,
"iops" : 0.00,
"runtime" : 0,
"total_ios" : 0,
"short_ios" : 10029,
"drop_ios" : 0,
"slat_ns" : {
  "min" : 0,
  "max" : 0,
  "mean" : 0.00,
  "stddev" : 0.00
},
"clat_ns" : {
  "min" : 0,
  "max" : 0,
  "mean" : 0.00,
  "stddev" : 0.00,
  "percentile" : {
"1.00" : 0,
"5.00" : 0,
"10.00" : 0,
"20.00" : 0,
"30.00" : 0,
"40.00" : 0,
"50.00" : 0,
"60.00" : 0,
"70.00" : 0,
"80.00" : 0,
"90.00" : 0,
"95.00" : 0,
"99.00" : 0,
"99.50" : 0,
"99.90" : 0,
"99.95" : 0,
"99.99" : 0
  }
},
"lat_ns" : {
  "min" : 0,
  "max" : 0,
  "mean" : 0.00,
  "stddev" : 0.00
},
"bw_min" : 0,
"bw_max" : 0,
"bw_agg" : 0.00,
"bw_mean" : 0.00,
"bw_dev" : 0.00,
"bw_samples" : 0,
"iops_min" : 0,
"iops_max" : 0,
"iops_mean" : 0.00,
"iops_stddev" : 0.00,
"iops_samples" : 0
  },
  "write" : {
"io_bytes" : 23066700,
"io_kbytes" : 22526,
"bw_bytes" : 45589,
"bw" : 44,
"iops" : 19.821372,
"runtime" : 505969,
"total_ios" : 10029,
"short_ios" : 0,
"drop_ios" : 0,
"slat_ns" : {
  "min" : 0,
  "max" : 0,
  "mean" : 0.00,
  "stddev" : 0.00
},
"clat_ns" : {
  "min" : 12011,
  "max" : 680340,
  "mean" : 26360.617210,
  "stddev" : 15390.749240,
  "percentile" : {
"1.00" 

[ovirt-users] Re: HCI - oVirt for CEPH

2021-04-26 Thread Strahil Nikolov via Users
I haven't seen your email on the gluster-users mailing list.
What was your problem with the performance?
Best Regards,
Strahil Nikolov
 
 
  On Mon, Apr 26, 2021 at 17:30, penguin pages wrote:   
It was on a support ticket / call I was having.  I googled around and the only 
article I found was the one about features being removed.. but I am not sure 
whether this affects oVirt / HCI.

My ticket was about trying to deploy OCP on an all-SSD cluster of three nodes; 
the disk performance over 10Gb was deemed too slow, and RH support said "We 
don't support use of Gluster for OCP.. you need to move off Gluster to Ceph."

So I opened another ticket about Ceph on HCI .. and was told "not supported.. 
Ceph nodes must be external."  So for my small three-server work office and 
demo stack, I am now rethinking whether I have to go to another stack / vendor 
such as VMware and vSAN, just because I can't get a stack that meets the needs 
of a small HCI stack with Linux.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U2PQVODOVC2TIHNUPS6PUSZ2MF7T5W7V/


[ovirt-users] Re: HCI - oVirt for CEPH

2021-04-26 Thread Gianluca Cecchi
On Mon, Apr 26, 2021 at 4:30 PM penguin pages 
wrote:

>
> It was on a support ticket / call I was having.  I googled around and the
> only article I found was the one about features being removed.. but I am not
> sure whether this affects oVirt / HCI.
>
> My ticket was about trying to deploy OCP on an all-SSD cluster of three
> nodes; the disk performance over 10Gb was deemed too slow, and RH support
> said "We don't support use of Gluster for OCP.. you need to move off Gluster
> to Ceph."
>
> So I opened another ticket about Ceph on HCI .. and was told "not supported..
> Ceph nodes must be external."  So for my small three-server work office and
> demo stack, I am now rethinking whether I have to go to another stack /
> vendor such as VMware and vSAN, just because I can't get a stack that meets
> the needs of a small HCI stack with Linux.

Staying with the enterprise products/solutions supported by Red Hat, there
are two different use cases for Red Hat Hyperconverged Infrastructure; see:
https://access.redhat.com/products/red-hat-hyperconverged-infrastructure

1) Red Hat Hyperconverged Infrastructure for Cloud
which is for OpenStack, and in that case Ceph (RHCS) is the only supported
storage solution.

2) Red Hat Hyperconverged Infrastructure for Virtualization
which is for RHV, and in that case Gluster (RHGS) is the only supported
storage solution.

Then there is the use case of OCP where you want to use persistent storage,
and again the only supported solution is Ceph (RHCS).
See
https://docs.openshift.com/container-platform/4.7/storage/persistent_storage/persistent-storage-ocs.html
https://access.redhat.com/articles/4731161

HIH clarifying,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UT7SDCSFF2TJ27CYL73MMYQN5TDZUHGP/


[ovirt-users] Re: HCI - oVirt for CEPH

2021-04-26 Thread penguin pages

It was on a support ticket / call I was having.  I googled around and the only 
article I found was the one about features being removed.. but I am not sure 
whether this affects oVirt / HCI.

My ticket was about trying to deploy OCP on an all-SSD cluster of three nodes; 
the disk performance over 10Gb was deemed too slow, and RH support said "We 
don't support use of Gluster for OCP.. you need to move off Gluster to Ceph."

So I opened another ticket about Ceph on HCI .. and was told "not supported.. 
Ceph nodes must be external."  So for my small three-server work office and 
demo stack, I am now rethinking whether I have to go to another stack / vendor 
such as VMware and vSAN, just because I can't get a stack that meets the needs 
of a small HCI stack with Linux.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K4QWNUSG6QDJDAOBOPJWEYIBE4O5HTF7/


[ovirt-users] Re: HCI - oVirt for CEPH

2021-04-26 Thread Gianluca Cecchi
On Mon, Apr 26, 2021 at 2:34 PM penguin pages 
wrote:

>
> I have been building out an HCI stack with KVM/RHEV + oVirt using the HCI
> deployment process.  This is very nice for small / remote site use cases,
> but with Gluster being announced as EOL in 18 months, what is the
> replacement plan?
>

Are you referring to this:
https://access.redhat.com/support/policy/updates/rhhiv
?
If so, possibly a new release will come out in the meantime?
Or is it something else?

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LF64UVVZFDLXOTRJ5KS3O3DLQUXPFD5J/


[ovirt-users] Re: HCI - oVirt for CEPH

2021-04-26 Thread Jayme
What do you mean by Gluster being announced as EOL? Where did you find this
information?

On Mon, Apr 26, 2021 at 9:34 AM penguin pages 
wrote:

>
> I have been building out an HCI stack with KVM/RHEV + oVirt using the HCI
> deployment process.  This is very nice for small / remote site use cases,
> but with Gluster being announced as EOL in 18 months, what is the
> replacement plan?
>
> Are there working projects and plans to replace Gluster with Ceph?
> Are there deployment plans to get an HCI stack onto a supported file
> system?
>
> I liked Gluster for the control plane for the oVirt engine and smaller
> utility VMs: each system has a full copy, so I can retrieve / extract a copy
> of a VM without having all bricks back... it was just "easy" to use.
> Ceph just means more complexity.. and though it scales better and has
> better features, repair means having a critical mass of nodes up before you
> can extract data (versus Gluster, where any disk can be pulled out of a
> node, plugged into my laptop, and I can at least extract the data).
>
> I guess I am not trying to debate shifting to Ceph.. it does not matter..
> that ship sailed...  What I am asking is when / what are the plans for a
> replacement of Gluster for HCI.  Because right now, for small HCI sites,
> once Gluster is no longer supported and Ceph does not make it in, the only
> option is to go to VMware and vSAN or some other totally different stack.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K2UTOLJQQUPUORVDSXOTBOML52XVNWQY/