Il giorno lun 18 gen 2021 alle ore 20:04 Strahil Nikolov <
[email protected]> ha scritto:

> Most probably it will be easier if you stick with full-blown distro.
>
> @Sandro Bonazzola can help with CEPH status.
>

Letting the storage team have a voice here :-)
+Tal Nisan <[email protected]> , +Eyal Shenitzky <[email protected]> , +Nir
Soffer <[email protected]>


>
> Best Regards,
> Strahil Nikolov
>
> On Monday, 18 January 2021 at 11:44:32 GMT+2, Shantur Rathore <
> [email protected]> wrote:
>
> Thanks Strahil for your reply.
>
> Sorry just to confirm,
>
> 1. Are you saying Ceph on oVirt Node NG isn't possible?
> 2. Would you know which devs would be best to ask about the recent Ceph
> changes?
>
> Thanks,
> Shantur
>
> On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users <[email protected]>
> wrote:
> > On Sunday, 17.01.2021 at 15:51 +0000, Shantur Rathore wrote:
> >> Hi Strahil,
> >>
> >> Thanks for your reply, I have 16 nodes for now but more on the way.
> >>
> >> Ceph appeals to me over Gluster for the following reasons.
> >>
> >> 1. I have more experience with Ceph than Gluster.
> > That is a good reason to pick Ceph.
> >> 2. I heard in Managed Block Storage presentation that it leverages
> storage software to offload storage related tasks.
> >> 3. Adding Gluster storage is limited to 3 hosts at a time.
> > Only if you want the nodes to be both storage and compute. You can add as
> many compute-only nodes as you wish (they won't be part of Gluster) and
> later add them to the Gluster TSP (which requires 3 nodes at a time).
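For reference, expanding a trusted storage pool by three nodes at a time typically looks like the sketch below; the hostnames, volume name, and brick paths are placeholders, not from this thread, and the commands obviously require a live Gluster cluster.

```shell
# Probe the three new nodes into the trusted storage pool
# (hostnames are hypothetical -- adjust to your environment)
gluster peer probe gluster4.example.com
gluster peer probe gluster5.example.com
gluster peer probe gluster6.example.com

# Extend an existing replica-3 volume with one brick per new node,
# keeping the replica count at 3 (one full replica set per expansion)
gluster volume add-brick myvol replica 3 \
    gluster4.example.com:/gluster/brick1/myvol \
    gluster5.example.com:/gluster/brick1/myvol \
    gluster6.example.com:/gluster/brick1/myvol

# Rebalance so existing data spreads onto the new bricks
gluster volume rebalance myvol start
```

This is why expansion happens in threes: a replica-3 volume can only grow by whole replica sets.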
> >> 4. I read that there is a limit of a maximum of 12 hosts in a Gluster
> setup, with no such limitation if I go via Ceph.
> > Actually, that limit is about Red Hat support for RHHI, not about Gluster +
> oVirt. As both oVirt and Gluster are upstream projects, support is
> best-effort from the community.
> >> In my initial testing I was able to enable CentOS repositories in Node
> NG, but if I remember correctly, some librbd versions present in Node NG
> clashed with the version I was trying to install.
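As an illustration, that kind of clash can be spotted by comparing what the Node NG image already ships with what the extra repos would pull in. The repo id below is an example only (check `dnf repolist` for the real ones on your system), and the commands need an RPM-based host.

```shell
# What librbd does the node image already carry?
rpm -q librbd1

# What would the extra repo install instead?
# (repo id "centos-ceph-nautilus" is an assumption -- verify with `dnf repolist`)
dnf --disablerepo='*' --enablerepo=centos-ceph-nautilus \
    list --available librbd1

# Show which package owns a file the Ceph packages would also want to place
rpm -qf /usr/lib64/librbd.so.1
```

If the two versions differ, installing the repo's Ceph packages on the layered Node NG image is likely to conflict, which matches the behaviour described above.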
> >> Does a Ceph hyperconverged setup still make sense?
> > Yes, it does. You have the knowledge to run the Ceph part, but consider
> talking with some of the devs on the list, as there were some recent
> changes in oVirt's support for Ceph.
> >
> >> Regards
> >> Shantur
> >>
> >> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users <
> [email protected]> wrote:
> >>> Hi Shantur,
> >>>
> >>> the main question is how many nodes you have.
> >>> Ceph integration is still in development/experimental, so it would be
> wise to also consider Gluster. It has great integration and is quite easy
> to work with.
> >>>
> >>>
> >>> There are users reporting running Ceph with their oVirt, but I can't
> tell how well it works.
> >>> I doubt that oVirt nodes come with Ceph components, so you will most
> probably need to use a full-blown distro. In general, installing extra
> software on oVirt nodes is quite hard.
> >>>
> >>> With such a setup, you will need many more nodes than with a Gluster
> setup, due to Ceph's requirements.
> >>>
> >>> Best Regards,
> >>> Strahil Nikolov
> >>>
> >>>
> >>> On Sunday, 17 January 2021 at 10:37:57 GMT+2, Shantur Rathore <
> [email protected]> wrote:
> >>>
> >>> Hi all,
> >>>
> >>> I am planning my new oVirt cluster on Apple hosts. These hosts can
> only have one disk, which I plan to partition and use for a hyperconverged
> setup. As this is my first oVirt cluster, I need help understanding a few
> bits.
> >>>
> >>> 1. Is a hyperconverged setup possible with Ceph using cinderlib?
> >>> 2. Can this hyperconverged setup run on oVirt Node Next hosts, or only
> on CentOS?
> >>> 3. Can I install cinderlib on oVirt Node Next hosts?
> >>> 4. Are there any pitfalls in such a setup?
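For context on question 1: with Managed Block Storage, a Ceph-backed storage domain is created in the Admin Portal by supplying cinder RBD driver options, roughly like the fragment below. The pool name, user, and file paths are examples, not values from this thread; check them against your own Ceph deployment.

```
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = ovirt-volumes
rbd_user = ovirt
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_keyring_conf = /etc/ceph/ceph.client.ovirt.keyring
```

cinderlib then drives the cinder RBD driver directly, without a full OpenStack Cinder service, which is what "leveraging storage software to offload storage-related tasks" refers to.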
> >>>
> >>>
> >>> Thanks for your help
> >>>
> >>> Regards,
> >>> Shantur
> >>>
> >>> _______________________________________________
> >>> Users mailing list -- [email protected]
> >>> To unsubscribe send an email to [email protected]
> >>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >>> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >>> List Archives:
> https://lists.ovirt.org/archives/list/[email protected]/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/
> >>>
> >>
> >
>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA <https://www.redhat.com/>

[email protected]

*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
<https://mojo.redhat.com/docs/DOC-1199578>*
