Hi all,

Thank you for all your answers.


Let me give you a little more detail about why I am here 😊


I want to learn Ceph, and I really have no prior knowledge of it.

I decided to follow the official documentation on the ceph.com site.
It says to use ceph-deploy, which is not working.
I tried to upgrade it following the Ceph documentation, and that is how I
ended up in this whole story of Ceph repos and Ceph compilation!

I have logged a bug for ceph-deploy, with no working answer so far.

For your info:
  - compilation is still ongoing
  - I am trying QEMU emulation to check whether it is faster (rough sketch
    below)
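
This is roughly what I am doing, on an x86_64 Fedora machine (a sketch from
memory, so the package and path names may be slightly off):

  # binfmt handlers so aarch64 binaries run transparently via QEMU
  sudo dnf install qemu-user-static
  # bootstrap a minimal aarch64 Fedora root to build ceph in
  sudo dnf --forcearch=aarch64 --installroot=/srv/f29-aarch64 \
       --releasever=29 install dnf
  # chroot-like shell; every binary inside runs emulated
  sudo systemd-nspawn -D /srv/f29-aarch64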

I'll try your proposals tomorrow.

Thanks


On Sat, 23 Feb 2019 at 09:41, Patrick Charles François Ernzer <
[email protected]> wrote:

> Hello,
>
> > I am trying to build a Ceph cluster based on 4 Raspberry Pi 3B+.
>
> As long as you have very realistic expectations as to the performance and
> reliability you will get out of 4 severely underpowered (for Ceph) nodes,
> why not.
>
> I do nearly the same, simply to learn Ceph, with 5 ODROID-HC2s (more bang
> than the 3B+, but 32-bit). I would not dream of expecting even wire speed
> out of SBCs with 2 GiB RAM, a single Gigabit network connection attached
> via USB, and SATA also via USB.
>
> > Following the official Ceph documentation is a dead end as packages are
> only available for
> > Red Hat/CentOS and not Fedora!
>
> As was pointed out, your Raspis should find the packages in the repos.
> Please provide the output of the yum commands that Troy Dawson mailed.
> Maybe your repository setup has an issue.
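>
> For a quick check (dnf rather than yum on Fedora, same idea), something
> like this should show where things stand:
>
>   dnf repolist          # are the fedora and updates repos enabled?
>   dnf search ceph       # the ceph packages should show up from them
>   dnf info ceph         # which version/repo an install would pull from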
>
> You can view all builds of ceph for Fedora at
> https://koji.fedoraproject.org/koji/search?match=glob&type=package&terms=ceph
> It's built for
> - aarch64
> - ppc64le
> - s390x
> - x86_64
>
> > I tried to use the el7 repository, but there are conflicts and missing
> > packages with respect to the Fedora repos.
>
> Yeah, I would not attempt to mix repos that way.
>
> > Did I miss something ?
>
> If "dnf search ceph" shows you results, then you might be trying to
> install Ceph wrongly. Are you using ceph-ansible? I definitely recommend
> you do.
>   http://docs.ceph.com/ceph-ansible/stable-3.2/
> or, if not using Luminous, but master
>   http://docs.ceph.com/ceph-ansible/master/
> although I recommend you start with a stable version if this is your first
> foray into Ceph.
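>
> If it helps, the rough shape of a ceph-ansible run is something like the
> following (host names are placeholders, and you still need to set
> variables such as monitor_interface and public_network in group_vars, as
> the docs above describe):
>
>   git clone -b stable-3.2 https://github.com/ceph/ceph-ansible.git
>   cd ceph-ansible
>   cp site.yml.sample site.yml
>   cp group_vars/all.yml.sample group_vars/all.yml
>   # minimal placeholder inventory, one Raspi per line
>   printf '[mons]\npi1\n\n[osds]\npi2\npi3\npi4\n' > hosts
>   ansible-playbook -i hosts site.yml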
>
> > I am now trying to compile from source. It is still ongoing (16% after
> 24h!)
>
> Yeah, that will take a while. I'd be too impatient for that ;-)
>
> On Ceph itself, I am happily playing with Ceph Luminous using BlueStore.
> If you want Luminous too, be sure to use the stable-3.2 branch of
> ceph-ansible, as documented.
>
> As my SBCs are definitely at the lowest end of
> http://docs.ceph.com/docs/master/start/hardware-recommendations/ I
> adjusted osd_memory_target; see
> http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/
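>
> Concretely that is just a ceph.conf override along these lines (the value
> is an example, not a recommendation; pick whatever leaves your boards
> enough headroom):
>
>   # in /etc/ceph/ceph.conf on the OSD nodes
>   # (or via ceph_conf_overrides if you deploy with ceph-ansible)
>   [osd]
>   osd memory target = 1073741824    # 1 GiB instead of the 4 GiB default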
>
> I expect to have to do much more tuning in the days and weeks to come.
>
> As always with a cluster, you may want to consider:
> - using monitoring to notice when one of many nodes is down
> - using a watchdog to bounce nodes that are unresponsive (a minimal sketch
>   follows after this list)
> - wiring up serial consoles and logging to a log server
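>
> On the watchdog point, the simplest wiring I know of on Fedora is letting
> systemd pet the hardware watchdog; a minimal sketch, assuming the Raspi's
> bcm2835_wdt module exposes /dev/watchdog:
>
>   # make PID 1 feed /dev/watchdog; if userspace hangs, the SoC reboots
>   sudo sed -i 's/^#\?RuntimeWatchdogSec=.*/RuntimeWatchdogSec=30s/' \
>        /etc/systemd/system.conf
>   sudo systemctl daemon-reexec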
>
> pcfe
_______________________________________________
arm mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/[email protected]
