Den fre 28 aug. 2020 kl 11:47 skrev Khodayar Doustar <[email protected]>:

> I've actually destroyed the cluster and installed a new one.
> I just changed the installation method and version: I used
> ceph-ansible this time and installed Nautilus.
> The cluster worked fine with the same hardware.
> Yes Janne, you are right that it had very small disks (9 x 20 GB disks, 3
> per node), but there was no problem with Nautilus.
>
>
I just don't think anyone finds it useful to spend time figuring out the
lowest possible limit for each release and each type of OSD storage.

So 10 GB worked at one point and 20 GB at another, but if you are serious
about Ceph, make the disks thin-provisioned and larger by a huge margin
(say 100 GB) and let them grow to the size your tests need. Whether the
real limit is 22 or 23.7 GB won't matter then, and your tests will run
through.
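A minimal sketch of the thin-provisioning idea for file-backed test OSDs: a sparse file advertises its full size but consumes real disk space only as data is written. (The filename `osd-test.img` is just an illustrative choice; for VM-backed labs the same effect comes from thin-provisioned virtual disks, e.g. qcow2 images.)

```shell
# Create a sparse 100G backing file for a test OSD.
# 'truncate' sets the apparent size without allocating blocks,
# so the file reports 100G but uses almost no actual disk space
# until the OSD writes to it.
truncate -s 100G osd-test.img

ls -lh osd-test.img   # apparent size: 100G
du -h osd-test.img    # actual on-disk usage: close to zero
```

The OSD then grows into whatever space it really needs, so you never hit an artificial per-release minimum during testing.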

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]