Le 24/01/2024 à 10:33:45+0100, Robert Sander wrote:
Hi,
>
> On 1/24/24 10:08, Albert Shih wrote:
>
> > 99.99% because I'm a newbie with Ceph and don't clearly understand how
> > authorization works with CephFS ;-)
>
> I strongly recommend you ask an experienced Ceph consultant to help you
> design and set up your storage cluster.
> It looks like you try
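Since the underlying question is how authorization works with CephFS, a minimal sketch may help. This is not from the thread; the client name, filesystem name, and paths below are hypothetical:

```shell
# Grant a (hypothetical) client "project1" read/write access to one
# directory subtree of a single CephFS, instead of creating a separate
# filesystem per tenant:
ceph fs authorize cephfs client.project1 /project1 rw

# The generated key lets that client mount only this subtree:
mount -t ceph mon1:6789:/project1 /mnt/project1 \
    -o name=project1,secretfile=/etc/ceph/project1.secret
```

With path-restricted caps like this, one filesystem can serve many tenants, which is usually why a consultant would question the need for 20-30 separate CephFS instances.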
Le 24/01/2024 à 10:23:20+0100, David C. wrote:
> Hi Albert,
>
> In this scenario, it is more consistent to work with subvolumes.

Ok. I will do that.

> Regarding security, you can use namespaces to isolate access at the OSD
> level.

Hum... I currently have no idea what you just said, but that's

> What Robert emphasizes is that creating pools dynamically is not without
> effect on the number of PGs and (therefore) on the architecture (PG
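A sketch of what David's subvolume-plus-namespace suggestion could look like in practice. The volume, group, client, and pool names here are hypothetical, not taken from the thread:

```shell
# Create an isolated subvolume inside the single filesystem "cephfs"
# (subvolume and group names are made up for illustration):
ceph fs subvolume create cephfs project1 --group_name projects

# Retrieve the subvolume's mount path:
ceph fs subvolume getpath cephfs project1 --group_name projects

# Isolate the client's objects at the OSD level with a RADOS namespace,
# instead of creating a dedicated pool (and extra PGs) per tenant:
ceph auth caps client.project1 \
    mon 'allow r' \
    mds 'allow rw path=/volumes/projects/project1' \
    osd 'allow rw pool=cephfs_data namespace=project1'
```

The point of the namespace cap is exactly what David describes: tenants share one data pool (so the PG count stays fixed), while each client can only read and write objects tagged with its own namespace.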
Le 24/01/2024 à 09:45:56+0100, Robert Sander wrote:
Hi,
>
> On 1/24/24 09:40, Albert Shih wrote:
>
> > Knowing I got two classes of OSD (hdd and ssd), and I have a need of
> > ~ 20/30 CephFS (currently, and that number will increase with time).
>
> Why do you need 20 - 30 separate CephFS instances?

and put all my cephfs inside two of them. Or should I create
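For the two-device-class part of the question, one common pattern is a single CephFS with one data pool per device class, steered by CRUSH rules. This is a sketch under assumed names (rule, pool, and directory names are hypothetical):

```shell
# CRUSH rules restricting placement to each device class:
ceph osd crush rule create-replicated on-hdd default host hdd
ceph osd crush rule create-replicated on-ssd default host ssd

# An SSD-backed data pool bound to its rule, attached to the one filesystem:
ceph osd pool create cephfs_data_ssd 64 64 replicated on-ssd
ceph fs add_data_pool cephfs cephfs_data_ssd

# Direct a directory tree onto the SSD pool via a file layout attribute:
setfattr -n ceph.dir.layout.pool -v cephfs_data_ssd /mnt/cephfs/fastdir
```

This way the hdd/ssd split lives in pools and layouts rather than in separate filesystems, which keeps the number of pools (and PGs) small and predictable.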