I’ve come close more than once to removing that misleading 4% guidance.
The OP plans to use a single M.2 NVMe device - I’m a bit suspicious that the M.2
connector may only be SATA, and 12 OSDs sharing one SATA device for WAL+DB,
plus potential CephFS metadata and RGW index pools, seems like a bottleneck.
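For what it's worth, the old ~4%-of-OSD-capacity rule of thumb works out to roughly 480 GB of DB space per 12 TB OSD, i.e. about 5.8 TB across 12 OSDs - far more than a single M.2 device - which is part of why that guidance misleads. A quick way to check what the M.2 slot really is (device names below are placeholders):

  lsblk -d -o NAME,TRAN,MODEL,SIZE   # the TRAN column shows "sata" vs "nvme"
  smartctl -i /dev/sdX               # SATA drives typically report a "SATA Version is:" line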

How many MDS do you have for your cluster size? Do you have dedicated
or shared MDS with OSDs?
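If it helps: on a running cluster the MDS layout is easy to check, e.g.

  ceph fs status            # MDS ranks, states and daemon names per filesystem
  ceph orch ps | grep mds   # on cephadm deployments, which nodes carry MDS daemons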
> I don't know if this is optimal; we are in the testing process...
>
> ----- Original Message -----
> > From: "Stefan Kooman"
> > To: "Jake Grimmett", "Christian Wuerdig", "Satish Patel"
> > Cc: "ceph-users"
> > Sent: Monday, 20 June 2022 16:59:58
> > Subject: [ceph-users] Re: Suggestion to build ceph storage

Thanks Jake,
On Mon, Jun 20, 2022 at 10:47 AM Jake Grimmett wrote:
> Hi Stefan
>
> We use cephfs for our 7200CPU/224GPU HPC cluster, for our use-case
> (large-ish image files) it works well.
>
> We have 36 ceph nodes, each with 12 x 12TB HDD, 2 x 1.92TB NVMe, plus a
> 240GB System disk. Four dedicated nodes have NVMe for metadata pool, and
> provide mon, mgr and MDS.
; , "Satish Patel"
>
> Cc: "ceph-users"
> Envoyé: Lundi 20 Juin 2022 16:59:58
> Objet: [ceph-users] Re: Suggestion to build ceph storage
> On 6/20/22 16:47, Jake Grimmett wrote:
>> Hi Stefan
>>
>> We use cephfs for our 7200CPU/224GPU HPC cluster, f

Hi Stefan
We use cephfs for our 7200CPU/224GPU HPC cluster, for our use-case
(large-ish image files) it works well.
We have 36 ceph nodes, each with 12 x 12TB HDD, 2 x 1.92TB NVMe, plus a
240GB System disk. Four dedicated nodes have NVMe for metadata pool, and
provide mon, mgr and MDS.
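For anyone wanting to reproduce that kind of layout, it can be sketched roughly as below - the pool, filesystem and host names are placeholders, not the actual ones from this cluster:

  # keep the CephFS metadata pool on NVMe-class OSDs only
  ceph osd crush rule create-replicated nvme-only default host nvme
  ceph osd pool set cephfs_metadata crush_rule nvme-only

  # run the MDS daemons on the dedicated metadata nodes (cephadm)
  ceph orch apply mds cephfs --placement="2 meta-node-1 meta-node-2"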
On Sun, 19 Jun 2022 at 02:29, Satish Patel wrote:
> Greetings folks,
>
> We are planning to build Ceph storage, mostly for CephFS for HPC workloads,
> and in the future we plan to expand to S3-style object storage, but that is yet to be
> decided. Because we need mass storage, we bought the following HW.
>
> 15