Hi

We have 20 Ceph nodes, each with 12 x 18 TB HDDs and 2 x 1 TB NVMe drives.

I tried this method to create the OSDs:

ceph orch apply -i osd_spec.yaml

with this config:

osd_spec.yaml
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  paths:
    - /dev/nvme0n1
    - /dev/nvme1n1

This created, on each node, 6 OSDs with WAL/DB on /dev/nvme0n1 and 6 with WAL/DB on /dev/nvme1n1.

But when I run lvs, I only see 6 x 80 GB LVs on each NVMe...

I think this is the dynamic sizing at work, but I'm not sure, and I don't know how to check it...
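
Maybe someone can confirm, but I guess something like this would show the real DB sizes (osd.0 is just an example id here, and the daemon command has to run where that OSD's admin socket is reachable, i.e. on its host):

# from "cephadm shell" on a node: list OSDs with their data/db LVs and sizes
ceph-volume lvm list

# actual BlueFS DB size and usage for one OSD (db_total_bytes / db_used_bytes)
ceph daemon osd.0 perf dump bluefs

# I think the provisioned DB size also appears in the OSD metadata
ceph osd metadata 0 | grep bluefs_db

# a DB overflowing from NVMe onto the HDD should show up as a BLUEFS_SPILLOVER warning
ceph health detail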

Our cluster will only host a couple of kinds of files, a small kind and a big kind (~2 GB), for CephFS use only, and with only 8 users accessing the data.

I don't know if this is optimal; we are still in the testing process...
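
If the ~80 GB DBs turn out to be too small for us, my understanding is that the OSD spec also accepts a block_db_size field (and a db_slots one) to control the DB size explicitly. Something like this, untested on my side, and I'm not sure whether it wants a plain byte count or a '150G'-style string:

service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  paths:
    - /dev/nvme0n1
    - /dev/nvme1n1
block_db_size: 161061273600  # ~150 GiB per DB, just an example value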

----- Original Message -----
> From: "Stefan Kooman" <ste...@bit.nl>
> To: "Jake Grimmett" <j...@mrc-lmb.cam.ac.uk>, "Christian Wuerdig"
> <christian.wuer...@gmail.com>, "Satish Patel"
> <satish....@gmail.com>
> Cc: "ceph-users" <ceph-users@ceph.io>
> Sent: Monday, June 20, 2022 16:59:58
> Subject: [ceph-users] Re: Suggestion to build ceph storage

> On 6/20/22 16:47, Jake Grimmett wrote:
>> Hi Stefan
>> 
>> We use cephfs for our 7200CPU/224GPU HPC cluster, for our use-case
>> (large-ish image files) it works well.
>> 
>> We have 36 ceph nodes, each with 12 x 12TB HDD, 2 x 1.92TB NVMe, plus a
>> 240GB System disk. Four dedicated nodes have NVMe for metadata pool, and
>> provide mon,mgr and MDS service.
>> 
>> I'm not sure you need 4% of OSD for wal/db, search this mailing list
>> archive for a definitive answer, but my personal notes are as follows:
>> 
>> "If you expect lots of small files: go for a DB that's > ~300 GB
>> For mostly large files you are probably fine with a 60 GB DB.
>> 266 GB is the same as 60 GB, due to the way the cache multiplies at each
>> level, spills over during compaction."
> 
> There is (experimental ...) support for dynamic sizing in Pacific [1].
> Not sure if it's stable yet in Quincy.
> 
> Gr. Stefan
> 
> [1]:
> https://docs.ceph.com/en/quincy/rados/configuration/bluestore-config-ref/#sizing

-- 
Christophe BAILLON
Mobile :: +336 16 400 522
Work :: https://eyona.com
Twitter :: https://twitter.com/ctof
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
