Hi,

we have OSD nodes that currently consist of two 605GB SSDs and six 18TB
HDDs. Each host has room for twelve HDDs.

We created a drivegroup spec that looks like this:

spec:
  block_db_size: 100GB
  data_devices:
    rotational: true
    size: '18TB:'
  db_devices:
    rotational: false
    size: '550GB:650GB'
  db_slots: 6
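
In case it helps anyone reproducing this: we apply the spec through the cephadm orchestrator, and `--dry-run` previews the planned OSDs before anything is created (`osd_spec.yml` is just our local filename for the spec above):

```shell
# Preview how cephadm would deploy this spec without creating anything
ceph orch apply -i osd_spec.yml --dry-run
```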

Initially this creates six OSDs with their RocksDB+WAL on the SSDs,
three DB volumes per SSD, which is nice for load balancing.

But when we add another HDD, it gets a 17.9TB data volume and a 100GB DB
volume, both placed on the HDD itself:

sdm                                                                                                    8:192   0   18T  0 disk
├─ceph--846e1a59--aff6--4ef8--9b71--de7241531677-osd--block--026e8cef--123d--47d9--9b30--211f94edf96c  252:16  0 17.9T  0 lvm
└─ceph--846e1a59--aff6--4ef8--9b71--de7241531677-osd--db--88c47d0b--f5c6--4cec--8909--c5f8036ca459     252:17  0  100G  0 lvm


I would have assumed that the remaining 305GB on each SSD would be used.
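
For what it's worth, the arithmetic behind that expectation (a quick sketch using the sizes from our setup above):

```python
# DB slot math for our SSDs (sizes taken from the setup described above).
ssd_size_gb = 605    # per SSD
db_size_gb = 100     # block_db_size from the spec
hdds_now = 6         # current HDDs; their DBs are split 3 per SSD

slots_per_ssd = ssd_size_gb // db_size_gb        # 6 slots would fit per SSD
used_per_ssd = (hdds_now // 2) * db_size_gb      # 3 DB volumes x 100GB = 300GB
free_per_ssd = ssd_size_gb - used_per_ssd        # 305GB left on each SSD

print(slots_per_ssd, used_per_ssd, free_per_ssd)  # 6 300 305
```

So each SSD should still have room for three more 100GB DB volumes.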

How do we achieve this?

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Mandatory disclosures per §35a GmbHG:
HRB 220009 B / Amtsgericht Berlin-Charlottenburg,
Managing Director: Peer Heinlein -- Registered office: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io