Hello guys!
We noticed an unexpected situation. In a recently deployed Ceph cluster we
are seeing raw usage that is a bit odd. It is a new cluster with 5 nodes,
each with the following setup:
- 128 GB of RAM
- 2 Intel(R) Xeon Silver 4210R CPUs
- 1 NVMe of 2 TB for the RocksDB (block.db) partitions
- 5 HDDs of 14 TB each
- 1 dual-port 25 GbE NIC in bond mode.
Right after deploying the Ceph cluster, we see a raw usage of about 9 TiB,
even though no load has been applied to the cluster yet. Have you guys seen
such a situation, or can you help us understand it?
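Our rough guess, and this is only an assumption on our part: the 2 TB NVMe
on each node is split into DB partitions for the 5 HDD OSDs, so each OSD
gets roughly 400 GB of DB space, and BlueStore seems to report that DB
space as used; 25 OSDs x ~400 GB is about 10 TB, i.e. ~9.1 TiB, which is
close to what we see. To break the number down, these are the commands we
have been looking at (osd.0 is just an example ID; the perf dump has to
run on the host of that OSD):
```
# Per-pool and per-device-class usage summary
ceph df detail

# Per-OSD breakdown: raw use vs. data, omap and metadata
ceph osd df tree

# BlueFS/DB allocation counters for a single OSD (run on its host)
ceph daemon osd.0 perf dump bluefs
```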
We are using Ceph Octopus, and we have set the following configuration overrides:
```
ceph_conf_overrides:
  global:
    osd pool default size: 3
    osd pool default min size: 1
    osd pool default pg autoscale mode: "warn"
    perf: true
    rocksdb perf: true
  mon:
    mon osd down out interval: 120
  osd:
    bluestore min alloc size hdd: 65536
```
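In case it matters, this is how one can check which values the running
daemons actually picked up (osd.0 is again just an example ID). Note that,
as far as we understand, bluestore min alloc size hdd is only applied when
an OSD is created, so changing it later has no effect on already-deployed
OSDs:
```
# Ask a running OSD for its effective settings (run on the OSD's host)
ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
ceph daemon osd.0 config get osd_pool_default_size
```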
Any tips or help explaining this situation are welcome!
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]