To add more information, in case it helps:
```
# ceph -s
  cluster:
    id:     <Pool_UUID>
    health: HEALTH_OK
  ....

  task status:

  data:
    pools:   6 pools, 161 pgs
    objects: 223 objects, 7.0 KiB
    usage:   9.3 TiB used, 364 TiB / 373 TiB avail
    pgs:     161 active+clean
# ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    373 TiB  364 TiB  9.3 TiB   9.3 TiB       2.50
TOTAL  373 TiB  364 TiB  9.3 TiB   9.3 TiB       2.50

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1    1      0 B        0      0 B      0    115 TiB
.rgw.root               2   32  3.6 KiB        8  1.5 MiB      0    115 TiB
default.rgw.log         3   32  3.4 KiB      207    6 MiB      0    115 TiB
default.rgw.control     4   32      0 B        8      0 B      0    115 TiB
default.rgw.meta        5   32      0 B        0      0 B      0    115 TiB
rbd                     6   32      0 B        0      0 B      0    115 TiB
```
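If it helps to narrow down where that 9.3 TiB of raw usage actually sits, something like the following gives a per-OSD breakdown (osd.0 in the second command is just a placeholder id; run it on the host where that OSD lives):
```
# Per-OSD breakdown: SIZE, RAW USE, DATA, OMAP and META columns
ceph osd df tree

# BlueStore counters of a single OSD (osd.0 is only an example id):
# bluestore_allocated is bytes allocated on disk, bluestore_stored is
# bytes of data actually stored
ceph daemon osd.0 perf dump | grep -E '"bluestore_(allocated|stored)"'
```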
On Mon, Apr 3, 2023 at 10:25 PM Work Ceph <[email protected]>
wrote:
> Hello guys!
>
>
> We noticed an unexpected situation. In a recently deployed Ceph cluster we
> are seeing raw usage that is a bit odd.
>
>
> We have a new cluster with 5 nodes, each with the following setup:
>
> - 128 GB of RAM
> - 2 x Intel(R) Xeon(R) Silver 4210R CPUs
> - 1 x 2 TB NVMe for RocksDB caching
> - 5 x 14 TB HDDs
> - 1 dual-port 25 Gb NIC in bond mode
>
>
> Right after deploying the Ceph cluster, we see a raw usage of about 9 TiB.
> However, no load has been applied to the cluster yet. Have you guys seen such
> a situation, or can you help us understand it?
>
>
> We are using Ceph Octopus, and we have set the following configurations:
>
> ```
> ceph_conf_overrides:
>   global:
>     osd pool default size: 3
>     osd pool default min size: 1
>     osd pool default pg autoscale mode: "warn"
>     perf: true
>     rocksdb perf: true
>   mon:
>     mon osd down out interval: 120
>   osd:
>     bluestore min alloc size hdd: 65536
> ```
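(For what it's worth, a quick way to double-check that last override on a running OSD is something like the command below; osd.0 is only a placeholder id. Keep in mind that min_alloc_size is applied when an OSD is created, so the override only takes effect for OSDs deployed after it was in place.)
```
# Show what a running OSD actually loaded for the alloc-size options
# (osd.0 is just an example id; run on the host where that OSD runs)
ceph daemon osd.0 config show | grep bluestore_min_alloc_size
```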
>
>
> Any tip or help on how to explain this situation is welcome!
>
>
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]