Do you by any chance have your OSDs placed on a local directory path rather
than on a dedicated, otherwise unused physical disk?
No, I have 18 disks per server. Each OSD is mapped to a physical disk.
Here is the output from one server:
ansible@zrh-srv-m-cph02:~$ df -h
Filesystem Size Used Avail
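A quick way to double-check that mapping, assuming the default mount
points under /var/lib/ceph/osd, is something like:

$ mount | grep /var/lib/ceph/osd
$ lsblk

Each OSD data directory should then show up as a mount of its own
block device.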
Hi!
Do you by any chance have your OSDs placed on a local directory path
rather than on a dedicated, otherwise unused physical disk?
If I remember correctly from a similar setup I worked on in the past,
the ceph df command accounts for the entire disk and not just for the
OSD data directory. I am
2015-03-27 18:27 GMT+01:00 Gregory Farnum g...@gregs42.com:
Ceph has per-PG and per-OSD metadata overhead. You currently have 26000 PGs,
suitable for use on a cluster of the order of 260 OSDs. You have placed
almost 7GB of data into it (21GB replicated) and have about 7GB of
additional
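That PG count lines up with the usual rule of thumb of roughly 100 PGs
per OSD (26000 / 100 ≈ 260). For anyone wanting to check their own
numbers, the per-pool pg_num values (and the cluster total) can be
listed with, for example:

$ ceph osd dump | grep pg_num
$ ceph -s

The pgmap line of ceph -s shows the total number of PGs in the cluster.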
I will now start pushing a lot of data into the cluster to see whether
the metadata grows a lot or stays constant.
Is there a way to clean up old metadata?
I pushed a lot more data into the cluster, then let the cluster sit
idle overnight.
This morning I found these values:
6841 MB data
25814
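For context, these look like the figures from the pgmap line of the
cluster status, e.g.:

$ ceph -s

where "MB data" is the sum of the object data stored in all pools and
"MB used" is the raw space consumed on the OSD filesystems, so it also
counts replication, journals and filesystem overhead.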
Thanks for the answer. Now the meaning of MB data and MB used is
clear, and if all the pools have size=3 I expect a 1 to 3 ratio
between the two values.
I still can't understand why MB used is so big in my setup.
All my pools are size=3, but the ratio of MB data to MB used is 1 to
5 instead of 1 to 3.
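In case it helps with reproducing this, the replication factor of each
pool can be confirmed with something like ('rbd' here is just an
example pool name):

$ ceph osd dump | grep 'replicated size'
$ ceph osd pool get rbd size

Both should report size 3 for every pool if the setup is as described.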
On Thu, Mar 26, 2015 at 2:56 AM, Saverio Proto ziopr...@gmail.com wrote:
Thanks for the answer. Now the meaning of MB data and MB used is
clear, and if all the pools have size=3 I expect a 1 to 3 ratio
between the two values.
I still can't understand why MB used is so big in my setup.
All my
You just need to go look at one of your OSDs and see what data is
stored on it. Did you configure things so that the journals are using
a file on the same storage disk? If so, *that* is why the data used
is large.
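A minimal way to check this, assuming the default FileStore layout
under /var/lib/ceph/osd/ceph-<id> and taking osd.0 as an example:

$ ls -lh /var/lib/ceph/osd/ceph-0/journal
$ du -sh /var/lib/ceph/osd/ceph-0/current

If 'journal' is a regular file (often a few GB per OSD by default)
rather than a symlink to a separate partition, it lives on the data
disk and is counted in the used space, while 'current' holds the
actual object data.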
I followed your suggestion and this is the result of my troubleshooting.
Each