Re: [ceph-users] Bluestore increased disk usage

2019-02-18 Thread Jan Kasprzak
Jakub Jaszewski wrote:
: Hi Yenya,
: 
: I guess Ceph adds the size of all your data.db devices to the cluster's
: total used space.

Jakub,

thanks for the hint. The disk usage increase almost corresponds
to that - I have added about 7.5 TB of data.db devices with the last
batch of OSDs.
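
(If anyone wants to cross-check a figure like that, something along these
lines should sum the DB partitions across the cluster - assuming your
release reports bluefs_db_size in the OSD metadata; the field name may
differ between versions:

  ceph osd metadata --format json | \
    jq '[.[] | .bluefs_db_size // "0" | tonumber] | add / 1e12'   # total in TB

prints the combined size of all data.db devices in TB.)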

Sincerely,

-Yenya

: pt., 8 lut 2019, 10:11: Jan Kasprzak  napisał(a):
: 
: > Hello, ceph users,
: >
: > I moved my cluster to bluestore (Ceph Mimic), and now I see the increased
: > disk usage. From ceph -s:
: >
: > pools:   8 pools, 3328 pgs
: > objects: 1.23 M objects, 4.6 TiB
: > usage:   23 TiB used, 444 TiB / 467 TiB avail
: >
: > I use 3-way replication of my data, so I would expect the disk usage
: > to be around 14 TiB. Which was true when I used filestore-based Luminous
: > OSDs
: > before. Why the disk usage now is 23 TiB?
: >
: > If I remember it correctly (a big if!), the disk usage was about the same
: > when I originally moved the data to empty bluestore OSDs by changing the
: > crush rule, but went up after I have added more bluestore OSDs and the
: > cluster
: > rebalanced itself.
: >
: > Could it be some miscalculation of free space in bluestore? Also, could it
: > be
: > related to the HEALTH_ERR backfill_toofull problem discused here in the
: > other
: > thread?
: >
: > Thanks,
: >
: > -Yenya
: >
: > --
: > | Jan "Yenya" Kasprzak 
: > |
: > | http://www.fi.muni.cz/~kas/ GPG: 4096R/A45477D5
: > |
: >  This is the world we live in: the way to deal with computers is to google
: >  the symptoms, and hope that you don't have to watch a video. --P. Zaitcev
: > ___
: > ceph-users mailing list
: > ceph-users@lists.ceph.com
: > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
: >

-- 
| Jan "Yenya" Kasprzak  |
| http://www.fi.muni.cz/~kas/ GPG: 4096R/A45477D5 |
 This is the world we live in: the way to deal with computers is to google
 the symptoms, and hope that you don't have to watch a video. --P. Zaitcev


Re: [ceph-users] Bluestore increased disk usage

2019-02-10 Thread Jakub Jaszewski
Hi Yenya,

I guess Ceph adds the size of all your data.db devices to the cluster's
total used space.
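
If you want to confirm that per OSD, something like the following should
show the DB device size and how much of it bluefs actually uses (run the
second command on the node that hosts the OSD; exact counter names can
vary a bit between releases):

  ceph osd df
  ceph daemon osd.0 perf dump bluefs | grep -E 'db_(total|used)_bytes'
                                       # osd.0 is just an example id

db_total_bytes there should roughly match the size of the data.db partition.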

Regards,
Jakub


[ceph-users] Bluestore increased disk usage

2019-02-08 Thread Jan Kasprzak
Hello, ceph users,

I moved my cluster to bluestore (Ceph Mimic), and now I see increased
disk usage. From ceph -s:

pools:   8 pools, 3328 pgs
objects: 1.23 M objects, 4.6 TiB
usage:   23 TiB used, 444 TiB / 467 TiB avail

I use 3-way replication for my data, so I would expect the disk usage
to be around 14 TiB, which was the case when I used filestore-based Luminous
OSDs before. Why is the disk usage now 23 TiB?
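
(For reference, the arithmetic: 4.6 TiB of data x 3 replicas = ~13.8 TiB
expected, versus 23 TiB reported as used, i.e. roughly 9 TiB that the data
itself does not explain.)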

If I remember correctly (a big if!), the disk usage was about the same
when I originally moved the data to empty bluestore OSDs by changing the
CRUSH rule, but it went up after I added more bluestore OSDs and the cluster
rebalanced itself.

Could it be some miscalculation of free space in bluestore? Also, could it be
related to the HEALTH_ERR backfill_toofull problem discussed here in the other
thread?

Thanks,

-Yenya

-- 
| Jan "Yenya" Kasprzak  |
| http://www.fi.muni.cz/~kas/ GPG: 4096R/A45477D5 |
 This is the world we live in: the way to deal with computers is to google
 the symptoms, and hope that you don't have to watch a video. --P. Zaitcev
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com