Correct, sorry, I had only read the first question and answered too quickly.

As far as I know, the available space is "shared" between pools that use the same device 
class (it comes from the combination of the OSD drives and the CRUSH map), but you can 
define a quota for each pool if needed:
ceph osd pool set-quota <poolname> max_objects|max_bytes <val>    set object or byte limit on pool
ceph osd pool get-quota <poolname>                                obtain object or byte limits for pool
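
For example, to cap a pool at 1 TiB and then check the result (hypothetical pool name 
"your_pool"; the value is 1 TiB expressed in bytes):
# ceph osd pool set-quota your_pool max_bytes 1099511627776
# ceph osd pool get-quota your_pool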

You can use "ceph df detail" to see your pools' usage, including quotas. As the 
space is "shared", you can't determine a maximum size for a single pool (unless 
you have only one pool).

# ceph df detail
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED     OBJECTS
    144T      134T       10254G          6.93       1789k
POOLS:
    NAME                    ID     QUOTA OBJECTS     QUOTA BYTES     USED       %USED     MAX AVAIL     OBJECTS     DIRTY     READ      WRITE      RAW USED
    pool1                   9      N/A               N/A              7131G     14.70        41369G     1826183     1783k     3847k     14959k       21394G
    pool2                   10     N/A               N/A             24735M      0.06        41369G        6236      6236     1559k       226k       74205M
    pool3                   11     N/A               N/A             30188k         0        41369G          29        29     1259k      4862k       90564k
    pool4                   12     N/A               N/A                  0         0        41369G           0         0         0          0            0
    pool5                   13     N/A               N/A                  0         0        41369G           0         0         0          0            0
    pool6                   14     N/A               N/A                  0         0        41369G           0         0         0          0            0
    pool7                   15     N/A               N/A                  0         0        41369G           0         0         0          0            0
    pool8                   16     N/A               N/A                  0         0        41369G           0         0         0          0            0
    pool9                   17     N/A               N/A                  0         0        41369G           0         0         0          0            0
    pool10                  18     N/A               N/A                  0         0        41369G           0         0         0          0            0
    .rgw.root               19     N/A               N/A               2134         0        41369G           6         6       231          6         6402
    default.rgw.control     20     N/A               N/A                  0         0        41369G           8         8         0          0            0
    default.rgw.meta        21     N/A               N/A                363         0        41369G           2         2        12          3         1089
    default.rgw.log         22     N/A               N/A                  0         0        41369G         207       207     8949k      5962k            0
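
Pools that share the same device class / CRUSH rule also share the same MAX AVAIL, as 
you can see above. If you are unsure which rule a pool uses, something like this should 
show it on recent releases (hypothetical pool name "your_pool"):
# ceph osd pool get your_pool crush_rule
# ceph osd crush rule ls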

You should look at the used and provisioned sizes of images rather than pools:
# rbd disk-usage your_pool/your_image
NAME         PROVISIONED    USED
image-1      51200M         102400k

You can see the total provisioned and used sizes for a whole pool using:
# rbd disk-usage -p your_pool --format json | jq .
{
  "images": [
    {
      "name": "image-1",
      "provisioned_size": 53687091200,
      "used_size": 104857600
    }
  ],
  "total_provisioned_size": 53687091200,
  "total_used_size": 104857600
}

A reminder: most ceph commands can produce JSON output (--format=json or -f json), 
which is useful with the jq tool.
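
Sticking with the rbd output above, you can pull out just the total used size; and, 
assuming the field names of your release (here .pools[].stats.bytes_used, as seen on 
Luminous; check your own JSON first), list the used bytes per pool:
# rbd disk-usage -p your_pool --format json | jq .total_used_size
104857600
# ceph df detail -f json | jq '.pools[] | {name: .name, bytes_used: .stats.bytes_used}'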

> On 20 Jul 2018, at 12:26, si...@turka.nl wrote:
> 
> Hi Sebastien,
> 
> Your command(s) return the replication size and not the size in terms of
> bytes.
> 
> I want to see the size of a pool in terms of bytes.
> The MAX AVAIL in "ceph df" is:
> [empty space of an OSD disk with the least empty space] multiplied by
> [the number of OSDs]
> 
> That is not what I am looking for.
> 
> Thanks.
> Sinan
> 
>> # for a specific pool:
>> 
>> ceph osd pool get your_pool_name size
>> 
>> 
>>> On 20 Jul 2018, at 10:32, Sébastien VIGNERON <sebastien.vigne...@criann.fr> wrote:
>>> 
>>> #for all pools:
>>> ceph osd pool ls detail
>>> 
>>> 
>>>> On 20 Jul 2018, at 09:02, si...@turka.nl wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> How can I see the size of a pool? When I create a new empty pool I can
>>>> see the capacity of the pool using 'ceph df', but as I start putting
>>>> data in the pool the capacity is decreasing.
>>>> 
>>>> So the capacity in 'ceph df' is returning the space left on the pool
>>>> and not the 'capacity size'.
>>>> 
>>>> Thanks!
>>>> 
>>>> Sinan
>>>> 
>>> 
>> 
>> 
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
