Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
Update!

Yeah, that was the problem. I zapped the disks (purge) and re-created them
according to the official documentation. Now everything is OK.

I can see all disks and their total sizes properly.

Let's see if this brings any performance improvement compared to the previous
standard schema (using Jewel).
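
For reference, the recreate went roughly like this (hostnames are from my
setup, repeated for /dev/sdc ... /dev/sdk on both servers; treat it as a
sketch rather than the exact commands from the docs):

$ ceph-deploy disk zap sr-09-01-18:/dev/sdb sr-10-01-18:/dev/sdb
$ ceph-deploy osd create sr-09-01-18:/dev/sdb sr-10-01-18:/dev/sdb

The zap wipes the old partition table and filesystem signatures, and osd
create (prepare + activate in one step) gets the whole disk this time, not
/dev/sdb1.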

Thanks!
Gencer.

-Original Message-
From: Wido den Hollander [mailto:w...@42on.com] 
Sent: Monday, July 17, 2017 6:17 PM
To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
Subject: RE: [ceph-users] Ceph (Luminous) shows total_space wrong


> On 17 July 2017 at 17:03, gen...@gencgiyen.com wrote:
> 
> 
> I used this method:
> 
> $ ceph-deploy osd prepare sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1
>  (one from the 09th server, one from the 10th server...)
> 
> and then;
> 
> $ ceph-deploy osd activate sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1 ...
> 

You should use a whole disk, not a partition. So /dev/sdb without the '1'  at 
the end.

> This is my second creation of the ceph cluster. The first time I used
> BlueStore; this time I did not (I also removed it from the conf file). It is
> still seen as 200GB.
> 
> How can I make sure BlueStore is disabled (even though I did not put any option for it)?
> 

Just use BlueStore with Luminous as all testing is welcome! But in this case 
you invoked the command with the wrong parameters.

Wido

> -Gencer.
> 
> -Original Message-
> From: Wido den Hollander [mailto:w...@42on.com]
> Sent: Monday, July 17, 2017 5:57 PM
> To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
> Subject: RE: [ceph-users] Ceph (Luminous) shows total_space wrong
> 
> 
> > On 17 July 2017 at 16:41, gen...@gencgiyen.com wrote:
> > 
> > 
> > Hi Wido,
> > 
> > Each disk is 3TB SATA (2.8TB seen) but what I got is this:
> > 
> > First, let me give you df -h:
> > 
> > /dev/sdb1   2.8T  754M  2.8T   1% /var/lib/ceph/osd/ceph-0
> > /dev/sdc1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-2
> > /dev/sdd1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-4
> > /dev/sde1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-6
> > /dev/sdf1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-8
> > /dev/sdg1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-10
> > /dev/sdh1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-12
> > /dev/sdi1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-14
> > /dev/sdj1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-16
> > /dev/sdk1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-18
> > 
> > 
> > Then here are my results from the ceph df commands:
> > 
> > ceph df
> > 
> > GLOBAL:
> > SIZE AVAIL RAW USED %RAW USED
> > 200G  179G   21381M 10.44
> > POOLS:
> > NAME            ID USED %USED MAX AVAIL OBJECTS
> > rbd              0    0     0    86579M       0
> > cephfs_data      1    0     0    86579M       0
> > cephfs_metadata  2 2488     0    86579M      21
> > 
> 
> Ok, that's odd. But I think these disks are using BlueStore since that's what 
> Luminous defaults to.
> 
> The partitions seem to be mixed up, so can you check on how you created the 
> OSDs? Was that with ceph-disk? If so, what additional arguments did you use?
> 
> Wido
> 
> > ceph osd df
> > ID WEIGHT  REWEIGHT SIZE   USE   AVAIL %USE  VAR  PGS
> >  0 0.00980  1.0 10240M  1070M 9170M 10.45 1.00 173
> >  2 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 150
> >  4 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 148
> >  6 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 167
> >  8 0.00980  1.0 10240M  1069M 9171M 10.44 1.00 166
> > 10 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 171
> > 12 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 160
> > 14 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> > 16 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 182
> > 18 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 168
> >  1 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 167
> >  3 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 156
> >  5 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 152
> >  7 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 158
> >  9 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 174
> > 11 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 153
> > 13 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> > 15 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 186
> > 17 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 185
> > 19 0.00980  1.0 10240M  1067M 9172M 10.43 1.00 154

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
When I use /dev/sdb or /dev/sdc (the whole disk), I get errors like this:

ceph_disk.main.FilesystemTypeError: Cannot discover filesystem type: device 
/dev/sdb: Line is truncated:
  RuntimeError: command returned non-zero exit status: 1
  RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate 
--mark-init systemd --mount /dev/sdb

Are you sure that we need to remove the "1" at the end?

Can you point me to any doc for this? Ceph's own documentation also
shows sdb1, sdc1...

If you have any sample, I will be very happy :)

-Gencer.

-Original Message-
From: Wido den Hollander [mailto:w...@42on.com] 
Sent: Monday, July 17, 2017 6:17 PM
To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
Subject: RE: [ceph-users] Ceph (Luminous) shows total_space wrong


> On 17 July 2017 at 17:03, gen...@gencgiyen.com wrote:
> 
> 
> I used this method:
> 
> $ ceph-deploy osd prepare sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1
>  (one from the 09th server, one from the 10th server...)
> 
> and then;
> 
> $ ceph-deploy osd activate sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1 ...
> 

You should use a whole disk, not a partition. So /dev/sdb without the '1'  at 
the end.

> This is my second creation of the ceph cluster. The first time I used
> BlueStore; this time I did not (I also removed it from the conf file). It is
> still seen as 200GB.
> 
> How can I make sure BlueStore is disabled (even though I did not put any option for it)?
> 

Just use BlueStore with Luminous as all testing is welcome! But in this case 
you invoked the command with the wrong parameters.

Wido

> -Gencer.
> 
> -Original Message-
> From: Wido den Hollander [mailto:w...@42on.com]
> Sent: Monday, July 17, 2017 5:57 PM
> To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
> Subject: RE: [ceph-users] Ceph (Luminous) shows total_space wrong
> 
> 
> > On 17 July 2017 at 16:41, gen...@gencgiyen.com wrote:
> > 
> > 
> > Hi Wido,
> > 
> > Each disk is 3TB SATA (2.8TB seen) but what I got is this:
> > 
> > First, let me give you df -h:
> > 
> > /dev/sdb1   2.8T  754M  2.8T   1% /var/lib/ceph/osd/ceph-0
> > /dev/sdc1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-2
> > /dev/sdd1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-4
> > /dev/sde1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-6
> > /dev/sdf1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-8
> > /dev/sdg1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-10
> > /dev/sdh1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-12
> > /dev/sdi1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-14
> > /dev/sdj1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-16
> > /dev/sdk1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-18
> > 
> > 
> > Then here are my results from the ceph df commands:
> > 
> > ceph df
> > 
> > GLOBAL:
> > SIZE AVAIL RAW USED %RAW USED
> > 200G  179G   21381M 10.44
> > POOLS:
> > NAME            ID USED %USED MAX AVAIL OBJECTS
> > rbd              0    0     0    86579M       0
> > cephfs_data      1    0     0    86579M       0
> > cephfs_metadata  2 2488     0    86579M      21
> > 
> 
> Ok, that's odd. But I think these disks are using BlueStore since that's what 
> Luminous defaults to.
> 
> The partitions seem to be mixed up, so can you check on how you created the 
> OSDs? Was that with ceph-disk? If so, what additional arguments did you use?
> 
> Wido
> 
> > ceph osd df
> > ID WEIGHT  REWEIGHT SIZE   USE   AVAIL %USE  VAR  PGS
> >  0 0.00980  1.0 10240M  1070M 9170M 10.45 1.00 173
> >  2 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 150
> >  4 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 148
> >  6 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 167
> >  8 0.00980  1.0 10240M  1069M 9171M 10.44 1.00 166
> > 10 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 171
> > 12 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 160
> > 14 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> > 16 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 182
> > 18 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 168
> >  1 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 167
> >  3 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 156
> >  5 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 152
> >  7 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 158
> >  9 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 174
> > 11 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 153

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread Wido den Hollander

> On 17 July 2017 at 17:03, gen...@gencgiyen.com wrote:
> 
> 
> I used this method:
> 
> $ ceph-deploy osd prepare sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1
> (one from the 09th server, one from the 10th server...)
> 
> and then;
> 
> $ ceph-deploy osd activate sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1 ...
> 

You should use a whole disk, not a partition. So /dev/sdb without the '1'  at 
the end.
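
For example, roughly (a sketch using your hostnames, not verified against
your setup):

$ ceph-deploy osd prepare sr-09-01-18:/dev/sdb sr-10-01-18:/dev/sdb
$ ceph-deploy osd activate sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1

With a whole disk, ceph-disk creates the partitions itself; activate is then
pointed at the small data partition it made (usually the first one). If the
disks were used before, zap them first so no old partition table is left
behind.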

> This is my second creation of the ceph cluster. The first time I used
> BlueStore; this time I did not (I also removed it from the conf file). It is
> still seen as 200GB.
> 
> How can I make sure BlueStore is disabled (even though I did not put any option for it)?
> 

Just use BlueStore with Luminous as all testing is welcome! But in this case 
you invoked the command with the wrong parameters.
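
If you want to verify which object store an OSD is actually running, the OSD
metadata should show it, e.g. (using osd.0 as an example ID):

$ ceph osd metadata 0 | grep osd_objectstore

As for the 200G: each of your OSDs reports a SIZE of 10240M, i.e. 10GB, and
20 x 10GB = 200GB. That 10GB is most likely the default bluestore_block_size,
which is what you end up with when BlueStore gets a file-backed block device
instead of the raw disk; that again points at the /dev/sdb1 invocation.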

Wido

> -Gencer.
> 
> -Original Message-
> From: Wido den Hollander [mailto:w...@42on.com] 
> Sent: Monday, July 17, 2017 5:57 PM
> To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
> Subject: RE: [ceph-users] Ceph (Luminous) shows total_space wrong
> 
> 
> > On 17 July 2017 at 16:41, gen...@gencgiyen.com wrote:
> > 
> > 
> > Hi Wido,
> > 
> > Each disk is 3TB SATA (2.8TB seen) but what I got is this:
> > 
> > First, let me give you df -h:
> > 
> > /dev/sdb1   2.8T  754M  2.8T   1% /var/lib/ceph/osd/ceph-0
> > /dev/sdc1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-2
> > /dev/sdd1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-4
> > /dev/sde1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-6
> > /dev/sdf1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-8
> > /dev/sdg1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-10
> > /dev/sdh1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-12
> > /dev/sdi1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-14
> > /dev/sdj1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-16
> > /dev/sdk1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-18
> > 
> > 
> > Then here are my results from the ceph df commands:
> > 
> > ceph df
> > 
> > GLOBAL:
> > SIZE AVAIL RAW USED %RAW USED
> > 200G  179G   21381M 10.44
> > POOLS:
> > NAME            ID USED %USED MAX AVAIL OBJECTS
> > rbd              0    0     0    86579M       0
> > cephfs_data      1    0     0    86579M       0
> > cephfs_metadata  2 2488     0    86579M      21
> > 
> 
> Ok, that's odd. But I think these disks are using BlueStore since that's what 
> Luminous defaults to.
> 
> The partitions seem to be mixed up, so can you check on how you created the 
> OSDs? Was that with ceph-disk? If so, what additional arguments did you use?
> 
> Wido
> 
> > ceph osd df
> > ID WEIGHT  REWEIGHT SIZE   USE   AVAIL %USE  VAR  PGS
> >  0 0.00980  1.0 10240M  1070M 9170M 10.45 1.00 173
> >  2 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 150
> >  4 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 148
> >  6 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 167
> >  8 0.00980  1.0 10240M  1069M 9171M 10.44 1.00 166
> > 10 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 171
> > 12 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 160
> > 14 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> > 16 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 182
> > 18 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 168
> >  1 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 167
> >  3 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 156
> >  5 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 152
> >  7 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 158
> >  9 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 174
> > 11 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 153
> > 13 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> > 15 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 186
> > 17 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 185
> > 19 0.00980  1.0 10240M  1067M 9172M 10.43 1.00 154
> >   TOTAL   200G 21381M  179G 10.44
> > MIN/MAX VAR: 1.00/1.00  STDDEV: 0.00
> > 
> > 
> > -Gencer.
> > 
> > -Original Message-
> > From: Wido den Hollander [mailto:w...@42on.com]
> > Sent: Monday, July 17, 2017 4:57 PM
> > To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
> > Subject: Re: [ceph-users] Ceph (Luminous) shows total_space wrong
> > 
> > 
> > > On 17 July 2017 at 15:49, gen...@gencgiyen.com wrote:
> > >

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
Also, one more thing: if I want to use BlueStore, how do I let it know that I
have more space? Do I need to specify a size at any point?

-Gencer.

-Original Message-
From: gen...@gencgiyen.com [mailto:gen...@gencgiyen.com] 
Sent: Monday, July 17, 2017 6:04 PM
To: 'Wido den Hollander' <w...@42on.com>; 'ceph-users@lists.ceph.com' 
<ceph-users@lists.ceph.com>
Subject: RE: [ceph-users] Ceph (Luminous) shows total_space wrong

I used this method:

$ ceph-deploy osd prepare sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1  (one
from the 09th server, one from the 10th server...)

and then;

$ ceph-deploy osd activate sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1 ...

This is my second creation of the ceph cluster. The first time I used BlueStore;
this time I did not (I also removed it from the conf file). It is still seen as 200GB.

How can I make sure BlueStore is disabled (even though I did not put any option for it)?

-Gencer.

-Original Message-
From: Wido den Hollander [mailto:w...@42on.com] 
Sent: Monday, July 17, 2017 5:57 PM
To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
Subject: RE: [ceph-users] Ceph (Luminous) shows total_space wrong


> On 17 July 2017 at 16:41, gen...@gencgiyen.com wrote:
> 
> 
> Hi Wido,
> 
> Each disk is 3TB SATA (2.8TB seen) but what I got is this:
> 
> First, let me give you df -h:
> 
> /dev/sdb1   2.8T  754M  2.8T   1% /var/lib/ceph/osd/ceph-0
> /dev/sdc1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-2
> /dev/sdd1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-4
> /dev/sde1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-6
> /dev/sdf1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-8
> /dev/sdg1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-10
> /dev/sdh1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-12
> /dev/sdi1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-14
> /dev/sdj1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-16
> /dev/sdk1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-18
> 
> 
> Then here are my results from the ceph df commands:
> 
> ceph df
> 
> GLOBAL:
> SIZE AVAIL RAW USED %RAW USED
> 200G  179G   21381M 10.44
> POOLS:
> NAME            ID USED %USED MAX AVAIL OBJECTS
> rbd              0    0     0    86579M       0
> cephfs_data      1    0     0    86579M       0
> cephfs_metadata  2 2488     0    86579M      21
> 

Ok, that's odd. But I think these disks are using BlueStore since that's what 
Luminous defaults to.

The partitions seem to be mixed up, so can you check on how you created the 
OSDs? Was that with ceph-disk? If so, what additional arguments did you use?

Wido

> ceph osd df
> ID WEIGHT  REWEIGHT SIZE   USE   AVAIL %USE  VAR  PGS
>  0 0.00980  1.0 10240M  1070M 9170M 10.45 1.00 173
>  2 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 150
>  4 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 148
>  6 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 167
>  8 0.00980  1.0 10240M  1069M 9171M 10.44 1.00 166
> 10 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 171
> 12 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 160
> 14 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> 16 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 182
> 18 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 168
>  1 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 167
>  3 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 156
>  5 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 152
>  7 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 158
>  9 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 174
> 11 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 153
> 13 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> 15 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 186
> 17 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 185
> 19 0.00980  1.0 10240M  1067M 9172M 10.43 1.00 154
>   TOTAL   200G 21381M  179G 10.44
> MIN/MAX VAR: 1.00/1.00  STDDEV: 0.00
> 
> 
> -Gencer.
> 
> -----Original Message-
> From: Wido den Hollander [mailto:w...@42on.com]
> Sent: Monday, July 17, 2017 4:57 PM
> To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
> Subject: Re: [ceph-users] Ceph (Luminous) shows total_space wrong
> 
> 
> > On 17 July 2017 at 15:49, gen...@gencgiyen.com wrote:
> > 
> > 
> > Hi,
> > 
> >  
> > 
> > I successfully managed to get Ceph working with Jewel, and now I want to try Luminous.
> > 
> >  
> > 
> > I also set experimental BlueStore while creating the OSDs. The problem is, I
> > have 20x3TB HDDs in two nodes and I would expect 55TB usable (as on
> > Jewel) on Luminous, but I see 200GB.

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
I used this method:

$ ceph-deploy osd prepare sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1  (one
from the 09th server, one from the 10th server...)

and then;

$ ceph-deploy osd activate sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1 ...

This is my second creation of the ceph cluster. The first time I used BlueStore;
this time I did not (I also removed it from the conf file). It is still seen as 200GB.

How can I make sure BlueStore is disabled (even though I did not put any option for it)?

-Gencer.

-Original Message-
From: Wido den Hollander [mailto:w...@42on.com] 
Sent: Monday, July 17, 2017 5:57 PM
To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
Subject: RE: [ceph-users] Ceph (Luminous) shows total_space wrong


> On 17 July 2017 at 16:41, gen...@gencgiyen.com wrote:
> 
> 
> Hi Wido,
> 
> Each disk is 3TB SATA (2.8TB seen) but what I got is this:
> 
> First, let me give you df -h:
> 
> /dev/sdb1   2.8T  754M  2.8T   1% /var/lib/ceph/osd/ceph-0
> /dev/sdc1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-2
> /dev/sdd1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-4
> /dev/sde1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-6
> /dev/sdf1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-8
> /dev/sdg1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-10
> /dev/sdh1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-12
> /dev/sdi1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-14
> /dev/sdj1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-16
> /dev/sdk1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-18
> 
> 
> Then here are my results from the ceph df commands:
> 
> ceph df
> 
> GLOBAL:
> SIZE AVAIL RAW USED %RAW USED
> 200G  179G   21381M 10.44
> POOLS:
> NAME            ID USED %USED MAX AVAIL OBJECTS
> rbd              0    0     0    86579M       0
> cephfs_data      1    0     0    86579M       0
> cephfs_metadata  2 2488     0    86579M      21
> 

Ok, that's odd. But I think these disks are using BlueStore since that's what 
Luminous defaults to.

The partitions seem to be mixed up, so can you check on how you created the 
OSDs? Was that with ceph-disk? If so, what additional arguments did you use?

Wido

> ceph osd df
> ID WEIGHT  REWEIGHT SIZE   USE   AVAIL %USE  VAR  PGS
>  0 0.00980  1.0 10240M  1070M 9170M 10.45 1.00 173
>  2 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 150
>  4 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 148
>  6 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 167
>  8 0.00980  1.0 10240M  1069M 9171M 10.44 1.00 166
> 10 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 171
> 12 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 160
> 14 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> 16 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 182
> 18 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 168
>  1 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 167
>  3 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 156
>  5 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 152
>  7 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 158
>  9 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 174
> 11 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 153
> 13 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> 15 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 186
> 17 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 185
> 19 0.00980  1.0 10240M  1067M 9172M 10.43 1.00 154
>   TOTAL   200G 21381M  179G 10.44
> MIN/MAX VAR: 1.00/1.00  STDDEV: 0.00
> 
> 
> -Gencer.
> 
> -Original Message-----
> From: Wido den Hollander [mailto:w...@42on.com]
> Sent: Monday, July 17, 2017 4:57 PM
> To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
> Subject: Re: [ceph-users] Ceph (Luminous) shows total_space wrong
> 
> 
> > On 17 July 2017 at 15:49, gen...@gencgiyen.com wrote:
> > 
> > 
> > Hi,
> > 
> >  
> > 
> > I successfully managed to get Ceph working with Jewel, and now I want to try Luminous.
> > 
> >  
> > 
> > I also set experimental BlueStore while creating the OSDs. The problem is, I
> > have 20x3TB HDDs in two nodes and I would expect 55TB usable (as on
> > Jewel) on Luminous, but I see 200GB. Ceph thinks I have only 200GB of
> > space available in total. I see all OSDs are up and in.
> > 
> >  
> > 
> > 20 osd up; 20 osd in. 0 down.
> > 
> >  
> > 
> > ceph -s shows HEALTH_OK. I have only one monitor and one MDS
> > (1/1/1), and it is up:active.
> > 
> >  
> > 
> > ceph osd tree shows me that all OSDs in both nodes are up and the results are 
> > 1.

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread Wido den Hollander

> On 17 July 2017 at 16:41, gen...@gencgiyen.com wrote:
> 
> 
> Hi Wido,
> 
> Each disk is 3TB SATA (2.8TB seen) but what I got is this:
> 
> First, let me give you df -h:
> 
> /dev/sdb1   2.8T  754M  2.8T   1% /var/lib/ceph/osd/ceph-0
> /dev/sdc1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-2
> /dev/sdd1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-4
> /dev/sde1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-6
> /dev/sdf1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-8
> /dev/sdg1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-10
> /dev/sdh1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-12
> /dev/sdi1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-14
> /dev/sdj1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-16
> /dev/sdk1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-18
> 
> 
> Then here are my results from the ceph df commands:
> 
> ceph df
> 
> GLOBAL:
> SIZE AVAIL RAW USED %RAW USED
> 200G  179G   21381M 10.44
> POOLS:
> NAME            ID USED %USED MAX AVAIL OBJECTS
> rbd              0    0     0    86579M       0
> cephfs_data      1    0     0    86579M       0
> cephfs_metadata  2 2488     0    86579M      21
> 

Ok, that's odd. But I think these disks are using BlueStore since that's what 
Luminous defaults to.

The partitions seem to be mixed up, so can you check on how you created the 
OSDs? Was that with ceph-disk? If so, what additional arguments did you use?

Wido

> ceph osd df
> ID WEIGHT  REWEIGHT SIZE   USE   AVAIL %USE  VAR  PGS
>  0 0.00980  1.0 10240M  1070M 9170M 10.45 1.00 173
>  2 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 150
>  4 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 148
>  6 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 167
>  8 0.00980  1.0 10240M  1069M 9171M 10.44 1.00 166
> 10 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 171
> 12 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 160
> 14 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> 16 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 182
> 18 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 168
>  1 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 167
>  3 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 156
>  5 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 152
>  7 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 158
>  9 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 174
> 11 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 153
> 13 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> 15 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 186
> 17 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 185
> 19 0.00980  1.0 10240M  1067M 9172M 10.43 1.00 154
>   TOTAL   200G 21381M  179G 10.44
> MIN/MAX VAR: 1.00/1.00  STDDEV: 0.00
> 
> 
> -Gencer.
> 
> -----Original Message-----
> From: Wido den Hollander [mailto:w...@42on.com] 
> Sent: Monday, July 17, 2017 4:57 PM
> To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
> Subject: Re: [ceph-users] Ceph (Luminous) shows total_space wrong
> 
> 
> > On 17 July 2017 at 15:49, gen...@gencgiyen.com wrote:
> > 
> > 
> > Hi,
> > 
> >  
> > 
> > I successfully managed to get Ceph working with Jewel, and now I want to try Luminous.
> > 
> >  
> > 
> > I also set experimental BlueStore while creating the OSDs. The problem is, I
> > have 20x3TB HDDs in two nodes and I would expect 55TB usable (as on
> > Jewel) on Luminous, but I see 200GB. Ceph thinks I have only 200GB of
> > space available in total. I see all OSDs are up and in.
> > 
> >  
> > 
> > 20 osd up; 20 osd in. 0 down.
> > 
> >  
> > 
> > ceph -s shows HEALTH_OK. I have only one monitor and one MDS (1/1/1),
> > and it is up:active.
> > 
> >  
> > 
> > ceph osd tree shows me that all OSDs in both nodes are up and the results are
> > 1.... I checked via df -h and all disks show 2.7TB, so basically
> > something is wrong.
> > The same settings and schema I followed were successful on Jewel but not on Luminous.
> > 
> 
> What do these commands show:
> 
> - ceph df
> - ceph osd df
> 
> Might be that you are looking at the wrong numbers.
> 
> Wido
> 
> >  
> > 
> > What might it be?
> > 
> >  
> > 
> > What do you need to know to solve this problem? Why ceph thinks I have 
> > 200GB space only?
> > 
> >  
> > 
> > Thanks,
> > 
> > Gencer.
> > 
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
Hi Wido,

Each disk is 3TB SATA (2.8TB seen) but what I got is this:

First, let me give you df -h:

/dev/sdb1   2.8T  754M  2.8T   1% /var/lib/ceph/osd/ceph-0
/dev/sdc1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-2
/dev/sdd1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-4
/dev/sde1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-6
/dev/sdf1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-8
/dev/sdg1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-10
/dev/sdh1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-12
/dev/sdi1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-14
/dev/sdj1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-16
/dev/sdk1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-18


Then here are my results from the ceph df commands:

ceph df

GLOBAL:
SIZE AVAIL RAW USED %RAW USED
200G  179G   21381M 10.44
POOLS:
NAME            ID USED %USED MAX AVAIL OBJECTS
rbd              0    0     0    86579M       0
cephfs_data      1    0     0    86579M       0
cephfs_metadata  2 2488     0    86579M      21

ceph osd df
ID WEIGHT  REWEIGHT SIZE   USE   AVAIL %USE  VAR  PGS
 0 0.00980  1.0 10240M  1070M 9170M 10.45 1.00 173
 2 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 150
 4 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 148
 6 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 167
 8 0.00980  1.0 10240M  1069M 9171M 10.44 1.00 166
10 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 171
12 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 160
14 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
16 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 182
18 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 168
 1 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 167
 3 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 156
 5 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 152
 7 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 158
 9 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 174
11 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 153
13 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
15 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 186
17 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 185
19 0.00980  1.0 10240M  1067M 9172M 10.43 1.00 154
  TOTAL   200G 21381M  179G 10.44
MIN/MAX VAR: 1.00/1.00  STDDEV: 0.00


-Gencer.

-Original Message-
From: Wido den Hollander [mailto:w...@42on.com] 
Sent: Monday, July 17, 2017 4:57 PM
To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
Subject: Re: [ceph-users] Ceph (Luminous) shows total_space wrong


> On 17 July 2017 at 15:49, gen...@gencgiyen.com wrote:
> 
> 
> Hi,
> 
>  
> 
> I successfully managed to get Ceph working with Jewel, and now I want to try Luminous.
> 
>  
> 
> I also set experimental BlueStore while creating the OSDs. The problem is, I
> have 20x3TB HDDs in two nodes and I would expect 55TB usable (as on
> Jewel) on Luminous, but I see 200GB. Ceph thinks I have only 200GB of
> space available in total. I see all OSDs are up and in.
> 
>  
> 
> 20 osd up; 20 osd in. 0 down.
> 
>  
> 
> ceph -s shows HEALTH_OK. I have only one monitor and one MDS (1/1/1),
> and it is up:active.
> 
>  
> 
> ceph osd tree shows me that all OSDs in both nodes are up and the results are
> 1.... I checked via df -h and all disks show 2.7TB, so something
> is wrong.
> The same settings and schema I followed were successful on Jewel but not on Luminous.
> 

What do these commands show:

- ceph df
- ceph osd df

Might be that you are looking at the wrong numbers.

Wido

>  
> 
> What might it be?
> 
>  
> 
> What do you need to know to solve this problem? Why ceph thinks I have 
> 200GB space only?
> 
>  
> 
> Thanks,
> 
> Gencer.
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread Wido den Hollander

> On 17 July 2017 at 15:49, gen...@gencgiyen.com wrote:
> 
> 
> Hi,
> 
>  
> 
> I successfully managed to get Ceph working with Jewel, and now I want to try Luminous.
> 
>  
> 
> I also set experimental BlueStore while creating the OSDs. The problem is, I have
> 20x3TB HDDs in two nodes and I would expect 55TB usable (as on Jewel) on
> Luminous, but I see 200GB. Ceph thinks I have only 200GB of space available in
> total. I see all OSDs are up and in.
> 
>  
> 
> 20 osd up; 20 osd in. 0 down.
> 
>  
> 
> ceph -s shows HEALTH_OK. I have only one monitor and one MDS (1/1/1), and it
> is up:active.
> 
>  
> 
> ceph osd tree shows me that all OSDs in both nodes are up and the results are 1.... I
> checked via df -h and all disks show 2.7TB, so something is wrong.
> The same settings and schema I followed were successful on Jewel but not on Luminous.
> 

What do these commands show:

- ceph df
- ceph osd df

Might be that you are looking at the wrong numbers.

Wido

>  
> 
> What might it be?
> 
>  
> 
> What do you need to know to solve this problem? Why ceph thinks I have 200GB
> space only?
> 
>  
> 
> Thanks,
> 
> Gencer.
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
Hi,

 

I successfully managed to get Ceph working with Jewel, and now I want to try Luminous.

 

I also set experimental BlueStore while creating the OSDs. The problem is, I have
20x3TB HDDs in two nodes and I would expect 55TB usable (as on Jewel) on
Luminous, but I see 200GB. Ceph thinks I have only 200GB of space available in
total. I see all OSDs are up and in.

 

20 osd up; 20 osd in. 0 down.

 

ceph -s shows HEALTH_OK. I have only one monitor and one MDS (1/1/1), and it
is up:active.

 

ceph osd tree shows me that all OSDs in both nodes are up and the results are 1.... I
checked via df -h and all disks show 2.7TB, so something is wrong.
The same settings and schema I followed were successful on Jewel but not on Luminous.

 

What might it be?

 

What do you need to know to solve this problem? Why ceph thinks I have 200GB
space only?

 

Thanks,

Gencer.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com