Re: [ceph-users] total storage size available in my CEPH setup?

2017-03-15 Thread Christian Balzer

Hello,

On Wed, 15 Mar 2017 21:36:00 + James Okken wrote:

> Thanks gentlemen,
> 
> I hope to add more OSDs, since we will need a good deal more than 2.3TB, and I
> do want to leave free space / margins.
> 
> I am also thinking of reducing the replication to 2.
> I am sure I can google how to do that, but I am sure most of my results are
> going to be people telling me not to do it.

Mostly for good reasons, but that risk is much diminished with your RAID-backed OSDs.

> Can you direct me to a good tutorial on how to do so?
> 
No such thing, but you must already have changed your configuration, as
your pools are min_size 1, which is not the default.
Changing them to size=2 should do the trick.
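
For illustration, a minimal sketch using your 'volumes' pool as an example;
the same commands apply to every pool:

ceph osd pool get volumes size       # show the current replica count
ceph osd pool get volumes min_size   # minimum replicas required for I/O
ceph osd pool set volumes size 2     # drop to 2 replicas (repeat for images, compute, ...)

The cluster will then discard the excess replicas on its own.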

Christian
> 
> And you're right, I am a beginner.
> 
> James Okken
> Lab Manager
> Dialogic Research Inc.
> 4 Gatehall Drive
> Parsippany
> NJ 07054
> USA
> 
> Tel:   973 967 5179
> Email:   james.ok...@dialogic.com
> Web:    www.dialogic.com – The Network Fuel Company
> 
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
> Maxime Guyot
> Sent: Tuesday, March 14, 2017 7:29 AM
> To: Christian Balzer; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] total storage size available in my CEPH setup?
> 
> Hi,
> 
> >> My question is how much total CEPH storage does this allow me? Only 2.3TB? 
> >> or does the way CEPH duplicates data enable more than 1/3 of the storage?  
> > 3 means 3, so 2.3TB. Note that Ceph is sparse, so that can help quite a bit. 
> >  
> 
> To expand on this, you probably want to keep some margins and not run your 
> cluster at 100% :) (especially if you are running RBD with thin provisioning). 
> By default, “ceph status” will issue a warning at 85% full (osd nearfull 
> ratio). You should also consider that you need some free space for auto 
> healing to work (if you plan to use more than 3 OSDs on a size=3 pool).
> 
> Cheers,
> Maxime 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Rakuten Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] total storage size available in my CEPH setup?

2017-03-15 Thread James Okken
Thanks gentlemen,

I hope to add more OSDs, since we will need a good deal more than 2.3TB, and I do
want to leave free space / margins.

I am also thinking of reducing the replication to 2.
I am sure I can google how to do that, but I am sure most of my results are
going to be people telling me not to do it.
Can you direct me to a good tutorial on how to do so?


And you're right, I am a beginner.

James Okken
Lab Manager
Dialogic Research Inc.
4 Gatehall Drive
Parsippany
NJ 07054
USA

Tel:   973 967 5179
Email:   james.ok...@dialogic.com
Web:    www.dialogic.com – The Network Fuel Company

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Maxime 
Guyot
Sent: Tuesday, March 14, 2017 7:29 AM
To: Christian Balzer; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] total storage size available in my CEPH setup?

Hi,

>> My question is how much total CEPH storage does this allow me? Only 2.3TB? 
>> or does the way CEPH duplicates data enable more than 1/3 of the storage?
> 3 means 3, so 2.3TB. Note that Ceph is sparse, so that can help quite a bit.

To expand on this, you probably want to keep some margins and not run your 
cluster at 100% :) (especially if you are running RBD with thin provisioning). By 
default, “ceph status” will issue a warning at 85% full (osd nearfull ratio). 
You should also consider that you need some free space for auto healing to work 
(if you plan to use more than 3 OSDs on a size=3 pool).

Cheers,
Maxime 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] total storage size available in my CEPH setup?

2017-03-14 Thread Maxime Guyot
Hi,

>> My question is how much total CEPH storage does this allow me? Only 2.3TB? 
>> or does the way CEPH duplicates data enable more than 1/3 of the storage?
> 3 means 3, so 2.3TB. Note that Ceph is sparse, so that can help quite a bit.

To expand on this, you probably want to keep some margins and not run your 
cluster at 100% :) (especially if you are running RBD with thin provisioning). By 
default, “ceph status” will issue a warning at 85% full (osd nearfull ratio). 
You should also consider that you need some free space for auto healing to work 
(if you plan to use more than 3 OSDs on a size=3 pool).
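
A sketch of what to keep an eye on, assuming stock settings:

ceph health detail   # "near full" / "full" warnings are reported per OSD here
ceph osd df          # per-OSD utilisation, to spot the fullest OSD (Hammer and newer)

The 85% threshold is the monitor option "mon osd nearfull ratio" (0.85 by
default, with "mon osd full ratio" at 0.95); you can lower it in ceph.conf
for earlier warnings, but raising it is rarely a good idea.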

Cheers,
Maxime 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] total storage size available in my CEPH setup?

2017-03-13 Thread Christian Balzer

Hello,

On Mon, 13 Mar 2017 21:32:45 + James Okken wrote:

> Hi all,
> 
> I have a 3 storage node openstack setup using CEPH.
> I believe that means I have 3 OSDs, as each storage node has one of 3 fiber 
> channel storage locations mounted.

You use "believe" a lot, so I'm assuming you're quite new and unfamiliar
with Ceph.
Reading the docs and Google are your friends.

> The storage media behind each node is actually a single 7TB HP fiber channel 
> MSA array.
> The best performance configuration for the hard drives in the MSA just 
> happened to be 3x 2.3TB RAID10's. And that matched nicely to the 
> 3xStorageNode/OSD of the CEPH setup.

A pretty unusual approach, not that others (including me) haven't done similar
things.
Having just 3 OSDs is iffy, since there are corner cases where Ceph may
not be able to distribute PGs (using default parameters) with such a small
pool of OSDs. 
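
If you ever hit one of those corner cases it shows up as PGs that never go
active+clean; a quick sketch of how to spot them:

ceph health detail
ceph pg dump_stuck unclean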

> I believe my replication factor is 3.
> 
You answered that yourself in the dump; a
"ceph osd pool ls detail"
would have been more elegant and yielded the same information.

You also have a min_size of 1, which can be problematic (search the
archives), but with a really small cluster that may still be advantageous.
Lastly, since your OSDs are RAID-backed and thus very reliable, a replication
factor of 2 is feasible.

> My question is how much total CEPH storage does this allow me? Only 2.3TB? or 
> does the way CEPH duplicates data enable more than 1/3 of the storage?

3 means 3, so 2.3TB. Note that Ceph is sparse, so that can help quite a bit.
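
Spelled out, ignoring overhead and the nearfull margin:

  raw capacity     : 3 OSDs x ~2.3TB      = ~6.9TB
  usable at size=3 : ~6.9TB / 3 replicas  = ~2.3TB
  usable at size=2 : ~6.9TB / 2 replicas  = ~3.4TB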

> A follow up question would be what is the best way to tell, thru CEPH, the 
> space used and space free? Thanks!!
> 
Well, what do you use with *ix, "df", don'tcha?
"ceph df {detail}"

Christian

> root@node-1:/var/log# ceph osd tree
> ID WEIGHT  TYPE NAMEUP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 6.53998 root default
> -5 2.17999 host node-28
> 3 2.17999 osd.3 up  1.0  1.0
> -6 2.17999 host node-30
> 4 2.17999 osd.4 up  1.0  1.0
> -7 2.17999 host node-31
> 5 2.17999 osd.5 up  1.0  1.0
> 0   0 osd.0   down0  1.0
> 1   0 osd.1   down0  1.0
> 2   0 osd.2   down0  1.0
> 
> 
> 
> ##
> root@node-1:/var/log# ceph osd lspools
> 0 rbd,2 volumes,3 backups,4 .rgw.root,5 .rgw.control,6 .rgw,7 .rgw.gc,8 
> .users.uid,9 .users,10 compute,11 images,
> 
> 
> 
> ##
> root@node-1:/var/log# ceph osd dump
> epoch 216
> fsid d06d61b0-1cd0-4e1a-ac20-67972d0e1fde
> created 2016-10-11 14:15:05.638099
> modified 2017-03-09 14:45:01.030678
> flags
> pool 0 'rbd' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
> rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
> pool 2 'volumes' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
> rjenkins pg_num 64 pgp_num 64 last_change 130 flags hashpspool stripe_width 0
> removed_snaps [1~5]
> pool 3 'backups' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
> rjenkins pg_num 64 pgp_num 64 last_change 14 flags hashpspool stripe_width 0
> pool 4 '.rgw.root' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
> rjenkins pg_num 64 pgp_num 64 last_change 16 flags hashpspool stripe_width 0
> pool 5 '.rgw.control' replicated size 3 min_size 1 crush_ruleset 0 
> object_hash rjenkins pg_num 64 pgp_num 64 last_change 18 owner 
> 18446744073709551615 flags hashpspool stripe_width 0
> pool 6 '.rgw' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
> rjenkins pg_num 64 pgp_num 64 last_change 20 owner 18446744073709551615 flags 
> hashpspool stripe_width 0
> pool 7 '.rgw.gc' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
> rjenkins pg_num 64 pgp_num 64 last_change 21 flags hashpspool stripe_width 0
> pool 8 '.users.uid' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
> rjenkins pg_num 64 pgp_num 64 last_change 22 owner 18446744073709551615 flags 
> hashpspool stripe_width 0
> pool 9 '.users' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
> rjenkins pg_num 64 pgp_num 64 last_change 24 flags hashpspool stripe_width 0
> pool 10 'compute' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
> rjenkins pg_num 64 pgp_num 64 last_change 216 flags hashpspool stripe_width 0
> removed_snaps [1~37]
> pool 11 'images' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
> rjenkins pg_num 64 pgp_num 64 last_change 189 flags hashpspool stripe_width 0
> removed_snaps [1~3,5~8,f~4,14~2,18~2,1c~1,1e~1]
> max_osd 6
> osd.0 down out weight 0 up_from 48 up_thru 50 down_at 52 last_clean_interval 
> [44,45) 192.168.0.9:6800/4485 192.168.1.4:6800/4485 192.168.1.4:6801/4485 
> 192.168.0.9:6801/4485 exists,new
> osd.1 down out weight 0 up_from 10 up_thru 48 down_at 50 last_clean_interval 
> [5,8) 192.168.0.7:6800/60912 192.168.1.6:6801/60912 192.168.1.6:6802/60912 
> 

[ceph-users] total storage size available in my CEPH setup?

2017-03-13 Thread James Okken
Hi all,

I have a 3 storage node openstack setup using CEPH.
I believe that means I have 3 OSDs, as each storage node has one of 3 fiber 
channel storage locations mounted.
The storage media behind each node is actually a single 7TB HP fiber channel MSA 
array.
The best performance configuration for the hard drives in the MSA just happened 
to be 3x 2.3TB RAID10's. And that matched nicely to the 3xStorageNode/OSD of 
the CEPH setup.
I believe my replication factor is 3.

My question is how much total CEPH storage does this allow me? Only 2.3TB? or 
does the way CEPH duplicates data enable more than 1/3 of the storage?
A follow up question would be what is the best way to tell, thru CEPH, the 
space used and space free? Thanks!!

root@node-1:/var/log# ceph osd tree
ID WEIGHT  TYPE NAMEUP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 6.53998 root default
-5 2.17999 host node-28
3 2.17999 osd.3 up  1.0  1.0
-6 2.17999 host node-30
4 2.17999 osd.4 up  1.0  1.0
-7 2.17999 host node-31
5 2.17999 osd.5 up  1.0  1.0
0   0 osd.0   down0  1.0
1   0 osd.1   down0  1.0
2   0 osd.2   down0  1.0



##
root@node-1:/var/log# ceph osd lspools
0 rbd,2 volumes,3 backups,4 .rgw.root,5 .rgw.control,6 .rgw,7 .rgw.gc,8 
.users.uid,9 .users,10 compute,11 images,



##
root@node-1:/var/log# ceph osd dump
epoch 216
fsid d06d61b0-1cd0-4e1a-ac20-67972d0e1fde
created 2016-10-11 14:15:05.638099
modified 2017-03-09 14:45:01.030678
flags
pool 0 'rbd' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins 
pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 2 'volumes' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 64 pgp_num 64 last_change 130 flags hashpspool stripe_width 0
removed_snaps [1~5]
pool 3 'backups' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 64 pgp_num 64 last_change 14 flags hashpspool stripe_width 0
pool 4 '.rgw.root' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 64 pgp_num 64 last_change 16 flags hashpspool stripe_width 0
pool 5 '.rgw.control' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 64 pgp_num 64 last_change 18 owner 18446744073709551615 flags 
hashpspool stripe_width 0
pool 6 '.rgw' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins 
pg_num 64 pgp_num 64 last_change 20 owner 18446744073709551615 flags hashpspool 
stripe_width 0
pool 7 '.rgw.gc' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 64 pgp_num 64 last_change 21 flags hashpspool stripe_width 0
pool 8 '.users.uid' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 64 pgp_num 64 last_change 22 owner 18446744073709551615 flags 
hashpspool stripe_width 0
pool 9 '.users' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 64 pgp_num 64 last_change 24 flags hashpspool stripe_width 0
pool 10 'compute' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 64 pgp_num 64 last_change 216 flags hashpspool stripe_width 0
removed_snaps [1~37]
pool 11 'images' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 64 pgp_num 64 last_change 189 flags hashpspool stripe_width 0
removed_snaps [1~3,5~8,f~4,14~2,18~2,1c~1,1e~1]
max_osd 6
osd.0 down out weight 0 up_from 48 up_thru 50 down_at 52 last_clean_interval 
[44,45) 192.168.0.9:6800/4485 192.168.1.4:6800/4485 192.168.1.4:6801/4485 
192.168.0.9:6801/4485 exists,new
osd.1 down out weight 0 up_from 10 up_thru 48 down_at 50 last_clean_interval 
[5,8) 192.168.0.7:6800/60912 192.168.1.6:6801/60912 192.168.1.6:6802/60912 
192.168.0.7:6801/60912 exists,new
osd.2 down out weight 0 up_from 10 up_thru 48 down_at 50 last_clean_interval 
[5,8) 192.168.0.6:6800/61013 192.168.1.7:6800/61013 192.168.1.7:6801/61013 
192.168.0.6:6801/61013 exists,new
osd.3 up   in  weight 1 up_from 192 up_thru 201 down_at 190 last_clean_interval 
[83,191) 192.168.0.9:6800/2634194 192.168.1.7:6802/3634194 
192.168.1.7:6803/3634194 192.168.0.9:6802/3634194 exists,up 
28b02052-3196-4203-bec8-ac83a69fcbc5
osd.4 up   in  weight 1 up_from 196 up_thru 201 down_at 194 last_clean_interval 
[80,195) 192.168.0.7:6800/2629319 192.168.1.6:6802/3629319 
192.168.1.6:6803/3629319 192.168.0.7:6802/3629319 exists,up 
124b58e6-1e38-4246-8838-cfc3b88e8a5a
osd.5 up   in  weight 1 up_from 201 up_thru 201 down_at 199 last_clean_interval 
[134,200) 192.168.0.6:6800/5494 192.168.1.4:6802/1005494 
192.168.1.4:6803/1005494 192.168.0.6:6802/1005494 exists,up 
ddfca14e-e6f6-4c48-aa8f-0ebfc765d32f
root@node-1:/var/log#


James Okken
Lab Manager
Dialogic Research Inc.
4 Gatehall Drive
Parsippany
NJ 07054
USA

Tel:   973 967 5179
Email: