Re: [ceph-users] too many PGs per OSD when pg_num = 256??

2016-09-22 Thread David Turner
Nothing weird; my earlier numbers were based on incomplete data and rough rounding.
Your cluster has too many pgs and most of your pools will likely need to be
recreated with fewer.  Have you poked around on the pg calc tool?
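
For what it's worth, pg_num can only be increased on an existing pool, which is why
shrinking means recreating.  A rough sketch for a plain RADOS pool (the name mypool
and the 64 are placeholders, and rbd/cephfs pools need more care than a straight copy):

ceph osd pool create mypool-new 64 64
rados cppool mypool mypool-new        # straight object copy; check its caveats first
ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
ceph osd pool rename mypool-new mypool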



David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943



Re: [ceph-users] too many PGs per OSD when pg_num = 256??

2016-09-22 Thread Andrus, Brian Contractor
Hmm. Something happened then. I only have 20 OSDs. What may cause that?

Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238





Re: [ceph-users] too many PGs per OSD when pg_num = 256??

2016-09-22 Thread David Turner
So you have 3,520 pgs.  Assuming all of your pools are using 3 replicas, and 
using the 377 pgs/osd in your health_warn state, that would mean your cluster 
has 28 osds.
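
For reference, the arithmetic behind that guess (it assumes every pool is 3x replicated):

# 3520 pgs x 3 copies = 10560 pg replicas spread across the osds
# 10560 replicas / 377 pgs-per-osd (from the warning) ~= 28 osds
echo $(( 3520 * 3 / 377 ))    # prints 28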

When you calculate how many pgs a pool should have, you need to account for how
many osds you have and what percentage of the cluster's data each pool will hold,
and go from there.  The Ceph PG Calc tool is an excellent resource for figuring out
how many pgs each pool should have; it takes all of those factors into account.
http://ceph.com/pgcalc/
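
As a rough sketch of what the calculator does per pool (the 100 pgs/osd target and the
40% data share below are example numbers, not taken from this thread):

# pg_num ~= (target pgs per osd * osd count * pool's share of data) / replica size,
# rounded to the nearest power of two.  e.g. 20 osds, size 3, a pool holding ~40%:
echo $(( 100 * 20 * 40 / 100 / 3 ))    # ~266, so you would pick 256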



David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943





Re: [ceph-users] too many PGs per OSD when pg_num = 256??

2016-09-22 Thread Andrus, Brian Contractor
David,
I have 15 pools:
# ceph osd lspools|sed 's/,/\n/g'
0 rbd
1 cephfs_data
2 cephfs_metadata
3 vmimages
14 .rgw.root
15 default.rgw.control
16 default.rgw.data.root
17 default.rgw.gc
18 default.rgw.log
19 default.rgw.users.uid
20 default.rgw.users.keys
21 default.rgw.users.email
22 default.rgw.meta
23 default.rgw.buckets.index
24 default.rgw.buckets.data
# ceph -s | grep -Eo '[0-9]+ pgs'
3520 pgs
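
(A quick way to see how those 15 pools add up to 3520, along the lines of the commands
above; a sketch, not something run here:)

for p in $(ceph osd lspools | sed 's/,/\n/g' | awk '{print $2}'); do
    ceph osd pool get "$p" pg_num
done | awk '{sum += $2} END {print sum, "pgs total"}'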



Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238





Re: [ceph-users] too many PGs per OSD when pg_num = 256??

2016-09-22 Thread David Turner
Forgot the + for the regex.

ceph -s | grep -Eo '[0-9]+ pgs'
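
For anyone skimming the archive later, the difference the + makes (sample output assumes
the 3520-pg cluster in this thread):

ceph -s | grep -Eo '[0-9] pgs'     # matches a single digit only, e.g. "0 pgs"
ceph -s | grep -Eo '[0-9]+ pgs'    # matches the whole count, e.g. "3520 pgs"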



David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943





Re: [ceph-users] too many PGs per OSD when pg_num = 256??

2016-09-22 Thread David Turner
How many pools do you have?  How many pgs does your total cluster have, not 
just your rbd pool?

ceph osd lspools
ceph -s | grep -Eo '[0-9] pgs'

My guess is that you have other pools with pgs and the cumulative total of pgs
per osd is too high.
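
A sketch of how to check that cumulative figure directly (it parses ceph osd dump and
assumes the usual pool-line format):

ceph osd dump | awk '/^pool/ {
    for (i = 1; i <= NF; i++) {
        if ($i == "size")   s = $(i+1)
        if ($i == "pg_num") p = $(i+1)
    }
    copies += s * p
} END { print copies, "pg copies; divide by the osd count for pgs per osd" }'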



David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943







From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Andrus, Brian 
Contractor [bdand...@nps.edu]
Sent: Thursday, September 22, 2016 9:33 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] too many PGs per OSD when pg_num = 256??

All,

I am getting a warning:

 health HEALTH_WARN
too many PGs per OSD (377 > max 300)
pool cephfs_data has many more objects per pg than average (too few pgs?)

yet, when I check the settings:
# ceph osd pool get rbd pg_num
pg_num: 256
# ceph osd pool get rbd pgp_num
pgp_num: 256

How does something like this happen?
I did create a radosgw several weeks ago and have put a single file in it for 
testing, but that is it. It only started giving the warning a couple days ago.
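
(Worth noting: radosgw creates its .rgw.root and default.rgw.* pools automatically the
first time it is used, each with its own pg_num, so the per-osd total can climb without
the rbd pool changing.  One way to see what it created, assuming the usual ceph osd dump
pool-line format:)

ceph osd dump | grep -E '^pool.*rgw'    # lists the rgw pools with their pg_num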

Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com