Forgot the + for the regex.

ceph -s | grep -Eo '[0-9]+ pgs'


David Turner | Cloud Operations Engineer | StorageCraft Technology
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943




From: David Turner
Sent: Thursday, September 22, 2016 9:53 AM
To: Andrus, Brian Contractor
Subject: RE: too many PGs per OSD when pg_num = 256??

How many pools do you have?  How many pgs does your total cluster have, not 
just your rbd pool?

ceph osd lspools
ceph -s | grep -Eo '[0-9] pgs'

My guess is that you have other pools with pgs and the cumulative total of pgs 
per osd is too many.
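The warning is driven by the cluster-wide total, not any single pool: roughly (sum of pg_num across all pools) x (replica size) / (number of OSDs). A back-of-the-envelope sketch, using made-up numbers for illustration -- substitute your own values from `ceph osd lspools`, `ceph osd pool get <pool> pg_num`, `ceph osd pool get <pool> size`, and `ceph osd stat`:

```shell
# Hypothetical cluster figures -- replace with your real numbers.
total_pgs=1024      # sum of pg_num across ALL pools, not just rbd
replica_size=3      # pool size (each PG is counted once per replica)
num_osds=8          # total OSDs in the cluster

# Approximate PGs mapped to each OSD:
pgs_per_osd=$(( total_pgs * replica_size / num_osds ))
echo "$pgs_per_osd PGs per OSD"
```

With these numbers the result is 384, which is over the default mon_pg_warn_max_per_osd of 300 and would trigger exactly this HEALTH_WARN even though no single pool looks oversized.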
From: ceph-users [] on behalf of Andrus, Brian 
Contractor []
Sent: Thursday, September 22, 2016 9:33 AM
Subject: [ceph-users] too many PGs per OSD when pg_num = 256??


I am getting a warning:

     health HEALTH_WARN
            too many PGs per OSD (377 > max 300)
            pool cephfs_data has many more objects per pg than average (too few pgs?)
yet, when I check the settings:
# ceph osd pool get rbd pg_num
pg_num: 256
# ceph osd pool get rbd pgp_num
pgp_num: 256

How does something like this happen?
I did create a radosgw several weeks ago and have put a single file in it for 
testing, but that is it. It only started giving the warning a couple days ago.

Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238
