Re: [ceph-users] PGs issue

2015-03-20 Thread Bogdan SOLGA
Hello, Nick!

Thank you for your reply! I have tested with the number of replicas set to
both 2 and 3, by setting 'osd pool default size = (2|3)' in the .conf file.
Either I'm doing something incorrectly, or both settings produce the same
result.
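
For reference, this is roughly what I mean (a minimal sketch; the pool name
'rbd' below is just the default pool created by the quick-start deploy):

    # ceph.conf, [global] section
    osd pool default size = 2

    # check the replica count actually applied to the pool
    ceph osd pool get rbd size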

Can you give any troubleshooting advice? I have purged and re-created the
cluster several times, but the result is the same.

Thank you for your help!

Regards,
Bogdan


On Thu, Mar 19, 2015 at 11:29 PM, Nick Fisk n...@fisk.me.uk wrote:





  -Original Message-
  From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
  Bogdan SOLGA
  Sent: 19 March 2015 20:51
  To: ceph-users@lists.ceph.com
  Subject: [ceph-users] PGs issue
 
  Hello, everyone!
  I have created a Ceph cluster (v0.87.1-1) using the info from the 'Quick
  deploy' page, with the following setup:
  • 1 x admin / deploy node;
  • 3 x OSD and MON nodes;
  o each OSD node has 2 x 8 GB HDDs;
  The setup was made using Virtual Box images, on Ubuntu 14.04.2.
  After performing all the steps, the 'ceph health' output lists the cluster
  in the HEALTH_WARN state, with the following details:
  HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded; 64 pgs stuck unclean;
  64 pgs stuck undersized; 64 pgs undersized; too few pgs per osd (10 < min 20)
  The output of 'ceph -s':
      cluster b483bc59-c95e-44b1-8f8d-86d3feffcfab
       health HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded; 64 pgs
              stuck unclean; 64 pgs stuck undersized; 64 pgs undersized;
              too few pgs per osd (10 < min 20)
       monmap e1: 3 mons at {osd-003=192.168.122.23:6789/0,osd-002=192.168.122.22:6789/0,osd-001=192.168.122.21:6789/0},
              election epoch 6, quorum 0,1,2 osd-001,osd-002,osd-003
       osdmap e20: 6 osds: 6 up, 6 in
        pgmap v36: 64 pgs, 1 pools, 0 bytes data, 0 objects
              199 MB used, 18166 MB / 18365 MB avail
                    64 active+undersized+degraded

  I have tried to increase the pg_num and pgp_num to 512, as advised here,
  but Ceph refused to do that, with the following error:
  Error E2BIG: specified pg_num 512 is too large (creating 384 new PGs on ~6
  OSDs exceeds per-OSD max of 32)

  After changing the pg*_num to 256, as advised here, the warning was changed to:
  health HEALTH_WARN 256 pgs degraded; 256 pgs stuck unclean; 256 pgs undersized

  What is the issue behind these warnings, and what do I need to do to fix it?

 It's basically telling you that your currently available OSDs don't meet the
 requirements for the number of replicas you have requested.

 What replica size have you configured for that pool?

 
  I'm a newcomer in the Ceph world, so please don't shoot me if this issue
 has
  been answered / discussed countless times before :) I have searched the
  web and the mailing list for the answers, but I couldn't find a valid
 solution.
  Any help is highly appreciated. Thank you!
  Regards,
  Bogdan





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] PGs issue

2015-03-20 Thread Sahana
HI Bogdan,


Please paste the output of `ceph osd dump` and `ceph osd tree`.

Thanks
Sahana

On Fri, Mar 20, 2015 at 11:47 AM, Bogdan SOLGA bogdan.so...@gmail.com
wrote:

 Hello, Nick!

 Thank you for your reply! I have tested with the number of replicas set to
 both 2 and 3, by setting 'osd pool default size = (2|3)' in the .conf file.
 Either I'm doing something incorrectly, or both settings produce the same
 result.

 Can you give any troubleshooting advice? I have purged and re-created the
 cluster several times, but the result is the same.

 Thank you for your help!

 Regards,
 Bogdan


 On Thu, Mar 19, 2015 at 11:29 PM, Nick Fisk n...@fisk.me.uk wrote:





  -Original Message-
  From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
 Of
  Bogdan SOLGA
  Sent: 19 March 2015 20:51
  To: ceph-users@lists.ceph.com
  Subject: [ceph-users] PGs issue
 
  Hello, everyone!
  I have created a Ceph cluster (v0.87.1-1) using the info from the 'Quick
  deploy' page, with the following setup:
  • 1 x admin / deploy node;
  • 3 x OSD and MON nodes;
  o each OSD node has 2 x 8 GB HDDs;

  The setup was made using Virtual Box images, on Ubuntu 14.04.2.
  After performing all the steps, the 'ceph health' output lists the cluster
  in the HEALTH_WARN state, with the following details:
  HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded; 64 pgs stuck unclean;
  64 pgs stuck undersized; 64 pgs undersized; too few pgs per osd (10 < min 20)
  The output of 'ceph -s':
      cluster b483bc59-c95e-44b1-8f8d-86d3feffcfab
       health HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded; 64 pgs
              stuck unclean; 64 pgs stuck undersized; 64 pgs undersized;
              too few pgs per osd (10 < min 20)
       monmap e1: 3 mons at {osd-003=192.168.122.23:6789/0,osd-002=192.168.122.22:6789/0,osd-001=192.168.122.21:6789/0},
              election epoch 6, quorum 0,1,2 osd-001,osd-002,osd-003
       osdmap e20: 6 osds: 6 up, 6 in
        pgmap v36: 64 pgs, 1 pools, 0 bytes data, 0 objects
              199 MB used, 18166 MB / 18365 MB avail
                    64 active+undersized+degraded

  I have tried to increase the pg_num and pgp_num to 512, as advised here,
  but Ceph refused to do that, with the following error:
  Error E2BIG: specified pg_num 512 is too large (creating 384 new PGs on ~6
  OSDs exceeds per-OSD max of 32)

  After changing the pg*_num to 256, as advised here, the warning was changed to:
  health HEALTH_WARN 256 pgs degraded; 256 pgs stuck unclean; 256 pgs undersized

  What is the issue behind these warnings, and what do I need to do to fix it?

 It's basically telling you that your currently available OSDs don't meet
 the requirements for the number of replicas you have requested.

 What replica size have you configured for that pool?

 
  I'm a newcomer in the Ceph world, so please don't shoot me if this
 issue has
  been answered / discussed countless times before :) I have searched the
  web and the mailing list for the answers, but I couldn't find a valid
 solution.
  Any help is highly appreciated. Thank you!
  Regards,
  Bogdan






 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] PGs issue

2015-03-20 Thread Bogdan SOLGA
Hello, Sahana!

The output of the requested commands is listed below:

admin@cp-admin:~/safedrive$ ceph osd dump
epoch 26
fsid 7db3cf23-ddcb-40d9-874b-d7434bd8463d
created 2015-03-20 07:53:37.948969
modified 2015-03-20 08:11:18.813790
flags
pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 256 pgp_num 256 last_change 26 flags hashpspool
stripe_width 0
max_osd 6
osd.0 up   in  weight 1 up_from 4 up_thru 24 down_at 0 last_clean_interval
[0,0) 192.168.122.21:6800/10437 192.168.122.21:6801/10437
192.168.122.21:6802/10437 192.168.122.21:6803/10437 exists,up
c6f241e1-2e98-4fb5-b376-27bade093428
osd.1 up   in  weight 1 up_from 8 up_thru 0 down_at 0 last_clean_interval
[0,0) 192.168.122.21:6805/11079 192.168.122.21:6806/11079
192.168.122.21:6807/11079 192.168.122.21:6808/11079 exists,up
a4f2aeea-4e45-4d5f-ab9e-dff8295fb5ea
osd.2 up   in  weight 1 up_from 11 up_thru 0 down_at 0 last_clean_interval
[0,0) 192.168.122.22:6800/9375 192.168.122.22:6801/9375
192.168.122.22:6802/9375 192.168.122.22:6803/9375 exists,up
f879ef15-7c9a-41a8-88a6-cde013dc2d07
osd.3 up   in  weight 1 up_from 14 up_thru 0 down_at 0 last_clean_interval
[0,0) 192.168.122.22:6805/10008 192.168.122.22:6806/10008
192.168.122.22:6807/10008 192.168.122.22:6808/10008 exists,up
99b3ff05-78b9-4f9f-a8f1-dbead9baddc6
osd.4 up   in  weight 1 up_from 17 up_thru 0 down_at 0 last_clean_interval
[0,0) 192.168.122.23:6800/9158 192.168.122.23:6801/9158
192.168.122.23:6802/9158 192.168.122.23:6803/9158 exists,up
9217fcdd-201b-47c1-badf-b352a639d122
osd.5 up   in  weight 1 up_from 20 up_thru 0 down_at 0 last_clean_interval
[0,0) 192.168.122.23:6805/9835 192.168.122.23:6806/9835
192.168.122.23:6807/9835 192.168.122.23:6808/9835 exists,up
ec2c4764-5e30-431b-bc3e-755a7614b90d

admin@cp-admin:~/safedrive$ ceph osd tree
# id    weight  type name       up/down reweight
-1      0       root default
-2      0               host osd-001
0       0                       osd.0   up      1
1       0                       osd.1   up      1
-3      0               host osd-002
2       0                       osd.2   up      1
3       0                       osd.3   up      1
-4      0               host osd-003
4       0                       osd.4   up      1
5       0                       osd.5   up      1

Please let me know if there's anything else I can / should do.

Thank you very much!

Regards,
Bogdan


On Fri, Mar 20, 2015 at 9:17 AM, Sahana shna...@gmail.com wrote:

 HI Bogdan,


 Please paste the output of `ceph osd dump` and ceph osd tree`

 Thanks
 Sahana

 On Fri, Mar 20, 2015 at 11:47 AM, Bogdan SOLGA bogdan.so...@gmail.com
 wrote:

 Hello, Nick!

 Thank you for your reply! I have tested with the number of replicas set to
 both 2 and 3, by setting 'osd pool default size = (2|3)' in the .conf file.
 Either I'm doing something incorrectly, or both settings produce the same
 result.

 Can you give any troubleshooting advice? I have purged and re-created the
 cluster several times, but the result is the same.

 Thank you for your help!

 Regards,
 Bogdan


 On Thu, Mar 19, 2015 at 11:29 PM, Nick Fisk n...@fisk.me.uk wrote:





  -Original Message-
  From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
 Of
  Bogdan SOLGA
  Sent: 19 March 2015 20:51
  To: ceph-users@lists.ceph.com
  Subject: [ceph-users] PGs issue
 
  Hello, everyone!
  I have created a Ceph cluster (v0.87.1-1) using the info from the
 'Quick
  deploy' page, with the following setup:
  • 1 x admin / deploy node;
  • 3 x OSD and MON nodes;
  o each OSD node has 2 x 8 GB HDDs;

  The setup was made using Virtual Box images, on Ubuntu 14.04.2.
  After performing all the steps, the 'ceph health' output lists the cluster
  in the HEALTH_WARN state, with the following details:
  HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded; 64 pgs stuck unclean;
  64 pgs stuck undersized; 64 pgs undersized; too few pgs per osd (10 < min 20)
  The output of 'ceph -s':
      cluster b483bc59-c95e-44b1-8f8d-86d3feffcfab
       health HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded; 64 pgs
              stuck unclean; 64 pgs stuck undersized; 64 pgs undersized;
              too few pgs per osd (10 < min 20)
       monmap e1: 3 mons at {osd-003=192.168.122.23:6789/0,osd-002=192.168.122.22:6789/0,osd-001=192.168.122.21:6789/0},
              election epoch 6, quorum 0,1,2 osd-001,osd-002,osd-003
       osdmap e20: 6 osds: 6 up, 6 in
        pgmap v36: 64 pgs, 1 pools, 0 bytes data, 0 objects
              199 MB used, 18166 MB / 18365 MB avail
                    64 active+undersized+degraded

  I have tried to increase the pg_num and pgp_num to 512, as advised here,
  but Ceph refused to do that, with the following error:
  Error E2BIG: specified pg_num 512 is too large (creating 384 new PGs on ~6
  OSDs exceeds per-OSD max of 32)

  After changing the pg*_num to 256, as advised here, the warning was changed to:
  health HEALTH_WARN 256 pgs degraded; 256 pgs stuck unclean; 256 pgs undersized

  What is the issue behind these warnings, and what do I need to do to fix it?

 It's basically 

Re: [ceph-users] PGs issue

2015-03-20 Thread Sahana
Hi Bogdan,

 Here is the link for the hardware recommendations:
http://ceph.com/docs/master/start/hardware-recommendations/#hard-disk-drives.
As per this link, the minimum size recommended for OSDs is 1 TB.
 But, as Nick said, Ceph OSDs must be at least 10 GB to get a weight of 0.01.
Here is the snippet from the CRUSH map section of the Ceph docs:

Weighting Bucket Items

Ceph expresses bucket weights as doubles, which allows for fine weighting.
A weight is the relative difference between device capacities. We recommend
using 1.00 as the relative weight for a 1TB storage device. In such a
scenario, a weight of 0.5 would represent approximately 500GB, and a weight
of 3.00 would represent approximately 3TB. Higher level buckets have a
weight that is the sum total of the leaf items aggregated by the bucket.
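
As a rough illustration of that convention (the numbers below are examples
only, not taken from your cluster):

    # CRUSH weight ~ capacity in TB, to two decimal places
    ceph osd crush reweight osd.0 1.00   # ~1 TB drive
    ceph osd crush reweight osd.1 0.50   # ~500 GB drive
    # an 8 GB drive rounds to 0.00, which is why your OSDs ended up with zero weight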

Thanks

Sahana

On Fri, Mar 20, 2015 at 2:08 PM, Bogdan SOLGA bogdan.so...@gmail.com
wrote:

 Thank you for your suggestion, Nick! I have re-weighted the OSDs and the
 status has changed to '256 active+clean'.

 Is this information clearly stated in the documentation and I have just
 missed it? If it isn't, I think it would be worth adding, as other users
 might encounter this issue as well.

 Kind regards,
 Bogdan


 On Fri, Mar 20, 2015 at 10:33 AM, Nick Fisk n...@fisk.me.uk wrote:

 I see the problem: as your OSDs are only 8GB, they get a zero weight. I
 think the minimum size you can get away with in Ceph is 10GB, as the weight
 is measured in TB and only has 2 decimal places.

 For a workaround, try running:

 ceph osd crush reweight osd.X 1

 for each OSD; this will reweight the OSDs. Assuming this is a test cluster
 and you won't be adding any larger OSDs in the future, this shouldn't cause
 any problems.

 
  admin@cp-admin:~/safedrive$ ceph osd tree
  # id    weight  type name       up/down reweight
  -1      0       root default
  -2      0               host osd-001
  0       0                       osd.0   up      1
  1       0                       osd.1   up      1
  -3      0               host osd-002
  2       0                       osd.2   up      1
  3       0                       osd.3   up      1
  -4      0               host osd-003
  4       0                       osd.4   up      1
  5       0                       osd.5   up      1






 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] PGs issue

2015-03-20 Thread Bogdan SOLGA
Thank you for your suggestion, Nick! I have re-weighted the OSDs and the
status has changed to '256 active+clean'.

Is this information clearly stated in the documentation and I have just
missed it? If it isn't, I think it would be worth adding, as other users
might encounter this issue as well.

Kind regards,
Bogdan


On Fri, Mar 20, 2015 at 10:33 AM, Nick Fisk n...@fisk.me.uk wrote:

 I see the problem: as your OSDs are only 8GB, they get a zero weight. I
 think the minimum size you can get away with in Ceph is 10GB, as the weight
 is measured in TB and only has 2 decimal places.

 For a workaround, try running:

 ceph osd crush reweight osd.X 1

 for each OSD; this will reweight the OSDs. Assuming this is a test cluster
 and you won't be adding any larger OSDs in the future, this shouldn't cause
 any problems.

 
  admin@cp-admin:~/safedrive$ ceph osd tree
  # id    weight  type name       up/down reweight
  -1      0       root default
  -2      0               host osd-001
  0       0                       osd.0   up      1
  1       0                       osd.1   up      1
  -3      0               host osd-002
  2       0                       osd.2   up      1
  3       0                       osd.3   up      1
  -4      0               host osd-003
  4       0                       osd.4   up      1
  5       0                       osd.5   up      1





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] PGs issue

2015-03-20 Thread Bogdan SOLGA
Thank you for the clarifications, Sahana!

I haven't got to that part yet, so these details were still unknown to me.
Perhaps some information on the OSD weights should be provided in the 'quick
deployment' page, as other users might encounter this issue in the future
as well.

Kind regards,
Bogdan


On Fri, Mar 20, 2015 at 12:05 PM, Sahana shna...@gmail.com wrote:

 Hi Bogdan,

  Here is the link for the hardware recommendations:
 http://ceph.com/docs/master/start/hardware-recommendations/#hard-disk-drives.
 As per this link, the minimum size recommended for OSDs is 1 TB.
 But, as Nick said, Ceph OSDs must be at least 10 GB to get a weight of 0.01.
 Here is the snippet from the CRUSH map section of the Ceph docs:

 Weighting Bucket Items

 Ceph expresses bucket weights as doubles, which allows for fine
 weighting. A weight is the relative difference between device capacities.
 We recommend using 1.00 as the relative weight for a 1TB storage device.
 In such a scenario, a weight of 0.5 would represent approximately 500GB,
 and a weight of 3.00 would represent approximately 3TB. Higher level
 buckets have a weight that is the sum total of the leaf items aggregated by
 the bucket.

 Thanks

 Sahana

 On Fri, Mar 20, 2015 at 2:08 PM, Bogdan SOLGA bogdan.so...@gmail.com
 wrote:

 Thank you for your suggestion, Nick! I have re-weighted the OSDs and the
 status has changed to '256 active+clean'.

 Is this information clearly stated in the documentation and I have just
 missed it? If it isn't, I think it would be worth adding, as other users
 might encounter this issue as well.

 Kind regards,
 Bogdan


 On Fri, Mar 20, 2015 at 10:33 AM, Nick Fisk n...@fisk.me.uk wrote:

 I see the problem: as your OSDs are only 8GB, they get a zero weight. I
 think the minimum size you can get away with in Ceph is 10GB, as the weight
 is measured in TB and only has 2 decimal places.

 For a workaround, try running:

 ceph osd crush reweight osd.X 1

 for each OSD; this will reweight the OSDs. Assuming this is a test cluster
 and you won't be adding any larger OSDs in the future, this shouldn't cause
 any problems.

 
  admin@cp-admin:~/safedrive$ ceph osd tree
  # id    weight  type name       up/down reweight
  -1      0       root default
  -2      0               host osd-001
  0       0                       osd.0   up      1
  1       0                       osd.1   up      1
  -3      0               host osd-002
  2       0                       osd.2   up      1
  3       0                       osd.3   up      1
  -4      0               host osd-003
  4       0                       osd.4   up      1
  5       0                       osd.5   up      1






 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] PGs issue

2015-03-20 Thread Craig Lewis
This seems to be a fairly consistent problem for new users.

 The create-or-move is adjusting the crush weight, not the osd weight.
Perhaps the init script should set the default weight to 0.01 if it's <= 0?

It seems like there's a downside to this, but I don't see it.
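
Something along these lines in the init script is what I have in mind (just a
sketch, not the actual code; variable names and the df-based weight
calculation are illustrative):

    # weight = data-dir capacity in TiB, two decimal places
    weight=$(df -P -k "$osd_data" | awk 'END { printf "%.2f", $2 / 1073741824 }')
    # hypothetical guard: never let a tiny disk end up with weight 0
    if [ "$weight" = "0.00" ]; then
        weight="0.01"
    fi
    ceph osd crush create-or-move "osd.$id" "$weight" $osd_location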




On Fri, Mar 20, 2015 at 1:25 PM, Robert LeBlanc rob...@leblancnet.us
wrote:

 The weight can be based on anything, size, speed, capability, some random
 value, etc. The important thing is that it makes sense to you and that you
 are consistent.

 Ceph by default (ceph-disk, and I believe ceph-deploy) takes the approach of
 using size. So if you use a different weighting scheme, you should manually
 add the OSDs, or clean up after using ceph-disk/ceph-deploy. Size works well
 for most people (unless the disks are smaller than 10 GB), so most people
 don't bother messing with it.

 On Fri, Mar 20, 2015 at 12:06 PM, Bogdan SOLGA bogdan.so...@gmail.com
 wrote:

 Thank you for the clarifications, Sahana!

 I haven't got to that part yet, so these details were still unknown to me.
 Perhaps some information on the OSD weights should be provided in the
 'quick deployment' page, as other users might encounter this issue in the
 future as well.

 Kind regards,
 Bogdan


 On Fri, Mar 20, 2015 at 12:05 PM, Sahana shna...@gmail.com wrote:

 Hi Bogdan,

  Here is the link for the hardware recommendations:
 http://ceph.com/docs/master/start/hardware-recommendations/#hard-disk-drives.
 As per this link, the minimum size recommended for OSDs is 1 TB.
 But, as Nick said, Ceph OSDs must be at least 10 GB to get a weight of 0.01.
 Here is the snippet from the CRUSH map section of the Ceph docs:

 Weighting Bucket Items

 Ceph expresses bucket weights as doubles, which allows for fine
 weighting. A weight is the relative difference between device capacities.
 We recommend using 1.00 as the relative weight for a 1TB storage
 device. In such a scenario, a weight of 0.5 would represent
 approximately 500GB, and a weight of 3.00 would represent approximately
 3TB. Higher level buckets have a weight that is the sum total of the leaf
 items aggregated by the bucket.

 Thanks

 Sahana

 On Fri, Mar 20, 2015 at 2:08 PM, Bogdan SOLGA bogdan.so...@gmail.com
 wrote:

 Thank you for your suggestion, Nick! I have re-weighted the OSDs and
 the status has changed to '256 active+clean'.

 Is this information clearly stated in the documentation and I have just
 missed it? If it isn't, I think it would be worth adding, as other users
 might encounter this issue as well.

 Kind regards,
 Bogdan


 On Fri, Mar 20, 2015 at 10:33 AM, Nick Fisk n...@fisk.me.uk wrote:

 I see the problem: as your OSDs are only 8GB, they get a zero weight. I
 think the minimum size you can get away with in Ceph is 10GB, as the weight
 is measured in TB and only has 2 decimal places.

 For a workaround, try running:

 ceph osd crush reweight osd.X 1

 for each OSD; this will reweight the OSDs. Assuming this is a test cluster
 and you won't be adding any larger OSDs in the future, this shouldn't cause
 any problems.

 
  admin@cp-admin:~/safedrive$ ceph osd tree
  # id    weight  type name       up/down reweight
  -1      0       root default
  -2      0               host osd-001
  0       0                       osd.0   up      1
  1       0                       osd.1   up      1
  -3      0               host osd-002
  2       0                       osd.2   up      1
  3       0                       osd.3   up      1
  -4      0               host osd-003
  4       0                       osd.4   up      1
  5       0                       osd.5   up      1






 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] PGs issue

2015-03-20 Thread Robert LeBlanc
The weight can be based on anything, size, speed, capability, some random
value, etc. The important thing is that it makes sense to you and that you
are consistent.

Ceph by default (ceph-disk, and I believe ceph-deploy) takes the approach of
using size. So if you use a different weighting scheme, you should manually
add the OSDs, or clean up after using ceph-disk/ceph-deploy. Size works well
for most people (unless the disks are smaller than 10 GB), so most people
don't bother messing with it.
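
If you do go the manual route, the placement and weight are set explicitly
when you add the OSD to the CRUSH map, e.g. (values are illustrative only):

    # add osd.0 under host osd-001 with an explicit CRUSH weight of 1.0
    ceph osd crush add osd.0 1.0 host=osd-001
    # or adjust it later
    ceph osd crush reweight osd.0 1.0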

On Fri, Mar 20, 2015 at 12:06 PM, Bogdan SOLGA bogdan.so...@gmail.com
wrote:

 Thank you for the clarifications, Sahana!

 I haven't got to that part yet, so these details were still unknown to me.
 Perhaps some information on the OSD weights should be provided in the
 'quick deployment' page, as other users might encounter this issue in the
 future as well.

 Kind regards,
 Bogdan


 On Fri, Mar 20, 2015 at 12:05 PM, Sahana shna...@gmail.com wrote:

 Hi Bogdan,

  Here is the link for the hardware recommendations:
 http://ceph.com/docs/master/start/hardware-recommendations/#hard-disk-drives.
 As per this link, the minimum size recommended for OSDs is 1 TB.
 But, as Nick said, Ceph OSDs must be at least 10 GB to get a weight of 0.01.
 Here is the snippet from the CRUSH map section of the Ceph docs:

 Weighting Bucket Items

 Ceph expresses bucket weights as doubles, which allows for fine
 weighting. A weight is the relative difference between device capacities.
 We recommend using 1.00 as the relative weight for a 1TB storage device.
 In such a scenario, a weight of 0.5 would represent approximately 500GB,
 and a weight of 3.00 would represent approximately 3TB. Higher level
 buckets have a weight that is the sum total of the leaf items aggregated by
 the bucket.

 Thanks

 Sahana

 On Fri, Mar 20, 2015 at 2:08 PM, Bogdan SOLGA bogdan.so...@gmail.com
 wrote:

 Thank you for your suggestion, Nick! I have re-weighted the OSDs and the
 status has changed to '256 active+clean'.

 Is this information clearly stated in the documentation and I have just
 missed it? If it isn't, I think it would be worth adding, as other users
 might encounter this issue as well.

 Kind regards,
 Bogdan


 On Fri, Mar 20, 2015 at 10:33 AM, Nick Fisk n...@fisk.me.uk wrote:

 I see the problem: as your OSDs are only 8GB, they get a zero weight. I
 think the minimum size you can get away with in Ceph is 10GB, as the weight
 is measured in TB and only has 2 decimal places.

 For a workaround, try running:

 ceph osd crush reweight osd.X 1

 for each OSD; this will reweight the OSDs. Assuming this is a test cluster
 and you won't be adding any larger OSDs in the future, this shouldn't cause
 any problems.

 
  admin@cp-admin:~/safedrive$ ceph osd tree
  # id    weight  type name       up/down reweight
  -1      0       root default
  -2      0               host osd-001
  0       0                       osd.0   up      1
  1       0                       osd.1   up      1
  -3      0               host osd-002
  2       0                       osd.2   up      1
  3       0                       osd.3   up      1
  -4      0               host osd-003
  4       0                       osd.4   up      1
  5       0                       osd.5   up      1






 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] PGs issue

2015-03-20 Thread Robert LeBlanc
I like this idea. I was under the impression that udev did not call the
init script, but ceph-disk directly. I don't see ceph-disk calling
create-or-move, but I know it does, because I see it in the 'ceph -w' output
when I boot up OSDs.

/lib/udev/rules.d/95-ceph-osd.rules
# activate ceph-tagged partitions
ACTION=="add", SUBSYSTEM=="block", \
  ENV{DEVTYPE}=="partition", \
  ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
  RUN+="/usr/sbin/ceph-disk-activate /dev/$name"
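
For context, the activation path ends up issuing something along these lines
(illustrative only; the real weight is derived from the size of the OSD data
directory, and the location args come from the crush location config):

    ceph osd crush create-or-move osd.0 0.01 host=osd-001 root=default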


On Fri, Mar 20, 2015 at 2:36 PM, Craig Lewis cle...@centraldesktop.com
wrote:

 This seems to be a fairly consistent problem for new users.

   The create-or-move is adjusting the crush weight, not the osd weight.
  Perhaps the init script should set the default weight to 0.01 if it's <= 0?

 It seems like there's a downside to this, but I don't see it.




 On Fri, Mar 20, 2015 at 1:25 PM, Robert LeBlanc rob...@leblancnet.us
 wrote:

 The weight can be based on anything, size, speed, capability, some random
 value, etc. The important thing is that it makes sense to you and that you
 are consistent.

 Ceph by default (ceph-disk, and I believe ceph-deploy) takes the approach
 of using size. So if you use a different weighting scheme, you should
 manually add the OSDs, or clean up after using ceph-disk/ceph-deploy.
 Size works well for most people (unless the disks are smaller than 10 GB),
 so most people don't bother messing with it.

 On Fri, Mar 20, 2015 at 12:06 PM, Bogdan SOLGA bogdan.so...@gmail.com
 wrote:

 Thank you for the clarifications, Sahana!

 I haven't got to that part yet, so these details were still unknown to me.
 Perhaps some information on the OSD weights should be provided in the
 'quick deployment' page, as other users might encounter this issue in the
 future as well.

 Kind regards,
 Bogdan


 On Fri, Mar 20, 2015 at 12:05 PM, Sahana shna...@gmail.com wrote:

 Hi Bogdan,

  Here is the link for the hardware recommendations:
 http://ceph.com/docs/master/start/hardware-recommendations/#hard-disk-drives.
 As per this link, the minimum size recommended for OSDs is 1 TB.
 But, as Nick said, Ceph OSDs must be at least 10 GB to get a weight of 0.01.
 Here is the snippet from the CRUSH map section of the Ceph docs:

 Weighting Bucket Items

 Ceph expresses bucket weights as doubles, which allows for fine
 weighting. A weight is the relative difference between device capacities.
 We recommend using 1.00 as the relative weight for a 1TB storage
 device. In such a scenario, a weight of 0.5 would represent
 approximately 500GB, and a weight of 3.00 would represent
 approximately 3TB. Higher level buckets have a weight that is the sum total
 of the leaf items aggregated by the bucket.

 Thanks

 Sahana

 On Fri, Mar 20, 2015 at 2:08 PM, Bogdan SOLGA bogdan.so...@gmail.com
 wrote:

 Thank you for your suggestion, Nick! I have re-weighted the OSDs and
 the status has changed to '256 active+clean'.

 Is this information clearly stated in the documentation and I have just
 missed it? If it isn't, I think it would be worth adding, as other users
 might encounter this issue as well.

 Kind regards,
 Bogdan


 On Fri, Mar 20, 2015 at 10:33 AM, Nick Fisk n...@fisk.me.uk wrote:

 I see the problem: as your OSDs are only 8GB, they get a zero weight. I
 think the minimum size you can get away with in Ceph is 10GB, as the weight
 is measured in TB and only has 2 decimal places.

 For a workaround, try running:

 ceph osd crush reweight osd.X 1

 for each OSD; this will reweight the OSDs. Assuming this is a test cluster
 and you won't be adding any larger OSDs in the future, this shouldn't cause
 any problems.

 
  admin@cp-admin:~/safedrive$ ceph osd tree
  # id    weight  type name       up/down reweight
  -1      0       root default
  -2      0               host osd-001
  0       0                       osd.0   up      1
  1       0                       osd.1   up      1
  -3      0               host osd-002
  2       0                       osd.2   up      1
  3       0                       osd.3   up      1
  -4      0               host osd-003
  4       0                       osd.4   up      1
  5       0                       osd.5   up      1






 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] PGs issue

2015-03-20 Thread Nick Fisk
I see the problem: as your OSDs are only 8GB, they get a zero weight. I think
the minimum size you can get away with in Ceph is 10GB, as the weight is
measured in TB and only has 2 decimal places.

For a workaround, try running:

ceph osd crush reweight osd.X 1

for each OSD; this will reweight the OSDs. Assuming this is a test cluster and
you won't be adding any larger OSDs in the future, this shouldn't cause any
problems.
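
Something like this should cover all of them in one go (a sketch, assuming
your OSD IDs are 0-5 as in the tree below):

    for i in 0 1 2 3 4 5; do
        ceph osd crush reweight osd.$i 1
    done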

 
 admin@cp-admin:~/safedrive$ ceph osd tree
 # id    weight  type name       up/down reweight
 -1      0       root default
 -2      0               host osd-001
 0       0                       osd.0   up      1
 1       0                       osd.1   up      1
 -3      0               host osd-002
 2       0                       osd.2   up      1
 3       0                       osd.3   up      1
 -4      0               host osd-003
 4       0                       osd.4   up      1
 5       0                       osd.5   up      1




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] PGs issue

2015-03-19 Thread Nick Fisk




 -Original Message-
 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
 Bogdan SOLGA
 Sent: 19 March 2015 20:51
 To: ceph-users@lists.ceph.com
 Subject: [ceph-users] PGs issue
 
 Hello, everyone!
 I have created a Ceph cluster (v0.87.1-1) using the info from the 'Quick
 deploy' page, with the following setup:
 • 1 x admin / deploy node;
 • 3 x OSD and MON nodes;
 o each OSD node has 2 x 8 GB HDDs;
 The setup was made using Virtual Box images, on Ubuntu 14.04.2.
 After performing all the steps, the 'ceph health' output lists the cluster
 in the HEALTH_WARN state, with the following details:
 HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded; 64 pgs stuck unclean;
 64 pgs stuck undersized; 64 pgs undersized; too few pgs per osd (10 < min 20)
 The output of 'ceph -s':
     cluster b483bc59-c95e-44b1-8f8d-86d3feffcfab
      health HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded; 64 pgs
             stuck unclean; 64 pgs stuck undersized; 64 pgs undersized;
             too few pgs per osd (10 < min 20)
      monmap e1: 3 mons at {osd-003=192.168.122.23:6789/0,osd-002=192.168.122.22:6789/0,osd-001=192.168.122.21:6789/0},
             election epoch 6, quorum 0,1,2 osd-001,osd-002,osd-003
      osdmap e20: 6 osds: 6 up, 6 in
       pgmap v36: 64 pgs, 1 pools, 0 bytes data, 0 objects
             199 MB used, 18166 MB / 18365 MB avail
                   64 active+undersized+degraded

 I have tried to increase the pg_num and pgp_num to 512, as advised here,
 but Ceph refused to do that, with the following error:
 Error E2BIG: specified pg_num 512 is too large (creating 384 new PGs on ~6
 OSDs exceeds per-OSD max of 32)

 After changing the pg*_num to 256, as advised here, the warning was changed to:
 health HEALTH_WARN 256 pgs degraded; 256 pgs stuck unclean; 256 pgs undersized

 What is the issue behind these warnings, and what do I need to do to fix it?

It's basically telling you that your currently available OSDs don't meet the
requirements for the number of replicas you have requested.

What replica size have you configured for that pool?
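
If it helps, you can check what the pool actually has with something like the
following ('rbd' here is just the default pool name from the quick-start
deploy; substitute yours):

    ceph osd pool get rbd size
    ceph osd pool get rbd min_size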

 
 I'm a newcomer in the Ceph world, so please don't shoot me if this issue has
 been answered / discussed countless times before :) I have searched the
 web and the mailing list for the answers, but I couldn't find a valid 
 solution.
 Any help is highly appreciated. Thank you!
 Regards,
 Bogdan




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com