Hi Brian,

On Fri, 24 Oct 2008, Brian Koebbe wrote:
> I have a small test cluster (using git "unstable" and created using the
> WIKI as a guide) currently using 5 OSDs.  They seem to be working just
> fine.  I'd like to grow that number of OSDs to 10, then 20, then maybe
> 50.  
> 
> Currently, using something like "cmonctl -m ip:port osd setmaxosd 10"
> seems to crash, and permanently break, cmon.  
> 
> Right now, is there a good way to grow the number of OSDs? Should I
> expect the above cmonctl command to work at the moment in "unstable"?
> Should I be using 0.4 instead?

It's supposed to work... I just fixed a small bug and pushed the fix out 
to unstable, and that part works now.

The next step in growing the cluster is to expand the CRUSH map to 
actually allocate data to those OSDs.  That part is a bit more involved, 
as it takes some planning around whether and how you'll break the OSD 
cluster into failure domains, and what kind of future growth you 
expect.  

1 - Grab and decompile the current CRUSH map with

 ./cmonctl osd getcrushmap -o cm
 ./crushtool -d cm > cm.txt

2 - Edit cm.txt to your liking

3 - Profit!
 ./crushtool -c cm.txt -o cm
 ./cmonctl osd setcrushmap -i cm

As always, step 2 is the tricky part.  The easiest thing is to make the 
single bucket use 'alg straw' and stick all of the devices in there, as 
straw buckets give you the least data movement when adding/removing 
devices.  The downside is that it's a slower mapping calculation, but for 
smallish clusters (<= 50 osds) that shouldn't be a problem.
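
For step 2, a decompiled map with everything in a single straw bucket 
might look roughly like the sketch below.  This is illustrative only -- 
the device names, weights, and rule are made up, and the exact syntax 
may vary between versions, so use your own decompiled cm.txt as the 
starting point:

 # devices
 device 0 device0
 device 1 device1
 device 2 device2

 # types
 type 0 device
 type 1 root

 # buckets
 root root {
         id -1
         alg straw      # straw: least data movement when adding/removing
         hash 0
         item device0 weight 1.000
         item device1 weight 1.000
         item device2 weight 1.000
 }

 # rules
 rule data {
         ruleset 0
         type replicated
         min_size 1
         max_size 10
         step take root
         step choose firstn 0 type device
         step emit
 }

To grow the cluster, add a device line and a matching item line (with a 
weight) for each new OSD, then recompile and inject the map as in step 3.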

sage

_______________________________________________
Ceph-devel mailing list
Ceph-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ceph-devel