Here is another full ratio scenario:

Let's say that the cluster map is configured as follows:

           Row
            |
     ---------------
     |             |
   Rack1         Rack2
     |             |
   Host1         Host4
   Host2         Host5
   Host3         Host6

...with a ruleset that distributes replicas across the two rack buckets (i.e.,
using Rack1 and Rack2 as failure domains).
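
For reference, here is a rough sketch of what such a rule could look like in a
decompiled CRUSH map (the rule name, ruleset number, and root bucket name are
placeholders, not what I actually have):

    rule rack_replicated {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take default                   # root bucket holding Rack1 and Rack2
            step chooseleaf firstn 0 type rack  # place each replica in a different rack
            step emit
    }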

If the used capacity is more than 2/3 full but less than 5/6 full and one of
the hosts fails, will Ceph rebalance using all of the capacity available in
both rack buckets and keep running, or will it rebalance within only one rack
bucket, exceeding the full ratio and locking up?
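
(The arithmetic behind those thresholds, as I understand it: each rack holds a
full copy of the data, so each of its three hosts carries roughly a third of
that copy. If a failed host's data can only be re-replicated within its own
rack, the two surviving hosts there each grow by half, which overflows once the
cluster is more than 2/3 full. If the data could instead spread across all five
surviving hosts, each would grow by only a fifth, which still fits until the
cluster is more than 5/6 full.)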

Thanks,

-Tom

-----Original Message-----
From: Gregory Farnum [mailto:[email protected]] 
Sent: Tuesday, March 04, 2014 10:10 AM
To: Barnes, Thomas J
Cc: [email protected]
Subject: Re: [ceph-users] "full ratio" - how does this work with multiple pools 
on separate OSDs?

The setting is calculated per-OSD, and if any OSD hits the hard limit the whole 
cluster transitions to the full state and stops accepting writes until the 
situation is resolved.
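
For reference, the thresholds involved are the "mon osd full ratio" and
"mon osd nearfull ratio" options, set cluster-wide (e.g. in ceph.conf); the
values below are the stock defaults, shown only for illustration:

    [global]
            mon osd full ratio = .95       # any OSD at/above this marks the cluster full (writes blocked)
            mon osd nearfull ratio = .85   # any OSD at/above this just raises a health warning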
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Tue, Mar 4, 2014 at 9:58 AM, Barnes, Thomas J <[email protected]> 
wrote:
> I have a question about how "full ratio" works.
>
>
>
> How does a single "full ratio" setting work when the cluster has pools 
> associated with different drives?
>
>
>
> For example, let's say I have a cluster comprised of fifty 10K RPM 
> drives and fifty 7200 RPM drives.  I segregate the 10K RPM drives and 
> 7200 RPM drives under separate buckets, create separate rulesets for 
> each bucket, and create separate pools for each bucket (using each 
> bucket's respective ruleset).
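>
> (Roughly what I have in mind, with made-up pool names, placeholder PG counts, 
> and placeholder ruleset numbers, once the two rulesets exist in the CRUSH map:)
>
>     ceph osd pool create fast 1024 1024      # pool for the 10K RPM bucket
>     ceph osd pool create slow 1024 1024      # pool for the 7200 RPM bucket
>     ceph osd pool set fast crush_ruleset 1   # ruleset rooted at the 10K bucket
>     ceph osd pool set slow crush_ruleset 2   # ruleset rooted at the 7200 bucket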
>
>
>
> What happens if one of the pools fills to capacity while the other 
> pool remains empty?
>
> How does the cluster respond when the OSDs in one pool become full 
> while the OSDs in other pools do not?
>
> Is full ratio calculated over the entire cluster or "by pool"?
>
>
>
> Thanks,
>
>
>
> -Tom
>
>
>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
