On 24 Nov 2012, at 16:42, Gregory Farnum wrote:

> On Thursday, November 22, 2012 at 4:33 AM, Jimmy Tang wrote:
>> Hi All,
>> 
>> Is it possible at this point in time to setup some form of tiering of 
>> storage pools in Ceph by modifying the CRUSH map? For example, I want to have 
>> my most recently used data on a small set of nodes that have SSDs and over 
>> time migrate data from the SSDs to some bulk spinning disk using an LRU 
>> policy.
> There's no way to have Ceph do this automatically at this time. Tiering in 
> this fashion traditionally requires the sort of centralized metadata that 
> Ceph and RADOS are designed to avoid, and while interest in it is heating up 
> we haven't yet come up with a new solution. ;)
> 

That makes sense; automatic tiering in this fashion would be rather un-Ceph-like.

> If your system allows you to do this manually, though — yes. You can create 
> multiple (non-overlapping, presumably) trees within your CRUSH map, one of 
> which would be an "SSD" storage group and one of which would be a "normal" 
> storage group. Then create a CRUSH rule which draws from the SSD group and a 
> rule which draws from the normal group, create a pool using each of those, 
> and write to whichever one at the appropriate time.
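
Something like the following, I assume? (Just a sketch of the two-tree idea;
the host names, bucket IDs, weights and pg counts here are made up, and the
ssd-host-1 / sata-host-1 buckets would have to be defined with their OSDs
higher up in the map as usual.)

  # Two separate roots in the decompiled CRUSH map, one per drive type
  root ssd {
          id -20
          alg straw
          hash 0        # rjenkins1
          item ssd-host-1 weight 1.000
  }
  root sata {
          id -21
          alg straw
          hash 0        # rjenkins1
          item sata-host-1 weight 1.000
  }

  # One rule per root, so a pool can be pinned to either tier
  rule ssd {
          ruleset 3
          type replicated
          min_size 1
          max_size 10
          step take ssd
          step chooseleaf firstn 0 type host
          step emit
  }
  rule sata {
          ruleset 4
          type replicated
          min_size 1
          max_size 10
          step take sata
          step chooseleaf firstn 0 type host
          step emit
  }

and then, after recompiling with crushtool and injecting the map with
"ceph osd setcrushmap -i", one pool per rule:

  ceph osd pool create fast 128 128
  ceph osd pool set fast crush_ruleset 3
  ceph osd pool create bulk 128 128
  ceph osd pool set bulk crush_ruleset 4
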
> Alternatively, you could also place all the primaries on SSD storage but the 
> replicas on regular drives — this won't speed up your writes much but will 
> mean SSD-speed reads. :)
> -Greg

OK, so it's possible to designate a pool of disks/nodes/racks to hold the 
primary copies of the data, which clients then read from?
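
Something like this in the CRUSH map, presumably? (Again only a sketch, reusing
the hypothetical "ssd" and "sata" roots from above.)

  # First replica from the ssd tree, the remaining ones from the sata tree
  rule ssd-primary {
          ruleset 5
          type replicated
          min_size 1
          max_size 10
          step take ssd
          step chooseleaf firstn 1 type host
          step emit
          step take sata
          step chooseleaf firstn -1 type host
          step emit
  }

A pool using that ruleset would then keep its primary (the copy clients read
from) on the SSDs and the extra replicas on the spinning disks, if I've
understood correctly.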

Jimmy