On 18/04/13 20:48, Alex Elsayed wrote:
> Hugo Mills wrote:
> 
>> On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
>>> Dear Devs,
> <snip>
>>> Note that esata shows just the disks as individual physical disks, 4 per
>>> disk pack. Can physical disks be grouped together to force the RAID data
>>> to be mirrored across all the nominated groups?
>>
>>    Interesting you should ask this: I realised quite recently that
>> this could probably be done fairly easily with a modification to the
>> chunk allocator.
> <snip>
> 
> One thing that might be an interesting approach:
> 
> Ceph is already in mainline, and uses CRUSH in a similar way to what's 
> described (topology-aware placement+replication). Ceph does it by OSD nodes 
> rather than disk, and the units are objects rather than chunks, but it could 
> potentially be a rather good fit.
> 
> CRUSH does it by describing a topology hierarchy, and allocating the OSD ids 
> to that hierarchy. It then uses that to map from a key to one-or-more 
> locations. If we use chunk ID as the key, and use UUID_SUB in place of the 
> OSD id, it could do the job.

OK... That was a bit of a crash course (ok, sorry for the pun on crush :-) )

http://www.anchor.com.au/blog/2012/09/a-crash-course-in-ceph/


Interesting that the "CRUSH map is written by hand, then compiled and
passed to the cluster".
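
For anyone who hasn't met one, a hand-written (or rather, decompiled)
CRUSH map is just plain text along these lines -- abbreviated and from
memory, so treat it as a rough sketch rather than something to feed
straight to crushtool:

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3

# types
type 0 osd
type 1 host
type 2 root

# buckets
host pack-a {
	id -2
	alg straw
	hash 0	# rjenkins1
	item osd.0 weight 1.000
	item osd.1 weight 1.000
}
host pack-b {
	id -3
	alg straw
	hash 0	# rjenkins1
	item osd.2 weight 1.000
	item osd.3 weight 1.000
}
root default {
	id -1
	alg straw
	hash 0	# rjenkins1
	item pack-a weight 2.000
	item pack-b weight 2.000
}

# rules
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type host
	step emit
}

The "step chooseleaf firstn 0 type host" line is the part that forces
replicas onto different "host" buckets -- i.e. different failure
domains -- which is the per-disk-pack behaviour we'd want.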

Hence, it looks like we could simply have the sysadmin specify which
disks belong to which group. (I certainly know which disk is where and
where I want the data mirrored!)
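
To make the idea concrete, here's a minimal, purely hypothetical sketch
in plain C -- none of this is actual btrfs or Ceph code, and every name
in it (example_device, example_group, pick_devices, mix64) is made up
for illustration. It just shows "one copy per admin-defined group,
chosen within the group by hashing the chunk ID", which is the
CRUSH-ish bit Alex described:

/* Hypothetical sketch only -- not btrfs or Ceph code.  Assumes the
 * sysadmin has assigned each device to a named "group" (failure
 * domain), e.g. one group per disk pack.  For a chunk that needs N
 * copies, pick one device from each of N distinct groups, choosing
 * within a group by hashing the chunk ID, so no two copies of a chunk
 * can end up in the same disk pack.
 */

#include <stdint.h>
#include <stddef.h>

struct example_device {
	uint8_t  uuid_sub[16];  /* per-device UUID, like btrfs UUID_SUB */
	uint64_t free_bytes;    /* a real allocator would weigh this too */
};

struct example_group {
	const char            *name;  /* e.g. "pack-A", "pack-B" */
	struct example_device *devs;
	size_t                 ndevs;
};

/* Cheap stand-in for a real hash (CRUSH uses rjenkins); enough to
 * illustrate a deterministic chunk-id -> device choice. */
static uint64_t mix64(uint64_t x)
{
	x ^= x >> 33;
	x *= 0xff51afd7ed558ccdULL;
	x ^= x >> 33;
	return x;
}

/* Pick one device from each of the first ncopies non-empty groups for
 * the given chunk.  Returns the number of devices chosen. */
static size_t pick_devices(uint64_t chunk_id,
			   struct example_group *groups, size_t ngroups,
			   size_t ncopies,
			   struct example_device **out)
{
	size_t g, chosen = 0;

	for (g = 0; g < ngroups && chosen < ncopies; g++) {
		struct example_group *grp = &groups[g];
		size_t idx;

		if (grp->ndevs == 0)
			continue;

		/* Deterministic choice within the group, keyed on
		 * (chunk id, group index) -- the CRUSH-like part. */
		idx = mix64(chunk_id ^ (g * 0x9e3779b97f4a7c15ULL)) % grp->ndevs;
		out[chosen++] = &grp->devs[idx];
	}
	return chosen;  /* < ncopies: not enough groups, fail or degrade */
}

With each four-disk pack declared as a group, the two copies of a RAID1
chunk could then never land in the same pack, which is exactly the
"mirror across nominated groups" behaviour I'm after. A real
implementation would of course also have to weigh free space, handle
missing devices, rebalance, and so on.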


For my example, the disk packs are plugged into two servers (up to four
at a time at present) so that we have some fail-over if one server dies.
Ceph looks to be a little overkill for just two big storage users.

Or perhaps pull the same CRUSH code routines into btrfs?... (The CRUSH
mapper already lives in the kernel tree under net/ceph/crush/, so the
code is at least already there to share.)


Regards,
Martin
