I believe there is a use case for LVM journals.
For instance, you can do:

2 SSDs:

* a tiny mdadm RAID 1 setup for the system; let’s say /dev/sda1 and /dev/sdb1
* then you still have:
        * /dev/sda2
        * /dev/sdb2

They can both host journals, and you usually want to manage them with LVM,
which is easier than managing partitions.
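A rough sketch of what that could look like (the VG/LV names and sizes below are
just examples, not a recommendation):

    # turn the leftover partitions into PVs and group them in one VG
    pvcreate /dev/sda2 /dev/sdb2
    vgcreate journals /dev/sda2 /dev/sdb2
    # then one small LV per OSD journal, e.g. 10G each
    lvcreate -L 10G -n journal-osd0 journals
    lvcreate -L 10G -n journal-osd1 journals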
I just opened an issue as a feature/enhancement request:

https://github.com/ceph/ceph-ansible/issues/9

This shouldn’t be that difficult to implement.
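In ceph-ansible terms it would mostly come down to letting the playbook accept a
logical volume path as an OSD’s journal device, so that it ends up running
something along these lines (device and LV paths are only an example, and this
assumes ceph-disk is happy with an LV path as the journal argument):

    # hypothetical end result: data on /dev/sdc, journal on an LV
    ceph-disk prepare /dev/sdc /dev/journals/journal-osd0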

–––– 
Sébastien Han 
Cloud Engineer 

"Always give 100%. Unless you're giving blood.” 

Phone: +33 (0)1 49 70 99 72 
Mail: [email protected] 
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance 

On 06 Mar 2014, at 14:28, David McBride <[email protected]> wrote:

> On 06/03/14 13:19, Gandalf Corvotempesta wrote:
>> 2014-03-06 13:07 GMT+01:00 David McBride <[email protected]>:
>>> This causes the IO load to be nicely balanced across the two SSDs,
>>> removing any hot spots, at the cost of enlarging the failure domain of
>>> the loss of an SSD from half a node to a full node.
>> 
>> This is not a solution for me.
>> Why not use LVM with a VG striped across both SSDs?
>> I've never used LVM without RAID; what happens in case of failure
>> of a physical disk? Is the whole VG lost?
> 
> Yes.  A stripe-set depends on all of the members of an array, whether
> managed through MD or LVM.
> 
> Thus, in a machine with two SSDs, which are striped together, the loss
> of *either* SSD will cause all of the OSDs hosted by that machine to be
> lost.
> 
> (Note: if you want to use LVM rather than GPTs on MD, you will probably
> need to remove the '|dm-*' clause from the Ceph udev rules that govern
> OSD assembly before they will work as expected.)
> 
> Kind regards,
> David
> -- 
> David McBride <[email protected]>
> Unix Specialist, University Computing Service

