A while ago, I managed to get this working, but it was really tricky.
See my comment here: 
https://github.com/ceph/ceph-ansible/issues/9#issuecomment-37127128

One use case I had was a system with two SSDs for the OS and a couple of OSDs.
Both SSDs were in RAID1 and the system was already configured with LVM,
so we had to create an LV for each journal.
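For reference, the LV creation looked roughly like this. This is only a sketch: the volume group name (vg_system), journal size, and OSD count are hypothetical, so adapt them to your own layout.

```shell
#!/bin/sh
# Create one journal LV per OSD on the volume group that sits on the
# RAID1 SSD pair. vg_system and the 10G size are placeholders.
for i in 0 1 2 3; do
    lvcreate --name "journal-osd$i" --size 10G vg_system
done

# Each OSD then points at its journal device, e.g. in ceph.conf:
#   [osd.0]
#   osd journal = /dev/vg_system/journal-osd0
```

Note that lvcreate needs root and a pre-existing volume group, so run this on the node itself after the RAID1 PV/VG setup is in place.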

> On 24 Feb 2015, at 14:41, Jörg Henne <henn...@gmail.com> wrote:
> 
> 2015-02-24 14:05 GMT+01:00 John Spray <john.sp...@redhat.com>:
> 
> I imagine that without proper partition labels you'll also not get the 
> benefit of e.g. the udev magic
> that allows plugging OSDs in/out of different hosts.  More generally you'll 
> just be in a rather non standard configuration that will confuse anyone 
> working on the host.
> Ok, thanks for the heads up!
>  
> Can I ask why you want to use LVM?  It is not generally necessary or useful 
> with Ceph: Ceph expects to be fed raw drives.
> I am currently just experimenting with ceph. Although I have a reasonable 
> number of "lab" nodes, those nodes are shared with other experimentation and 
> thus it would be rather inconvenient to dedicate the raw disks exclusively to 
> ceph.
> 
> Joerg Henne
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Cheers.
–––– 
Sébastien Han 
Cloud Architect 

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien....@enovance.com 
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance 

