I am curious: what do the kernel parts of Ceph do, and what do the user 
parts do? Do we have a web page describing this in detail?

From what you described, in the librbd case the user parts do not need the 
kernel parts at all, right? This sounds very good to me.
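
If I understand correctly, that means an application can talk to RBD 
entirely from userspace, with no rbd kernel module loaded. A minimal 
sketch of what I mean, using the python-rbd bindings (the pool and image 
names here are just examples I made up):

    # Userspace-only RBD access: everything below goes through
    # librados/librbd; no kernel module or /dev/rbd* device is involved.
    import rados  # librados Python bindings
    import rbd    # librbd Python bindings

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')  # pool name is an example
        try:
            rbd.RBD().create(ioctx, 'testimage', 1024 ** 3)  # 1 GiB image
            image = rbd.Image(ioctx, 'testimage')
            try:
                image.write(b'hello from userspace', 0)
                print(image.read(0, 20))
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

Is that right?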

Sent from my iOS device.

On Jan 20, 2013, at 2:16 AM, Sage Weil <[email protected]> wrote:

> On Sat, 19 Jan 2013, Jeff Mitchell wrote:
>> Sage Weil wrote:
>>> On Sun, 20 Jan 2013, Peter Smith wrote:
>>>> Thanks for the reply, Sage and everyone.
>>>> 
>>>> Sage, so I can expect Ceph-rbd to work well on CentOS 6.3 if I only use
>>>> it as the Cinder volume backend, because librbd in QEMU doesn't make
>>>> use of the kernel client, right?
>>> 
>>> Then the dependency is on the qemu version.  I don't remember that off the
>>> top of my head, or know what version rhel6 ships.  Most people deploying
>>> openstack and rbd are using a more modern distro (like ubuntu 12.04).
>> 
>> This discussion has made me curious: I'm using Ganeti to manage VMs; it
>> manages the storage using the kernel client and passes the resulting dev
>> device in to qemu.
>> 
>> Can you comment on any known performance differences between the two
>> methods -- native qemu+librbd creating a block device vs. the kernel
>> client creating a block device?
> 
> librbd development moves at a faster pace and it has more features, 
> including client-side caching (analogous to the cache in a hard drive), 
> discard, and support for image cloning.  It also tends to perform better.
> 
> The kernel client can be combined with FlashCache or something similar, 
> although that isn't something we've tested.
> 
> We generally recommend the KVM+librbd route, as it is easier to manage the 
> dependencies, and is well integrated with libvirt.  FWIW this is what 
> OpenStack and CloudStack normally use.
> 
> sage
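
If I understand the above correctly, caching and cloning live in librbd 
itself, which is why the kernel client path doesn't get them. A rough 
sketch of what those two features look like through the python-rbd 
bindings (the pool, image, and snapshot names are made up, and I'm 
assuming rbd_cache is the right option for the client-side cache):

    import rados
    import rbd

    # rbd_cache enables librbd's client-side cache; it is a userspace
    # (librbd) feature, not something the kernel client provides.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          conf={'rbd_cache': 'true'})
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')  # pool name is an example

    # Cloning: snapshot the parent image, protect the snapshot, then
    # clone it into a new child image.
    parent = rbd.Image(ioctx, 'golden-image')
    parent.create_snap('base')
    parent.protect_snap('base')  # snapshots must be protected before cloning
    parent.close()

    rbd.RBD().clone(ioctx, 'golden-image', 'base',  # parent pool/image/snap
                    ioctx, 'vm-disk-1',             # child pool/image
                    features=rbd.RBD_FEATURE_LAYERING)

    ioctx.close()
    cluster.shutdown()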