What I am getting at is that instead of sinking a bunch of time into this
band-aid, why not sink that time into a hypervisor migration? The timing
seems right if you ask me.

There are even tools, like virt-v2v, to make that migration easier:

http://libguestfs.org/virt-v2v.1.html
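
A minimal sketch of what that can look like (the vCenter host, datacenter
path, and guest name below are placeholders, and exact options depend on
your virt-v2v version, so check the man page above before running it):

    # Pull a guest out of vCenter, convert it, and register it on the local
    # libvirt/KVM host using the "default" storage pool
    virt-v2v \
        -ic 'vpx://administrator@vcenter.example.com/Datacenter/esxi1?no_verify=1' \
        my-vmware-guest \
        -o libvirt -os default -on my-vmware-guest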

You should ultimately move your hypervisor instead of building a one-off
case for Ceph. Ceph works really well if you stay inside the box, and so does
KVM. Together they work like gangbusters.
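
Once a guest is on KVM, its disks can live in RBD natively, with no iSCSI
gateway or LVM cache layer in the data path. A rough sketch, assuming qemu is
built with rbd support and ceph.conf plus a keyring are already in place on
the hypervisor (the pool, image, and vmdk path are made-up names):

    # Convert a VMware disk straight into an RBD image...
    qemu-img convert -p -f vmdk -O raw /var/tmp/guest-flat.vmdk rbd:rbd/guest-disk
    # ...and sanity-check the result before wiring it into the KVM guest
    qemu-img info rbd:rbd/guest-disk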

I know that doesn't really answer your OP, but this is what I would do.

~D

On Sat, Dec 9, 2017 at 7:56 PM Brady Deetz <[email protected]> wrote:

> We have over 150 VMs running in VMware. We also have 2PB of Ceph for our
> filesystem. With our VMware storage aging and not providing the IOPS we
> need, we are hoping to use Ceph there as well. Ultimately, yes, we will
> move to KVM, but in the short term we probably need to stay on VMware.
> On Dec 9, 2017 6:26 PM, "Donny Davis" <[email protected]> wrote:
>
>> Just curious, but why not use a hypervisor with RBD support? Are there
>> VMware-specific features you are reliant on?
>>
>> On Fri, Dec 8, 2017 at 4:08 PM Brady Deetz <[email protected]> wrote:
>>
>>> I'm testing RBD as VMware datastores, currently with krbd+LVM exported
>>> through a tgt iSCSI target hosted on a hypervisor.
>>>
>>> My Ceph cluster is HDD backed.
>>>
>>> To help with write latency, I added an SSD to my hypervisor and made it
>>> an LVM writeback cache in front of the RBD. So far I've managed to smooth
>>> out my 4K write latency and have some pleasing results.
>>>
>>> Architecturally, my current plan is to deploy an iSCSI gateway on each
>>> hypervisor, serving that hypervisor's own datastore.
>>>
>>> Does anybody have any experience with this kind of configuration,
>>> especially with regard to LVM writeback caching combined with RBD?
>>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
