Re: [ceph-users] RBD+LVM -> iSCSI -> VMWare

2017-12-10 Thread Heðin Ejdesgaard Møller

Another option is to utilize the iSCSI gateway provided in 12.2 (Luminous):
http://docs.ceph.com/docs/master/rbd/iscsi-overview/
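
For reference, the gateway side is driven through gwcli once the ceph-iscsi
packages are installed. A rough outline of the flow; the IQNs, gateway
names/addresses, pool/image names and CHAP credentials below are placeholders,
and exact node paths and option syntax vary between ceph-iscsi releases, so
follow the linked docs for your version:

  # inside the gwcli shell on one of the gateway nodes
  cd /iscsi-targets
  create iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
  cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/gateways
  create ceph-gw-1 192.168.1.21
  create ceph-gw-2 192.168.1.22
  cd /disks
  create pool=rbd image=vmware_ds1 size=2T
  cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/hosts
  create iqn.1998-01.com.vmware:esxi-host1
  auth chap=<chap_user>/<chap_password>
  disk add rbd/vmware_ds1      # older releases spell this rbd.<image>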

Benefits:
- You can EOL your old SAN without having to simultaneously migrate to another
  hypervisor.
- Any infrastructure that ties in to vSphere is unaffected. (Ceph is just
  another set of datastores.)
- If you have the appropriate VMware licenses (Storage vMotion etc.), the move
  to Ceph can be done without any downtime (see the ESXi-side sketch below).
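
On the ESXi side it is the standard software-iSCSI workflow, roughly as
follows; the adapter name and gateway addresses are placeholders:

  esxcli iscsi software set --enabled=true
  esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.1.21
  esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.1.22
  esxcli storage core adapter rescan --adapter=vmhba64

After that you format the new LUN as a VMFS datastore and Storage vMotion the
VMs onto it.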

The drawback, from my tests using ceph-12.2-latest and ESXi 6.5, is that you
take roughly a 30% performance penalty and see higher latency compared to a
direct rbd mount.
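
If you want to reproduce that comparison, running the same 4k random-write job
against the raw image (fio's rbd engine) and against a device inside a VM on
the iSCSI datastore is one way to get comparable numbers. The pool/image names
and in-guest device below are placeholders, not my actual test setup:

  # direct librbd path, run from any client with ceph access
  fio --name=rbd-direct --ioengine=rbd --clientname=admin --pool=rbd \
      --rbdname=testimg --rw=randwrite --bs=4k --iodepth=32 \
      --runtime=60 --time_based
  # same image exposed through the iSCSI gateway, run inside a test VM
  fio --name=via-iscsi --ioengine=libaio --direct=1 --filename=/dev/sdb \
      --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based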


On Sat, 2017-12-09 at 19:17 -0600, Brady Deetz wrote:
> That's not a bad position. I have concerns with what I'm proposing, so a 
> hypervisor migration may actually bring less
> risk than a storage abomination. 
> 
> On Dec 9, 2017 7:09 PM, "Donny Davis"  wrote:
> > What I am getting at is that instead of sinking a bunch of time into this
> > band-aid, why not sink that time into a hypervisor migration? It seems
> > well timed, if you ask me.
> > 
> > There are even tools to make that migration easier
> > 
> > http://libguestfs.org/virt-v2v.1.html
> > 
> > You should ultimately move your hypervisor instead of building a one-off
> > case for Ceph. Ceph works really well if you stay inside the box. So does
> > KVM. Together they work like gangbusters.
> > 
> > I know that doesn't really answer your OP, but this is what I would do.
> > 
> > ~D
> > 
> > On Sat, Dec 9, 2017 at 7:56 PM Brady Deetz  wrote:
> > > We have over 150 VMs running in VMware. We also have 2PB of Ceph for our
> > > filesystem. With our VMware storage aging and not providing the IOPS we
> > > need, we are considering and hoping to use Ceph. Ultimately, yes, we will
> > > move to KVM, but in the short term we probably need to stay on VMware.
> > > On Dec 9, 2017 6:26 PM, "Donny Davis"  wrote:
> > > > Just curious, but why not just use a hypervisor with rbd support? Are
> > > > there VMware-specific features you are reliant on?
> > > > 
> > > > On Fri, Dec 8, 2017 at 4:08 PM Brady Deetz  wrote:
> > > > > I'm testing using RBD as VMware datastores. I'm currently testing
> > > > > with krbd+LVM on a tgt target hosted on a hypervisor.
> > > > > 
> > > > > My Ceph cluster is HDD backed.
> > > > > 
> > > > > In order to help with write latency, I added an SSD drive to my 
> > > > > hypervisor and made it a writeback cache for
> > > > > the rbd via LVM. So far I've managed to smooth out my 4k write 
> > > > > latency and have some pleasing results.
> > > > > 
> > > > > Architecturally, my current plan is to deploy an iSCSI gateway on 
> > > > > each hypervisor hosting that hypervisor's
> > > > > own datastore.
> > > > > 
> > > > > Does anybody have any experience with this kind of configuration, 
> > > > > especially with regard to LVM writeback
> > > > > caching combined with RBD?



Re: [ceph-users] RBD+LVM -> iSCSI -> VMWare

2017-12-09 Thread Brady Deetz
That's not a bad position. I have concerns with what I'm proposing, so a
hypervisor migration may actually bring less risk than a storage
abomination.

On Dec 9, 2017 7:09 PM, "Donny Davis"  wrote:

> What I am getting at is that instead of sinking a bunch of time into this
> band-aid, why not sink that time into a hypervisor migration? It seems well
> timed, if you ask me.
>
> There are even tools to make that migration easier
>
> http://libguestfs.org/virt-v2v.1.html
>
> You should ultimately move your hypervisor instead of building a one-off
> case for Ceph. Ceph works really well if you stay inside the box. So does
> KVM. Together they work like gangbusters.
>
> I know that doesn't really answer your OP, but this is what I would do.
>
> ~D
>
> On Sat, Dec 9, 2017 at 7:56 PM Brady Deetz  wrote:
>
>> We have over 150 VMs running in VMware. We also have 2PB of Ceph for our
>> filesystem. With our VMware storage aging and not providing the IOPS we
>> need, we are considering and hoping to use Ceph. Ultimately, yes, we will
>> move to KVM, but in the short term we probably need to stay on VMware.
>> On Dec 9, 2017 6:26 PM, "Donny Davis"  wrote:
>>
>>> Just curious, but why not just use a hypervisor with rbd support? Are
>>> there VMware-specific features you are reliant on?
>>>
>>> On Fri, Dec 8, 2017 at 4:08 PM Brady Deetz  wrote:
>>>
 I'm testing using RBD as VMware datastores. I'm currently testing with
 krbd+LVM on a tgt target hosted on a hypervisor.

 My Ceph cluster is HDD backed.

 In order to help with write latency, I added an SSD drive to my
 hypervisor and made it a writeback cache for the rbd via LVM. So far I've
 managed to smooth out my 4k write latency and have some pleasing results.

 Architecturally, my current plan is to deploy an iSCSI gateway on each
 hypervisor hosting that hypervisor's own datastore.

 Does anybody have any experience with this kind of configuration,
 especially with regard to LVM writeback caching combined with RBD?


Re: [ceph-users] RBD+LVM -> iSCSI -> VMWare

2017-12-09 Thread Donny Davis
What I am getting at is that instead of sinking a bunch of time into this
band-aid, why not sink that time into a hypervisor migration? It seems well
timed, if you ask me.

There are even tools to make that migration easier

http://libguestfs.org/virt-v2v.1.html
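
Roughly, virt-v2v can pull a guest straight out of vCenter/ESXi and convert it
for KVM/libvirt in one step; the vpx:// URL, guest name and target storage
pool below are placeholders, so adapt them to your environment:

  virt-v2v -ic 'vpx://administrator@vcenter.example/Datacenter/cluster/esxi-host1?no_verify=1' \
           guest-name -o libvirt -os default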

You should ultimately move your hypervisor instead of building a one-off
case for Ceph. Ceph works really well if you stay inside the box. So does
KVM. Together they work like gangbusters.

I know that doesn't really answer your OP, but this is what I would do.

~D

On Sat, Dec 9, 2017 at 7:56 PM Brady Deetz  wrote:

> We have over 150 VMs running in VMware. We also have 2PB of Ceph for our
> filesystem. With our VMware storage aging and not providing the IOPS we
> need, we are considering and hoping to use Ceph. Ultimately, yes, we will
> move to KVM, but in the short term we probably need to stay on VMware.
> On Dec 9, 2017 6:26 PM, "Donny Davis"  wrote:
>
>> Just curious, but why not just use a hypervisor with rbd support? Are
>> there VMware-specific features you are reliant on?
>>
>> On Fri, Dec 8, 2017 at 4:08 PM Brady Deetz  wrote:
>>
>>> I'm testing using RBD as VMware datastores. I'm currently testing with
>>> krbd+LVM on a tgt target hosted on a hypervisor.
>>>
>>> My Ceph cluster is HDD backed.
>>>
>>> In order to help with write latency, I added an SSD drive to my
>>> hypervisor and made it a writeback cache for the rbd via LVM. So far I've
>>> managed to smooth out my 4k write latency and have some pleasing results.
>>>
>>> Architecturally, my current plan is to deploy an iSCSI gateway on each
>>> hypervisor hosting that hypervisor's own datastore.
>>>
>>> Does anybody have any experience with this kind of configuration,
>>> especially with regard to LVM writeback caching combined with RBD?


Re: [ceph-users] RBD+LVM -> iSCSI -> VMWare

2017-12-09 Thread Brady Deetz
We have over 150 VMs running in VMware. We also have 2PB of Ceph for our
filesystem. With our VMware storage aging and not providing the IOPS we
need, we are considering and hoping to use Ceph. Ultimately, yes, we will
move to KVM, but in the short term we probably need to stay on VMware.

On Dec 9, 2017 6:26 PM, "Donny Davis"  wrote:

> Just curious, but why not just use a hypervisor with rbd support? Are there
> VMware-specific features you are reliant on?
>
> On Fri, Dec 8, 2017 at 4:08 PM Brady Deetz  wrote:
>
>> I'm testing using RBD as VMware datastores. I'm currently testing with
>> krbd+LVM on a tgt target hosted on a hypervisor.
>>
>> My Ceph cluster is HDD backed.
>>
>> In order to help with write latency, I added an SSD drive to my
>> hypervisor and made it a writeback cache for the rbd via LVM. So far I've
>> managed to smooth out my 4k write latency and have some pleasing results.
>>
>> Architecturally, my current plan is to deploy an iSCSI gateway on each
>> hypervisor hosting that hypervisor's own datastore.
>>
>> Does anybody have any experience with this kind of configuration,
>> especially with regard to LVM writeback caching combined with RBD?


Re: [ceph-users] RBD+LVM -> iSCSI -> VMWare

2017-12-09 Thread Donny Davis
Just curious, but why not just use a hypervisor with rbd support? Are there
VMware-specific features you are reliant on?

On Fri, Dec 8, 2017 at 4:08 PM Brady Deetz  wrote:

> I'm testing using RBD as VMware datastores. I'm currently testing with
> krbd+LVM on a tgt target hosted on a hypervisor.
>
> My Ceph cluster is HDD backed.
>
> In order to help with write latency, I added an SSD drive to my hypervisor
> and made it a writeback cache for the rbd via LVM. So far I've managed to
> smooth out my 4k write latency and have some pleasing results.
>
> Architecturally, my current plan is to deploy an iSCSI gateway on each
> hypervisor hosting that hypervisor's own datastore.
>
> Does anybody have any experience with this kind of configuration,
> especially with regard to LVM writeback caching combined with RBD?


[ceph-users] RBD+LVM -> iSCSI -> VMWare

2017-12-08 Thread Brady Deetz
I'm testing using RBD as VMware datastores. I'm currently testing with
krbd+LVM on a tgt target hosted on a hypervisor.
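
For anyone unfamiliar with tgt, the export itself is just a stanza in
/etc/tgt/targets.conf along these lines; the IQN, device path and initiator
address are placeholders rather than my exact config, and the device is the
cache-backed LV described further down:

  <target iqn.2017-12.lab.example:hv01-ds1>
      # LVM volume with the RBD image behind it (see the cache sketch below)
      backing-store /dev/vg_ds/datastore
      # the ESXi host's iSCSI vmkernel IP
      initiator-address 192.168.1.31
  </target>

(tgt can also talk to RBD directly via its bs_rbd backing store if it was
built with rbd support, which skips the krbd mapping entirely, but then you
lose the LVM cache layer.)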

My Ceph cluster is HDD backed.

In order to help with write latency, I added an SSD drive to my hypervisor
and made it a writeback cache for the rbd via LVM. So far I've managed to
smooth out my 4k write latency and have some pleasing results.
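
For reference, this kind of setup is plain lvmcache in writeback mode in
front of the mapped RBD device. A minimal sketch, with device names and sizes
as placeholders rather than my exact values, and the usual caveat that
writeback means dirty blocks live only on the local SSD until they are
flushed:

  rbd map rbd/vmware_ds1                    # shows up as e.g. /dev/rbd0
  pvcreate /dev/rbd0 /dev/nvme0n1
  vgcreate vg_ds /dev/rbd0 /dev/nvme0n1
  # data LV on the RBD device, cache data + metadata LVs on the SSD
  lvcreate -n datastore -l 100%PVS vg_ds /dev/rbd0
  lvcreate -n cache -L 200G vg_ds /dev/nvme0n1
  lvcreate -n cache_md -L 2G vg_ds /dev/nvme0n1
  # combine into a writeback cache-pool and attach it to the data LV
  lvconvert -y --type cache-pool --cachemode writeback \
            --poolmetadata vg_ds/cache_md vg_ds/cache
  lvconvert -y --type cache --cachepool vg_ds/cache vg_ds/datastore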

Architecturally, my current plan is to deploy an iSCSI gateway on each
hypervisor hosting that hypervisor's own datastore.

Does anybody have any experience with this kind of configuration,
especially with regard to LVM writeback caching combined with RBD?