Thank you all
My goal is to have an SSD-based Ceph (NVMe + SSD) cluster, so I need to
consider performance as well as reliability
(although I do realize that a performant cluster that breaks my VMware is
not ideal ;-))
It appears that NFS is the safe way to do it, but will it be the bottleneck?
Hi,
we use PetaSAN for our VMware cluster. It provides a web interface for
management and does clustered active-active iSCSI. For us the easy
management was the deciding factor, so we don't need to think about
how to configure iSCSI...
Regards,
Dennis
On 28.05.2018 at 21:42, the following was written:
On Mon, May 28, 2018 at 3:42 PM, Steven Vacaroaia wrote:
> Hi,
>
> I need to design and build a storage platform that will be "consumed" mainly
> by VMWare
>
> CEPH is my first choice
>
> As far as I can see, there are 3 ways CEPH storage can be made available to
> VMWare
>
> 1. iSCSI
> 2.
We are using the iSCSI gateway in ceph-12.2 with vSphere 6.5 as the client.
It's an active/passive setup, per LUN.
We chose this solution because that's what we could get RH support for, and it
sticks to the "no SPOF" philosophy.
Performance is ~25-30% slower than krbd mounting the same rbd.
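For reference, the krbd side of that comparison is easy to measure directly. A minimal sketch, assuming a pool named "rbd" and an image named "bench01" (both placeholder names), fio installed, and the rbd kernel module available:

```shell
# Create a test image and map it with the kernel rbd client
rbd create rbd/bench01 --size 10G
DEV=$(rbd map rbd/bench01)        # prints the device node, e.g. /dev/rbd0

# 4k random-write benchmark against the raw mapped device
fio --name=krbd-bench --filename="$DEV" --rw=randwrite --bs=4k \
    --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based

# Clean up
rbd unmap "$DEV"
rbd rm rbd/bench01
```

Running the same fio job against a datastore served through the iSCSI gateway gives the other number in the comparison.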
Hi,
I need to design and build a storage platform that will be "consumed"
mainly by VMware
Ceph is my first choice
As far as I can see, there are 3 ways Ceph storage can be made available to
VMware
1. iSCSI
2. NFS-Ganesha
3. mounted rbd to a Linux NFS server
Any suggestions / advice as to
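Option 3 above is mostly plain Linux plumbing. A minimal sketch, assuming a pool named "rbd", an image named "vmware-nfs", and an ESXi subnet of 10.0.0.0/24 (all placeholders), run as root on the NFS server:

```shell
# Create the image and map it with the kernel rbd client
rbd create rbd/vmware-nfs --size 1T
DEV=$(rbd map rbd/vmware-nfs)      # e.g. /dev/rbd0

# Put a filesystem on it and mount it
mkfs.xfs "$DEV"
mkdir -p /export/vmware-nfs
mount "$DEV" /export/vmware-nfs

# Export it to the ESXi hosts (subnet is a placeholder)
echo '/export/vmware-nfs 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -a
```

Note that the NFS server itself becomes a single point of failure unless it is clustered, which is the trade-off the "no SPOF" iSCSI gateway approach avoids.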
You might look into open vstorage as a gateway into ceph.
On Mon, May 28, 2018, 2:42 PM Steven Vacaroaia wrote:
> Hi,
>
> I need to design and build a storage platform that will be "consumed"
> mainly by VMWare
>
> CEPH is my first choice
>
> As far as I can see, there are 3 ways CEPH storage