For a primitive approach to NFS HA, you can consider just using DRBD.
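
As a rough sketch, assuming two nodes nfs1/nfs2 (hypothetical names,
addresses, and backing disk) replicating the NFS export's block device,
the DRBD resource could look like this, with failover of the NFS service
itself handled by something like Pacemaker or keepalived:

  resource r0 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk internal;
    on nfs1 {
      address 10.0.0.1:7789;
    }
    on nfs2 {
      address 10.0.0.2:7789;
    }
  }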

I think LINSTOR is not yet supported here.



On Fri, Oct 29, 2021 at 2:29 PM Piotr Pisz <[email protected]> wrote:

> Hi
>
> So we plan to use LINSTOR in parallel with Ceph, as a fast resource on
> NVMe cards.
> Its advantage is that it natively supports ZFS with deduplication and
> compression :-)
> The test results were more than passable.
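>
> For reference, deduplication and compression on ZFS are just pool/dataset
> properties (the pool and device names below are made up):
>
>   zpool create tank /dev/nvme0n1
>   zfs set compression=lz4 tank
>   zfs set dedup=on tank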
>
> Regards,
> Piotr
>
>
> -----Original Message-----
> From: Mauro Ferraro - G2K Hosting <[email protected]>
> Sent: Thursday, October 28, 2021 2:02 PM
> To: [email protected]; Pratik Chandrakar <
> [email protected]>
> Subject: Re: Experience with clustered/shared filesystems based on SAN
> storage on KVM?
>
> Hi,
>
> We are trying to set up a lab with ACS 4.16 and Linstor. As soon as we
> finish the tests we can share the results. Has anyone already tried this
> technology?
>
> Regards,
>
> On 28/10/2021 at 02:34, Pratik Chandrakar wrote:
> > Since NFS alone doesn't offer HA, what do you recommend for HA NFS?
> >
> > On Thu, Oct 28, 2021 at 7:37 AM Hean Seng <[email protected]> wrote:
> >
> >> I had similar considerations when I started exploring CloudStack, but
> >> in reality a clustered filesystem is not easy to maintain. Your choices
> >> seem to be OCFS2 or GFS2: GFS2 is hard to maintain and lives in Red
> >> Hat, while OCFS2 is nowadays only maintained in Oracle Linux. I believe
> >> you do not want to choose a solution that is very proprietary. Thus
> >> plain SAN or iSCSI is not really a direct solution here, unless you
> >> encapsulate it in NFS facing CloudStack storage, as sketched below.
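> >>
> >> A rough sketch of that NFS-encapsulation option (device, mount point,
> >> and subnet below are hypothetical): one head node mounts the SAN LUN
> >> with an ordinary local filesystem and exports it over NFS:
> >>
> >>   mkfs.xfs /dev/mapper/san-lun
> >>   mount /dev/mapper/san-lun /export/primary
> >>   echo '/export/primary 10.0.0.0/24(rw,no_root_squash,sync)' >> /etc/exports
> >>   exportfs -ra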
> >>
> >> CloudStack works well on both Ceph and NFS, but performance-wise NFS
> >> is better. And all the documentation and features you see in
> >> CloudStack work perfectly on NFS.
> >>
> >> If you choose Ceph, you may have to accept some performance
> >> degradation.
> >>
> >>
> >>
> >> On Thu, Oct 28, 2021 at 12:44 AM Leandro Mendes
> >> <[email protected]>
> >> wrote:
> >>
> >>> I've been using Ceph in prod for volumes for some time. Note that
> >>> although I have had several CloudStack installations, this one runs
> >>> on top of Cinder, but it basically translates to libvirt and RADOS.
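> >>>
> >>> For context, on the libvirt side a Ceph volume is just an RBD-backed
> >>> disk definition, roughly like this (pool, image, and monitor host are
> >>> made-up names, and the auth/secret elements are omitted):
> >>>
> >>>   <disk type='network' device='disk'>
> >>>     <source protocol='rbd' name='cloudstack/vm-volume-1'>
> >>>       <host name='mon1' port='6789'/>
> >>>     </source>
> >>>     <target dev='vdb' bus='virtio'/>
> >>>   </disk>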
> >>>
> >>> It is totally stable, and performance IMHO is enough for virtualized
> >>> services.
> >>>
> >>> I/O might suffer some penalty due to the data replication inside
> >>> Ceph. For Elasticsearch, for instance, the degradation would be a bit
> >>> worse, as there is replication at the application side as well, but
> >>> IMHO, unless you need extremely low latency it would be fine.
> >>>
> >>>
> >>> Best,
> >>>
> >>> Leandro.
> >>>
> >>> On Thu, Oct 21, 2021, 11:20 AM Brussk, Michael <[email protected]>
> >>> wrote:
> >>>
> >>>> Hello community,
> >>>>
> >>>> today I need your experience and know-how with clustered/shared
> >>>> filesystems based on SAN storage to be used with KVM.
> >>>> We need to evaluate a clustered/shared filesystem based on SAN
> >>>> storage (no NFS or iSCSI), but we do not have any know-how or
> >>>> experience with this.
> >>>> Thus I would like to ask whether there are any production
> >>>> environments out there based on SAN storage on KVM.
> >>>> If so, which clustered/shared filesystem are you using, and what is
> >>>> your experience with it (stability, reliability, maintainability,
> >>>> performance, usability, ...)?
> >>>> Furthermore, if you have already had to choose between SAN storage
> >>>> and Ceph in the past, I would also like to hear about your
> >>>> considerations and results :)
> >>>>
> >>>> Regards,
> >>>> Michael
> >>>>
> >>
> >> --
> >> Regards,
> >> Hean Seng
> >>
> >
>
>

-- 
Regards,
Hean Seng
