Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-11-01 Thread Mauro Ferraro - G2K Hosting

Hi, which hardware would you consider proper?


On 31/10/2021 at 10:13, Wido den Hollander wrote:



On 30-10-2021 at 05:47, Hean Seng wrote:

Hi

For Ceph, a complete power-down or a sudden loss of power is not expected in a proper data center environment.



Ceph can handle a power outage just fine as long as you use the proper hardware.


I have seen Ceph survive many power outages and come back without any issues.


Wido


Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-31 Thread Wido den Hollander




On 30-10-2021 at 05:47, Hean Seng wrote:

Hi

For Ceph, a complete power-down or a sudden loss of power is not expected in a proper data center environment.



Ceph can handle a power outage just fine as long as you use the proper hardware.


I have seen Ceph survive many power outages and come back without any issues.


Wido



Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-31 Thread Wido den Hollander




On 28-10-2021 at 07:34, Pratik Chandrakar wrote:

Since NFS alone doesn't offer HA, what do you recommend for HA NFS?


We use TrueNAS M50 NFS boxes, which have redundant controllers, for example.

Our environment has both Ceph (~5PB) and TrueNAS M50 (500TB, NFS) as storage.


TrueNAS is used in cases where Ceph doesn't meet the I/O latency requirements of some applications.


Wido



On Thu, Oct 28, 2021 at 7:37 AM Hean Seng  wrote:


I have similar consideration when start exploring  Cloudstack , but in
reality  Clustered Filesystem is not easy to maintain.  You seems have
choice of OCFS or GFS2 ,  gfs2 is hard to maintain and in redhat ,  ocfs
recently only maintained in oracle linux.  I believe you do not want to
choose solution that is very propriety .   Thus just SAN or ISCSI o is not
really a direct solution here , except you want to encapsulate it in NFS
and facing Cloudstack Storage.

It work good on CEPH and NFS , but performance wise,  NFS is better . And
all documentation and features you saw  in Cloudstack , it work perfectly
on NFS.

If you choose CEPH,  may be you have to compensate with some performance
degradation,



On Thu, Oct 28, 2021 at 12:44 AM Leandro Mendes 
wrote:


I've been using Ceph in prod for volumes for some time. Note that

although

I had several cloudstack installations,  this one runs on top of Cinder,
but it basic translates as libvirt and rados.

It is totally stable and performance IMHO is enough for virtualized
services.

IO might suffer some penalization due the data replication inside Ceph.
Elasticsearch for instance, the degradation would be a bit worse as there
is replication also in the application size, but IMHO, unless you need
extreme low latency it would be ok.


Best,

Leandro.

On Thu, Oct 21, 2021, 11:20 AM Brussk, Michael <

michael.bru...@nttdata.com



wrote:


Hello community,

today I need your experience and knowhow about clustered/shared
filesystems based on SAN storage to be used with KVM.
We need to consider about a clustered/shared filesystem based on SAN
storage (no NFS or iSCSI), but do not have any knowhow or experience

with

this.
Those I would like to ask if there any productive used environments out
there based on SAN storage on KVM?
If so, which clustered/shared filesystem you are using and how is your
experience with that (stability, reliability, maintainability,

performance,

useability,...)?
Furthermore, if you had already to consider in the past between SAN
storage or CEPH, I would also like to participate on your

considerations

and results :)

Regards,
Michael






--
Regards,
Hean Seng






Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-29 Thread Hean Seng
Hi

For Ceph, a complete power-down or a sudden loss of power is not expected in a proper data center environment.

NFS is good; however, besides its high-availability limitation, NFS is formatted as a filesystem at the storage end. This can cause very high CPU usage on the storage server if the VMs' I/O requirements are high, and performance issues may occur when that happens. This is especially true if you host database and email servers, which perform a lot of small-file writes.

iSCSI and SAN are better for block-storage requirements. However, CloudStack's support for iSCSI or SAN means they can only be configured as local storage, and a clustered filesystem is a nightmare.





Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-29 Thread Mauro Ferraro - G2K Hosting

Ignazio, many thanks for your feedback.

In the past we tried Ceph and it worked great, until an electrical outage broke it, and we don't want to continue with this technology at least until it gets better or we can geo-replicate it to another site. Another thing is that when something big happens, Ceph takes a lot of time to recover and repair, so it leaves you offline until the process finishes, and you never know whether your information is safe until it does; we could say it is not. For a replica-3 cluster of 80 TB this can take a week or more. That is not an option for us.


Previously we used NFS as separate primary storages, and we are still on NFS until we get a replacement. NFS is great too, because you can get a stable solution with KVM and QCOW2; if something happens, you have a good chance of starting everything again with a low risk of degradation, and you can be running again within hours. The main problems are the performance bottleneck and the high availability of the VMs on the storage side.
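
For reference, this is roughly what that NFS + QCOW2 layout looks like at the libvirt level on a single KVM host. It is only a minimal sketch with made-up names (the export nas01.example.com:/export/primary, the pool and volume names) and assumes the libvirt-python bindings are installed; it is not the exact way CloudStack itself provisions its pools.

# Minimal sketch: mount an NFS export as a libvirt "netfs" pool and create a
# qcow2 volume on it. Host, export path, and names below are placeholders.
import libvirt

POOL_XML = """
<pool type='netfs'>
  <name>primary-nfs</name>
  <source>
    <host name='nas01.example.com'/>
    <dir path='/export/primary'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/mnt/primary-nfs</path>
  </target>
</pool>
"""

VOL_XML = """
<volume>
  <name>vm-0002-root.qcow2</name>
  <capacity unit='GiB'>50</capacity>
  <target>
    <format type='qcow2'/>
  </target>
</volume>
"""

conn = libvirt.open("qemu:///system")
pool = conn.storagePoolDefineXML(POOL_XML, 0)
pool.create(0)                    # mounts the NFS export on the host
pool.setAutostart(1)
vol = pool.createXML(VOL_XML, 0)  # creates the qcow2 file on the share
print(vol.path())                 # e.g. /mnt/primary-nfs/vm-0002-root.qcow2
conn.close()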


That is the main reason we want to test Linstor: it promises features such as replication with DRBD, HA, and performance, all in one. At this point we cannot finish the configuration in ACS 4.16 RC2, because there is no documentation and we are having some problems with Linstor, ZFS and ACS that we have not been able to track down.


What solution would you recommend for an ACS cluster deploying approximately 1000 VMs?

Regards,

Mauro


Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-29 Thread Ignazio Cassano
Hi Mauro, what would you like to store on the clustered file system?
If you want to use it for virtual machine disks, I think NFS is a good solution.
A clustered file system could be used if your virtualization nodes have a lot of disks.
I usually prefer to use a NAS or a SAN.
If you have a SAN, you can use iSCSI with clustered logical volumes.
Each logical volume can host a virtual machine volume, and clustered LVM can handle the locks.
Ignazio
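
As a rough illustration of that iSCSI + clustered LVM layout on a KVM host, here is a minimal libvirt-python sketch. It assumes the shared LUN already carries a volume group (called vg_san here, a made-up name) and that LVM lock management (lvmlockd/sanlock or clvmd) is already configured on every host, so treat it as an outline rather than a complete recipe.

# Minimal sketch: expose an existing shared VG (on an iSCSI/SAN LUN) to libvirt
# as a "logical" storage pool and carve out one LV per virtual machine disk.
# Assumption: the VG "vg_san" exists on the shared LUN and LVM locking is set up.
import libvirt

POOL_XML = """
<pool type='logical'>
  <name>san-vg</name>
  <source>
    <name>vg_san</name>
  </source>
  <target>
    <path>/dev/vg_san</path>
  </target>
</pool>
"""

VOL_XML = """
<volume>
  <name>vm-0001-root</name>
  <capacity unit='GiB'>50</capacity>
</volume>
"""

conn = libvirt.open("qemu:///system")
pool = conn.storagePoolDefineXML(POOL_XML, 0)
pool.create(0)                    # activates the VG-backed pool
pool.setAutostart(1)
vol = pool.createXML(VOL_XML, 0)  # effectively an lvcreate: one LV per VM volume
print(vol.path())                 # e.g. /dev/vg_san/vm-0001-root
conn.close()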





Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-29 Thread Hean Seng
Hi Vivek

In which respects is XCP/Xen better than KVM? Performance? Is that with NFS on XCP as well?

On Fri, Oct 29, 2021 at 4:14 PM Vivek Kumar wrote:

> I have been using GFS2 with a shared mount point in production KVM for a
> long time. Trust me, you need an expert to manage your whole cluster,
> otherwise it becomes very hard to manage. NFS works pretty well with KVM.
> If you are planning to use iSCSI or FC, XenServer/XCP and VMware work far
> better than KVM and are very easy to manage.

-- 
Regards,
Hean Seng


Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-29 Thread Vivek Kumar
I have been using GFS2 with a shared mount point in production KVM for a long time. Trust me, you need an expert to manage your whole cluster, otherwise it becomes very hard to manage. NFS works pretty well with KVM. If you are planning to use iSCSI or FC, XenServer/XCP and VMware work far better than KVM and are very easy to manage.




Vivek Kumar
Sr. Manager - Cloud & DevOps 
IndiQus Technologies
M +91 7503460090 
www.indiqus.com







Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-29 Thread Hean Seng
As a primitive way of getting NFS HA, you can consider just using DRBD.

I think Linstor is not yet supported here.



On Fri, Oct 29, 2021 at 2:29 PM Piotr Pisz  wrote:

> Hi
>
> So we plan to use linstor in parallel to ceph as a fast resource on nvme
> cards.
> Its advantage is that it natively supports zfs with deduplication and
> compression :-)
> The test results were more than passable.
>
> Regards,
> Piotr

-- 
Regards,
Hean Seng


RE: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-29 Thread Piotr Pisz
Hi

So we plan to use Linstor in parallel to Ceph, as a fast resource on NVMe cards.
Its advantage is that it natively supports ZFS with deduplication and compression :-)
The test results were more than passable.

Regards,
Piotr





Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-28 Thread Mauro Ferraro - G2K Hosting

Hi,

We are setting up a lab with ACS 4.16 and Linstor. As soon as we finish the tests we can give you an overview of the results. Has someone already tried this technology?


Regards,






Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-27 Thread Pratik Chandrakar
Since NFS alone doesn't offer HA, what do you recommend for HA NFS?



-- 
*Regards,*
*Pratik Chandrakar*


Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-27 Thread Hean Seng
I had similar considerations when I started exploring CloudStack, but in reality a clustered filesystem is not easy to maintain. You essentially have the choice of OCFS2 or GFS2; GFS2 is hard to maintain and belongs to Red Hat, while OCFS2 is these days only maintained in Oracle Linux. I believe you do not want to choose a solution that is very proprietary. Thus a bare SAN or iSCSI LUN is not really a direct solution here, unless you want to encapsulate it in NFS and present that to CloudStack as storage.

It works well on Ceph and NFS, but performance-wise NFS is better. And all the documentation and features you see in CloudStack work perfectly on NFS.
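
To make the NFS point concrete: registering an NFS export as cluster-wide primary storage is a single createStoragePool API call. The sketch below uses the third-party "cs" Python client; the endpoint, keys, UUIDs and export URL are placeholders, so treat it as an outline rather than a copy-paste recipe.

# Minimal sketch: add an NFS export as cluster-scoped primary storage via the
# CloudStack API, using the third-party "cs" client (pip install cs).
# Endpoint, keys, IDs and the NFS URL are placeholders.
from cs import CloudStack

api = CloudStack(
    endpoint="https://cloudstack.example.com/client/api",
    key="API_KEY",
    secret="API_SECRET",
)

result = api.createStoragePool(
    name="primary-nfs-01",
    url="nfs://nas01.example.com/export/primary",
    zoneid="ZONE_UUID",
    podid="POD_UUID",
    clusterid="CLUSTER_UUID",
    scope="CLUSTER",
    hypervisor="KVM",
)
print(result)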

If you choose Ceph, you may have to accept some performance degradation.



>


-- 
Regards,
Hean Seng


Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-27 Thread Leandro Mendes
I've been using Ceph in prod for volumes for some time. Note that although I have had several CloudStack installations, this one runs on top of Cinder, but it basically translates to libvirt and RADOS.
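
For anyone wondering what "libvirt and rados" means in practice, below is a minimal sketch using the python-rados/python-rbd bindings: a VM data disk on Ceph is just an RBD image in a pool, which QEMU then attaches over librbd. The pool name, image name and config path are assumptions, and a real CloudStack or Cinder deployment does this for you.

# Minimal sketch: create a 20 GiB RBD image that could back a VM data disk.
# Requires python3-rados and python3-rbd plus a working /etc/ceph/ceph.conf;
# pool and image names are placeholders.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("cloudstack")   # RADOS pool holding the VM disks
    try:
        rbd.RBD().create(ioctx, "vm-0001-data", 20 * 1024 ** 3)
        image = rbd.Image(ioctx, "vm-0001-data")
        try:
            print("image size:", image.size(), "bytes")
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()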

It is totally stable and performance IMHO is enough for virtualized
services.

I/O might suffer some penalty due to the data replication inside Ceph. For Elasticsearch, for instance, the degradation would be a bit worse, as there is also replication on the application side, but IMHO, unless you need extremely low latency it would be OK.


Best,

Leandro.



Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-27 Thread Andrija Panic
a.v.o.i.d = avoid, due to clustered file system stability issues...

CEPH = an awful lot of knowledge is required to run this in production, and it is definitely a much better and more stable choice than clustered file systems.

Best,



-- 

Andrija Panić


Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-21 Thread Brussk, Michael
Hello community,

today I need your experience and know-how about clustered/shared filesystems based on SAN storage to be used with KVM.
We need to evaluate a clustered/shared filesystem based on SAN storage (no NFS or iSCSI), but do not have any know-how or experience with this.
Therefore I would like to ask whether there are any production environments out there based on SAN storage with KVM.
If so, which clustered/shared filesystem are you using, and what is your experience with it (stability, reliability, maintainability, performance, usability, ...)?
Furthermore, if you have already had to choose between SAN storage and Ceph in the past, I would also like to hear your considerations and results :)

Regards,
Michael