Hi, which hardware do you consider proper?

On 31/10/2021 at 10:13, Wido den Hollander wrote:


On 30-10-2021 at 05:47, Hean Seng wrote:
Hi

For Ceph, a complete or sudden power-down is not expected in a proper
data center environment.


Ceph can handle a power outage just fine as long as you use the proper hardware.

I have seen Ceph survive many power outages and come back without any issues.

Wido

NFS is good, but besides its high-availability limitation, NFS is
formatted as a filesystem on the storage end. This can cause very high
CPU usage on the storage server if the VMs have high I/O requirements,
and performance issues may follow. This is especially true if you host
database and email servers, which write a lot of small files.

iSCSI and SAN are better for block storage requirements. However,
CloudStack's support for iSCSI or SAN can only be configured as local
storage, and a clustered filesystem is a nightmare.




On Sat, Oct 30, 2021 at 3:35 AM Mauro Ferraro - G2K Hosting <
mferr...@g2khosting.com> wrote:

Ignazio, many thanks for your feedback.

In the past we tried Ceph and it worked great, until an electrical outage
broke it, and we don't want to continue with this technology at least
until it gets better or until we can geo-replicate it to another site.
Another thing is that when something big happens, Ceph takes a lot of
time to recover and repair, so it will leave you offline until the
process finishes, and you never know whether your data is safe until it
does; we could say it is not. For a replica-3 cluster of 80 TB this can
take a week or more. That is not an option for us.
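In case it is useful to anyone watching a similar recovery, here is a minimal sketch that polls the cluster state until nothing is reported as degraded or misplaced. It assumes the `ceph` CLI and an admin keyring are available on the node where it runs, and that the `pgmap` JSON keys used below (`degraded_objects`, `misplaced_objects`) are present in your Ceph release:

```python
import json
import subprocess
import time

def cluster_status() -> dict:
    # Ask the cluster for its status as JSON; requires an admin keyring.
    out = subprocess.run(
        ["ceph", "-s", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

def watch_recovery(interval: int = 60) -> None:
    # Loop until no objects are reported as degraded or misplaced.
    while True:
        pgmap = cluster_status().get("pgmap", {})
        degraded = pgmap.get("degraded_objects", 0)
        misplaced = pgmap.get("misplaced_objects", 0)
        print(f"degraded={degraded} misplaced={misplaced}")
        if degraded == 0 and misplaced == 0:
            print("recovery finished")
            break
        time.sleep(interval)

if __name__ == "__main__":
    watch_recovery()
```

This only reports progress; how long the recovery itself takes still depends on cluster size, replica count and the recovery/backfill throttles.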

Previously we used NFS as separate primary storages, and we are still
on NFS until we get a replacement. NFS is great too, because you can
get a stable solution with KVM and QCOW2; if something happens you have
a good chance of starting everything again with a low risk of
degradation, and you can be back up in hours. The main problems are the
performance bottleneck and the high availability of the VMs on the storage side.
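For reference, this is roughly what such an NFS primary storage looks like at the KVM/libvirt level. A minimal sketch using the libvirt Python bindings; the server name `nfs.example.com`, the export path and the mount point are placeholders, and in a real ACS setup the agent defines this pool for you:

```python
import libvirt  # pip install libvirt-python

# Placeholder values: replace with your NFS server, export and mount point.
POOL_XML = """
<pool type='netfs'>
  <name>primary-nfs</name>
  <source>
    <host name='nfs.example.com'/>
    <dir path='/export/primary'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/mnt/primary-nfs</path>
  </target>
</pool>
"""

conn = libvirt.open("qemu:///system")
pool = conn.storagePoolDefineXML(POOL_XML, 0)  # persistent pool definition
pool.setAutostart(1)                           # mount when libvirtd starts
pool.create(0)                                 # mount it now
print(pool.info())                             # state, capacity, allocation, available
conn.close()
```

The HA gap mentioned above sits behind this: the pool is only as available as the single NFS server it points at.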

That is the main reason we want to test Linstor: it promises replication
with DRBD, HA, and performance, all in one. At this point we cannot
finish the configuration on ACS 4.16 RC2, because there is no
documentation and we are having some problems with Linstor, ZFS and ACS
that we are not able to pin down.
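In case it helps anyone trying the same combination, a minimal sketch of the LINSTOR side only (nodes, a ZFS-backed storage pool, a resource group), driven through the `linstor` CLI. The node names and IPs, the zpool name `tank`, and the pool/group names are placeholders, and this does not cover the ACS 4.16 plugin configuration itself:

```python
import subprocess

def linstor(*args: str) -> None:
    # Thin wrapper around the linstor client; assumes the controller is reachable.
    subprocess.run(["linstor", *args], check=True)

# Placeholder node names / IPs; the zpool "tank" must already exist on each node.
NODES = {"kvm1": "10.0.0.11", "kvm2": "10.0.0.12", "kvm3": "10.0.0.13"}

for name, ip in NODES.items():
    linstor("node", "create", name, ip)
    # Back the LINSTOR storage pool with the existing zpool called "tank".
    linstor("storage-pool", "create", "zfs", name, "zfs-pool", "tank")

# Resource group: 2 DRBD replicas placed on the ZFS-backed pool.
linstor("resource-group", "create", "acs-rg",
        "--storage-pool", "zfs-pool", "--place-count", "2")
linstor("volume-group", "create", "acs-rg")

# Example volume spawned from the group, e.g. for a quick DRBD sanity test.
linstor("resource-group", "spawn-resources", "acs-rg", "testvol", "10G")
```

If the spawned test resource comes up `UpToDate` on two nodes, the DRBD/ZFS layer is working and the remaining problems are likely on the ACS plugin side.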

What solution would you recommend for an ACS cluster deploying approximately 1000 VMs?

Regards,

Mauro

On 29/10/2021 at 15:56, Ignazio Cassano wrote:
Hi Mauro, what would you like to store on the clustered file system?
If you want to use it for virtual machine disks, I think NFS is a good
solution.
A clustered file system could be used if your virtualization nodes have
a lot of disks.
I usually prefer to use a NAS or a SAN.
If you have a SAN you can use iSCSI with clustered logical volumes.
Each logical volume can host a virtual machine volume, and clustered
LVM can handle the locks (see the sketch after this message).
Ignazio
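
A minimal sketch of that approach: a shared volume group on an iSCSI LUN, with lvmlockd/sanlock handling the locks. The device path, VG/LV names and size are placeholders; every KVM host must have the LUN logged in, run the lock daemons, and have `use_lvmlockd = 1` set in lvm.conf:

```python
import subprocess

def run(*cmd: str) -> None:
    # Helper to run a host-level command and fail loudly on error.
    subprocess.run(cmd, check=True)

SHARED_LUN = "/dev/disk/by-id/scsi-EXAMPLE"  # placeholder iSCSI LUN

# One-time, on every host: start the lock managers used by shared LVM
# (assumes use_lvmlockd = 1 in /etc/lvm/lvm.conf).
run("systemctl", "enable", "--now", "lvmlockd", "sanlock")

# One-time, on one host: create a shared volume group on the LUN.
run("vgcreate", "--shared", "vg_vms", SHARED_LUN)

# On every host that should use the VG: start its lockspace.
run("vgchange", "--lockstart", "vg_vms")

# Per VM disk: create the LV once, then activate it exclusively
# on whichever host is going to run that VM.
run("lvcreate", "-an", "-L", "50G", "-n", "vm0001-root", "vg_vms")
run("lvchange", "-aey", "vg_vms/vm0001-root")  # exclusive activation here
```

The exclusive activation is what prevents two hosts from writing to the same VM disk at once; that is the locking Ignazio refers to.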



On Thu 28 Oct 2021, 14:02 Mauro Ferraro - G2K Hosting
<mferr...@g2khosting.com> wrote:

     Hi,

     We are trying to set up a lab with ACS 4.16 and Linstor. As soon as we
     finish the tests we can share the results. Has anyone already tried
     this technology?

     Regards,

     On 28/10/2021 at 02:34, Pratik Chandrakar wrote:
     > Since NFS alone doesn't offer HA, what do you recommend for HA NFS?
     >
     > On Thu, Oct 28, 2021 at 7:37 AM Hean Seng <heans...@gmail.com>
     wrote:
     >
     >> I had similar considerations when I started exploring CloudStack,
     >> but in reality a clustered filesystem is not easy to maintain. You
     >> seem to have the choice of OCFS2 or GFS2; GFS2 is hard to maintain
     >> on Red Hat, and OCFS2 is recently only maintained in Oracle Linux.
     >> I believe you do not want to choose a solution that is very
     >> proprietary. Thus plain SAN or iSCSI is not really a direct
     >> solution here, unless you want to encapsulate it in NFS and present
     >> that to CloudStack storage.
     >>
     >> It works well on Ceph and NFS, but performance-wise NFS is better.
     >> And all the documentation and features you see in CloudStack work
     >> perfectly on NFS.
     >>
     >> If you choose Ceph, you may have to accept some performance
     >> degradation.
     >>
     >>
     >>
     >> On Thu, Oct 28, 2021 at 12:44 AM Leandro Mendes
<theflock...@gmail.com>
     >> wrote:
     >>
     >>> I've been using Ceph in production for volumes for some time. Note
     >>> that although I have had several CloudStack installations, this
     >>> one runs on top of Cinder, but it basically translates to libvirt
     >>> and rados.
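
For anyone curious what that libvirt/rados layer looks like, a minimal sketch of attaching an RBD-backed disk to a running guest via the libvirt Python bindings; the pool/image names, the domain name, the monitor address and the cephx secret UUID are all placeholders:

```python
import libvirt  # pip install libvirt-python

# Placeholders: Ceph pool/image, monitor address, and the libvirt secret UUID
# that stores the cephx key for the "libvirt" user.
DISK_XML = """
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='vms/vm0001-disk0'>
    <host name='10.0.0.21' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <target dev='vdb' bus='virtio'/>
</disk>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("vm0001")  # placeholder domain name
dom.attachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
conn.close()
```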
     >>>
     >>> It is totally stable, and performance, IMHO, is enough for
     >>> virtualized services.
     >>>
     >>> I/O might suffer some penalty due to the data replication inside
     >>> Ceph. For Elasticsearch, for instance, the degradation would be a
     >>> bit worse as there is replication on the application side as well,
     >>> but IMHO, unless you need extremely low latency, it would be OK.
     >>>
     >>>
     >>> Best,
     >>>
     >>> Leandro.
     >>>
     >>> On Thu, Oct 21, 2021, 11:20 AM Brussk, Michael <
     >> michael.bru...@nttdata.com
     >>> wrote:
     >>>
     >>>> Hello community,
     >>>>
     >>>> today I need your experience and know-how about clustered/shared
     >>>> filesystems based on SAN storage to be used with KVM.
     >>>> We need to consider a clustered/shared filesystem based on SAN
     >>>> storage (no NFS or iSCSI), but do not have any know-how or
     >>>> experience with this.
     >>>> Therefore I would like to ask whether there are any production
     >>>> environments out there based on SAN storage with KVM.
     >>>> If so, which clustered/shared filesystem are you using, and what
     >>>> is your experience with it (stability, reliability,
     >>>> maintainability, performance, usability, ...)?
     >>>> Furthermore, if you have already had to choose between SAN
     >>>> storage and Ceph in the past, I would also like to hear about
     >>>> your considerations and results :)
     >>>>
     >>>> Regards,
     >>>> Michael
     >>>>
     >>
     >> --
     >> Regards,
     >> Hean Seng
     >>
     >


