Thanks to all

On Mon, Jul 15, 2019 at 6:31 PM <n...@li.nux.ro> wrote:

> What's the use case exactly, what are you aiming to achieve?
>
> Any layer you add will increase latency, NFS especially, even if it's
> local.
>
>
> On 2019-07-15 14:56, Fariborz Navidan wrote:
> > Thank you for your opinions.
> >
> > I would also like to ask another question: how much difference in
> > performance would there be between pure local storage and local NFS
> > storage (local drives exported and mounted on the same machines)?
> >
> > On Mon, Jul 15, 2019 at 6:15 PM Ivan Kudryavtsev
> > <kudryavtsev...@bw-sw.com>
> > wrote:
> >
> >> ZFS is not a good choice for high-IO applications. Use the simplest
> >> layering possible.
> >>
> >> Mon, Jul 15, 2019, 18:50 Christoffer Pedersen <v...@vrod.dk>:
> >>
> >> > Hello,
> >> >
> >> > ZFS is unfortunately not supported, otherwise I would have recommended
> >> > it. But if you are going with local storage (no NFS/iSCSI), ext4 would
> >> > be the way to go.
> >> >
> >> > On Mon, Jul 15, 2019 at 1:23 PM Ivan Kudryavtsev
> >> > <kudryavtsev...@bw-sw.com> wrote:
> >> >
> >> > > Hi,
> >> > >
> >> > > If you use a local FS, just use ext4 over the disk topology that
> >> > > gives the desired redundancy.
> >> > >
> >> > > E.g. JBOD or RAID 0 work well when a data safety policy is
> >> > > established and backups are well maintained.
> >> > >
> >> > > Otherwise, look at RAID 5, RAID 10, or RAID 6.
> >> > >
> >> > > Mon, Jul 15, 2019, 18:05 <n...@li.nux.ro>:
> >> > >
> >> > > > Isn't that a bit apples and oranges? Ceph is a distributed network
> >> > > > storage system, not a local solution.
> >> > > >
> >> > > > I'd use Linux software RAID + LVM; it's the only option supported
> >> > > > (by CentOS/RedHat).
> >> > > >
> >> > > > ZFS on Linux could be interesting if it were supported by
> >> > > > CloudStack, but it is not; you'd end up using qcow2 (COW) files on
> >> > > > top of a COW filesystem, which could lead to issues. Also, ZFS is
> >> > > > not really the fastest FS out there, though it does have some nice
> >> > > > features.
> >> > > >
> >> > > > Did you really mean RAID 0? I hope you have backups. :)
> >> > > >
> >> > > > hth
> >> > > >
> >> > > >
> >> > > > On 2019-07-15 11:49, Fariborz Navidan wrote:
> >> > > > > Hello,
> >> > > > >
> >> > > > > Which one do you think is faster to use for a local soft RAID 0
> >> > > > > for primary storage: Ceph, ZFS, or the built-in soft RAID manager
> >> > > > > of CentOS? Which one can give us better IOPS and IO latency on
> >> > > > > NVMe SSD disks? The storage will be used for a production cloud
> >> > > > > environment where around 60 VMs will run on top of it.
> >> > > > >
> >> > > > > Your ideas are highly appreciated.
> >> > > >
> >> > >
> >> >
> >> >
> >> > --
> >> > Thanks,
> >> > Chris Pedersen
> >> >
> >>
>
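For anyone finding this thread later, Ivan's ext4-over-RAID suggestion might look something like the sketch below. This is only an illustration, not anything from the thread itself: the device names, array level, and mount point are placeholders (the /var/lib/libvirt/images path is a common KVM default, chosen here as an assumption).

```shell
# Sketch: Linux software RAID 10 + ext4 for local primary storage.
# Run as root; device names are placeholders -- substitute your own
# NVMe partitions.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1

# Format and mount (mount point is a typical KVM default; adjust as needed).
mkfs.ext4 -L primary /dev/md0
mkdir -p /var/lib/libvirt/images
mount /dev/md0 /var/lib/libvirt/images

# Persist the array config and add an fstab entry so it survives reboot.
mdadm --detail --scan >> /etc/mdadm.conf
echo 'LABEL=primary /var/lib/libvirt/images ext4 defaults,noatime 0 2' >> /etc/fstab
```

RAID 10 is used here per the "R10" suggestion above; for the RAID 0 setup the original poster described, `--level=0 --raid-devices=2` would apply instead, with the backup caveats already raised in the thread.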

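To put numbers on the local-versus-local-NFS question raised above, one approach is to run the same fio workload against both mount points. The sketch below assumes fio is installed and that /var/lib/libvirt/images is the direct local mount while /mnt/nfs-local is a loopback NFS mount of the same disks; both paths are placeholders.

```shell
# Sketch: compare 4k random-write IOPS and latency on the raw local
# filesystem versus an NFS export of the same disks mounted back on the
# same host. Directory paths are placeholders.
fio --name=local-direct --directory=/var/lib/libvirt/images \
    --rw=randwrite --bs=4k --size=1G --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting

fio --name=nfs-loopback --directory=/mnt/nfs-local \
    --rw=randwrite --bs=4k --size=1G --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting
```

Comparing the completion-latency (clat) percentiles between the two runs shows the overhead the NFS layer adds even when the export never leaves the host, which is the latency cost nux alludes to above.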