Re: [VOTE] Ceph, ZFS or Linux Soft RAID?

2019-07-15 Thread Fariborz Navidan
The use case is public VPS hosting.

On Mon, Jul 15, 2019 at 7:20 PM Andrija Panic wrote:
> XFS ("puke" smiley from skype goes here..), some funny issues
> historically, but should be "better" than ext4 (I still prefer ext4 if
> possible). CEPH - an awful lot of knowledge and expertise is required, ...

Re: [VOTE] Ceph, ZFS or Linux Soft RAID?

2019-07-15 Thread Andrija Panic
XFS ("puke" smiley from skype goes here..), some funny issues historically, but should be "better" than ext4 (I still prefer ext4 if possible).

CEPH - an awful lot of knowledge and expertise is required, especially to tune it for NVMe - but do you want LOCAL or DISTRIBUTED storage (since Ceph is ...

Re: [VOTE] Ceph, ZFS or Linux Soft RAID?

2019-07-15 Thread Sven Vogel
I think I would go for Ceph …

__
Sven Vogel
Teamlead Platform
EWERK RZ GmbH

Re: [VOTE] Ceph, ZFS or Linux Soft RAID?

2019-07-15 Thread Vladimir Melnik
Hello,

Isn't XFS better than ext4 for that?

Thanks.

On Mon, Jul 15, 2019 at 06:23:11PM +0700, Ivan Kudryavtsev wrote:
> if you use local fs, use just ext4 over the required disk topology which
> gives the desired redundancy.

Re: [VOTE] Ceph, ZFS or Linux Soft RAID?

2019-07-15 Thread Fariborz Navidan
Thanks to all.

On Mon, Jul 15, 2019 at 6:31 PM wrote:
> What's the use case exactly, what are you aiming to achieve?
>
> Any layer you add will increase latency, NFS especially, even if it's
> local.
>
> On 2019-07-15 14:56, Fariborz Navidan wrote:
> > Thank you for the opinions. Also I ...

Re: [VOTE] Ceph, ZFS or Linux Soft RAID?

2019-07-15 Thread nux
What's the use case exactly, what are you aiming to achieve?

Any layer you add will increase latency, NFS especially, even if it's local.

On 2019-07-15 14:56, Fariborz Navidan wrote:
> Thank you for the opinions. Also I would ask you another question: what
> would the difference in performance be ...

Re: [VOTE] Ceph, ZFS or Linux Soft RAID?

2019-07-15 Thread Fariborz Navidan
Thank you for the opinions.

Also I would ask you another question: what would the difference in performance be between pure local storage and local NFS storage (local drives exported over NFS and mounted on the same machines)?

On Mon, Jul 15, 2019 at 6:15 PM Ivan Kudryavtsev wrote:
> ZFS is not a good ...
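For reference, the "local NFS" setup being asked about (a local directory exported over NFS and mounted back on the same host) can be sketched as below. The paths and export options are illustrative assumptions, not something stated in the thread:

```shell
# Hypothetical loopback NFS sketch; /srv/primary is assumed to sit
# on the local NVMe array.

# Export only to this host, with synchronous writes.
echo '/srv/primary 127.0.0.1(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# Mount the export back over the loopback interface.
mount -t nfs 127.0.0.1:/srv/primary /mnt/primary

# Every read and write now passes through both the NFS client and the
# NFS server stack, which is the extra latency nux mentions.
```

Even over loopback, this adds the NFS protocol round-trip on top of the local filesystem, which is why the thread leans toward plain local ext4 when no sharing is needed.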

Re: [VOTE] Ceph, ZFS or Linux Soft RAID?

2019-07-15 Thread Ivan Kudryavtsev
ZFS is not a good choice for high-IO applications. Use the simplest layering possible.

Mon, Jul 15, 2019, 18:50, Christoffer Pedersen:
> Hello,
>
> ZFS is unfortunately not supported, otherwise I would have recommended
> that. But if you are going with local systems (no nfs/iscsi), ext4 would ...

Re: [VOTE] Ceph, ZFS or Linux Soft RAID?

2019-07-15 Thread Christoffer Pedersen
Hello,

ZFS is unfortunately not supported, otherwise I would have recommended that. But if you are going with local systems (no nfs/iscsi), ext4 would be the way to go.

On Mon, Jul 15, 2019 at 1:23 PM Ivan Kudryavtsev wrote:
> Hi,
>
> if you use local fs, use just ext4 over the required disk ...

Re: [VOTE] Ceph, ZFS or Linux Soft RAID?

2019-07-15 Thread Ivan Kudryavtsev
Hi,

If you use a local fs, use just ext4 over the required disk topology which gives the desired redundancy. E.g. JBOD and R0 work well when a data safety policy is established and backups are maintained well. Otherwise look to R5, R10 or R6.

Mon, Jul 15, 2019, 18:05:
> Isn't that a bit apples ...
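Ivan's suggestion (plain ext4 directly on top of whatever md topology gives the redundancy you need) can be sketched as follows. The device names, array level and mount point are illustrative assumptions:

```shell
# Hypothetical sketch: four NVMe drives in md RAID-10, ext4 on top.
# For the RAID-0 the original poster asked about, use --level=0 instead
# (no redundancy, so backups become mandatory, as Ivan notes).
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Plain ext4 directly on the array - no extra layers in the IO path.
mkfs.ext4 /dev/md0
mount /dev/md0 /var/lib/libvirt/images
```

The point of the sketch is the short stack: VM disk image, ext4, md, NVMe, with nothing else in between.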

Re: [VOTE] Ceph, ZFS or Linux Soft RAID?

2019-07-15 Thread nux
Isn't that a bit apples and oranges? Ceph is a network distributed thingy, not a local solution.

I'd use linux/software raid + lvm, it's the only one supported (by CentOS/RedHat). ZFS on Linux could be interesting if it were supported by CloudStack, but it is not; you'd end up using qcow2 ...
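The md RAID + LVM stack nux recommends can be sketched like this; the device names, volume group name and sizes are assumptions for illustration only:

```shell
# Hypothetical sketch: LVM on top of an md array, the stack nux describes.
mdadm --create /dev/md0 --level=0 --raid-devices=2 \
      /dev/nvme0n1 /dev/nvme1n1

pvcreate /dev/md0              # make the array an LVM physical volume
vgcreate vg_primary /dev/md0   # volume group for primary storage
lvcreate -L 100G -n vm_store vg_primary
mkfs.ext4 /dev/vg_primary/vm_store
```

LVM here buys flexible resizing and snapshots on top of the array, while staying within the tooling CentOS/RedHat support out of the box.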

[VOTE] Ceph, ZFS or Linux Soft RAID?

2019-07-15 Thread Fariborz Navidan
Hello,

Which one do you think is faster to use for local soft RAID-0 for primary storage: Ceph, ZFS, or the built-in soft RAID manager of CentOS? Which one gives us better IOPS and IO latency on NVMe SSD disks? The storage will be used for a production cloud environment where around 60 VMs will run ...