The use case is public VPS hosting.
On Mon, Jul 15, 2019 at 7:20 PM Andrija Panic
wrote:
XFS ("puke" smiley from Skype goes here..), some funny issues historically,
but it should be "better" than ext4 (I still prefer ext4 if possible).
Ceph - an awful lot of knowledge and expertise is required, especially to tune
it for NVMe - but do you want LOCAL or DISTRIBUTED storage (since Ceph is
distributed)?
I think I would go for ceph …
__
Sven Vogel
Teamlead Platform
EWERK RZ GmbH
Brühl 24, D-04109 Leipzig
P +49 341 42649 - 11
F +49 341 42649 - 18
s.vo...@ewerk.com
www.ewerk.com
Managing Directors:
Dr. Erik Wende, Hendrik Schubert, Frank Richter
Commercial Register: Leipzig HRB 17023
Certified
Hello,
Isn't XFS better than ext4 for that?
Thanks.
On Mon, Jul 15, 2019 at 06:23:11PM +0700, Ivan Kudryavtsev wrote:
Thanks to all
On Mon, Jul 15, 2019 at 6:31 PM wrote:
What's the use case exactly, what are you aiming to achieve?
Any layer you add will increase latency, NFS especially, even if it's
local.
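The loopback-NFS penalty is easy to measure before committing; a minimal fio sketch, assuming hypothetical mount points where /data is the local ext4 filesystem and /mnt/nfs is the same disk re-exported and mounted over NFS on the same host (the commands are only printed, since they need fio and real mounts):

```shell
# Hypothetical paths: /data = local ext4 mount, /mnt/nfs = the same
# export re-mounted over NFS on this host. 4k random reads at queue
# depth 1 expose per-IO latency rather than throughput.
FIO_OPTS="--rw=randread --bs=4k --iodepth=1 --direct=1 --size=256m \
--runtime=30 --time_based --ioengine=psync"

CMD_LOCAL="fio --name=local $FIO_OPTS --filename=/data/fio.test"
CMD_NFS="fio --name=nfs $FIO_OPTS --filename=/mnt/nfs/fio.test"

# Print rather than run: fio and the mounts must exist first.
echo "$CMD_LOCAL"
echo "$CMD_NFS"
```

Comparing the clat (completion latency) lines of the two runs shows what the extra NFS round trip costs on otherwise identical hardware.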
On 2019-07-15 14:56, Fariborz Navidan wrote:
Thank you for opinions.
Also I would ask you another question. how would difference in mater of
performance
Thank you for the opinions.
Also, I would ask you another question: how would performance differ
between pure local storage and local NFS storage (local drives
exported and mounted on the same machines)?
On Mon, Jul 15, 2019 at 6:15 PM Ivan Kudryavtsev
wrote:
ZFS is not a good choice for high-IO applications. Use the simplest
layering possible.
On Mon, Jul 15, 2019, 18:50 Christoffer Pedersen wrote:
Hello,
ZFS is unfortunately not supported, otherwise I would have recommended
that. But if you are going with local systems (no NFS/iSCSI), ext4 would be
the way to go.
On Mon, Jul 15, 2019 at 1:23 PM Ivan Kudryavtsev
wrote:
Hi,
if you use a local fs, use just ext4 over the disk topology that
gives the desired redundancy.
E.g. JBOD or RAID-0 work well when a data-safety policy is established and
backups are maintained well.
Otherwise look at RAID-5, RAID-10 or RAID-6.
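For that thin local layering, mdadm plus plain ext4 is about as simple as it gets; a sketch with hypothetical device names and mount point (the commands need root and real disks, so they are only printed here):

```shell
# Hypothetical NVMe device names -- adjust for your hardware.
DISKS="/dev/nvme0n1 /dev/nvme1n1"

# RAID-0 stripe (fast, zero redundancy: only sane with good backups),
# then ext4 directly on the md device -- no extra layers in between.
OUT=$(cat <<EOF
mdadm --create /dev/md0 --level=0 --raid-devices=2 $DISKS
mkfs.ext4 /dev/md0
mount -o noatime /dev/md0 /var/lib/libvirt/images
EOF
)
echo "$OUT"
```

The same pattern covers RAID-5/6/10 by changing --level and adding devices.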
On Mon, Jul 15, 2019, 18:05 wrote:
Isn't that a bit apples and oranges? Ceph is a network-distributed
thingy, not a local solution.
I'd use Linux software RAID + LVM; it's the only one supported (by
CentOS/RedHat).
ZFS on Linux could be interesting if it were supported by CloudStack, but
it is not; you'd end up using qcow2.
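That software RAID + LVM layering can be sketched in a few commands (hypothetical device and volume names; they need root and real disks, so they are only printed here):

```shell
# Hypothetical names: two NVMe disks striped into /dev/md0,
# then LVM on top so volumes can be resized or snapshotted later.
OUT=$(cat <<'EOF'
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
pvcreate /dev/md0
vgcreate vg_primary /dev/md0
lvcreate -l 100%FREE -n lv_primary vg_primary
mkfs.ext4 /dev/vg_primary/lv_primary
EOF
)
echo "$OUT"
```

LVM does add one thin layer over md, but it buys online resize and snapshots, which fits the CentOS/RedHat-supported stack mentioned above.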
Hello,
Which one do you think is faster to use for local soft RAID-0 for primary
storage: Ceph, ZFS, or the built-in soft RAID manager of CentOS? Which one
gives us better IOPS and IO latency on NVMe SSD disks? The storage will be
used for a production cloud environment where around 60 VMs will run.