[ceph-users] Re: EC Pools w/ RBD - IOPs

2020-02-13 Thread Vitaliy Filippov
Please do not even think about using an EC pool with k=2, m=1. See other posts here; just don't.

> Why not?

-- With best regards, Vitaliy Filippov
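One way to see why k=2, m=1 is a poor trade (a back-of-the-envelope sketch, not from the original thread): it stores 1.5x the data but survives only a single OSD failure, the same durability as 2x replication and worse than the usual 3x, while still paying erasure-coding overhead on every small RBD write.

```python
# Back-of-the-envelope comparison of EC k=2,m=1 against replication.
# Storage overhead = total chunks / data chunks.
# Failures tolerated = m for EC, replicas - 1 for replication.

def ec_profile(k, m):
    return {"overhead": (k + m) / k, "failures_tolerated": m}

def replica_profile(size):
    return {"overhead": float(size), "failures_tolerated": size - 1}

ec21 = ec_profile(2, 1)    # overhead 1.5x, tolerates 1 failure
rep3 = replica_profile(3)  # overhead 3.0x, tolerates 2 failures
```

So k=2, m=1 offers replica-2-level durability, and on top of that small overwrites on an EC pool incur a read-modify-write cycle that replication avoids.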

[ceph-users] Re: Understanding Bluestore performance characteristics

2020-02-04 Thread Vitaliy Filippov

[ceph-users] Re: Choosing suitable SSD for Ceph cluster

2019-10-25 Thread Vitaliy Filippov
…BW=601MiB/s (630MB/s)(35.2GiB/60003msec)

fio --filename=/dev/nvme0n1 --direct=1 --sync=1 --rw=write --bs=4M --numjobs=5 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test

write: IOPS=679, BW=2717MiB/s (2849MB/s)(159GiB/60005msec)

[ceph-users] Re: Choosing suitable SSD for Ceph cluster

2019-10-24 Thread Vitaliy Filippov
Especially https://yourcmc.ru/wiki/Ceph_performance#CAPACITORS.21, but I recommend reading the whole article.
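The capacitor check that article describes boils down to measuring single-threaded O_DSYNC 4k write latency. A sketch of the kind of fio invocation involved (/dev/sdX is a placeholder; this writes to the raw device, so run it only against a disk whose data you can destroy):

```shell
# Single-job, queue-depth-1 sync 4k write test. Consumer SSDs without
# power-loss protection (supercapacitors) must flush their cache on every
# write and collapse to hundreds of IOPS here; enterprise SSDs with PLP
# can safely acknowledge from cache and stay in the tens of thousands.
fio -ioengine=libaio -direct=1 -sync=1 -name=test -bs=4k \
    -numjobs=1 -iodepth=1 -rw=write -runtime=60 -filename=/dev/sdX
```

This matters for Ceph because BlueStore journals and OSD commits are exactly this kind of synchronous small write.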

[ceph-users] Re: Choosing suitable SSD for Ceph cluster

2019-10-24 Thread Vitaliy Filippov
> …release (10) will handle the O_DSYNC flag differently? Perhaps I should simply invest in faster (and bigger) hard disks and forget the SSD-cluster idea? Thank you in advance for any help, Best Regards, Hermann

[ceph-users] Re: RDMA

2019-10-15 Thread Vitaliy Filippov

[ceph-users] Re: Strange hardware behavior

2019-09-04 Thread Vitaliy Filippov
Please never use dd for disk benchmarks. Use fio. For linear write:

fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -iodepth=32 -rw=write -runtime=60 -filename=/dev/sdX
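For random IOPS rather than linear throughput, the same tool works with small blocks and a deeper queue (again a sketch, not from the original message; /dev/sdX is a placeholder and the run is destructive to data on the device):

```shell
# Random 4k write IOPS at high queue depth against the raw device.
# -invalidate=1 drops the page cache for the file first; -direct=1
# bypasses buffered I/O so the drive itself is measured.
fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4k \
    -iodepth=128 -rw=randwrite -runtime=60 -filename=/dev/sdX
```

Unlike dd, fio reports latency percentiles and sustained IOPS rather than a single cached throughput number.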

[ceph-users] Re: Mapped rbd is very slow

2019-08-15 Thread Vitaliy Filippov