What are the specs of your nodes? And which specific hard disks are you using?
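
For scale: a rough back-of-envelope using the numbers from the quoted post (3 hosts, 10 HDDs each, 3-replica pool) suggests the reported ~4 MB/s is orders of magnitude below what the hardware should sustain. The ~100 MB/s per-HDD sequential figure below is an assumption, not from the original post:

```python
# Back-of-envelope client write ceiling for the cluster described below.
# Assumption (not from the original post): ~100 MB/s sequential write per HDD.
hosts = 3
osds_per_host = 10
per_hdd_mb_s = 100        # assumed sequential write speed of one 4 TB HDD
replication = 3           # 3-replica pool, per the original post
reported_mb_s = 4         # client throughput reported in the original post

raw_mb_s = hosts * osds_per_host * per_hdd_mb_s
client_ceiling_mb_s = raw_mb_s / replication   # 3x replication triples writes

print(f"aggregate raw HDD bandwidth: ~{raw_mb_s} MB/s")
print(f"naive client write ceiling: ~{client_ceiling_mb_s:.0f} MB/s")
print(f"reported: {reported_mb_s} MB/s "
      f"({100 * reported_mb_s / client_ceiling_mb_s:.2f}% of ceiling)")
```

Even allowing generous overhead for iSCSI, small-block I/O, and journaling on the same spindles, 4 MB/s is a tiny fraction of that ceiling, so knowing the exact node and disk specs matters.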

On Fri, May 29, 2020, 18:41 Salsa <[email protected]> wrote:

> I have a 3-host Ceph storage setup with 10 4 TB HDDs per host. I defined a
> 3-replica RBD pool and some images and presented them to a VMware host via
> iSCSI, but the write performance is so bad that I managed to freeze a VM
> doing a big rsync to a datastore inside Ceph and had to reboot its host
> (it seems I filled up VMware's iSCSI queue).
>
> Right now I'm getting write latencies from 20ms to 80 ms (per OSD) and
> sometimes peaking at 600 ms (per OSD).
> Client throughput is giving me around 4 MB/s.
>
> Using a 4 MB stripe-1 image I got 1,955,359 B/s inside the VM.
> On a 1 MB stripe-1 image I got 2,323,206 B/s inside the same VM.
>
> I think this performance is far slower than it should be, and that I can
> fix it by correcting some configuration.
>
> Any advice?
>
> --
> Salsa
>
> Sent with [ProtonMail](https://protonmail.com) Secure Email.
> _______________________________________________
> ceph-users mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
>
