done that.
Hope that helps.
Regards,
Trent
On Thu, 16 May 2019 at 14:52, Alexandre DERUMIER wrote:
> Many thanks for the analysis !
>
>
> I'm going to test with 4K on a heavy MSSQL database to see if I'm seeing
> an improvement in IOPS/latency.
> I'll report results [...]

> [...]link to this download? Can only find some .cz site with
> the rpms.
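A hedged sketch of the kind of 4K latency test mentioned above, using fio against an RBD-backed block device inside the guest. The device path, queue depth, and run time are illustrative assumptions, not values from this thread:

```shell
# Sketch only: assumes fio is installed and /dev/vdb is a scratch
# RBD-backed disk whose contents may be destroyed. Measures 4K
# random-write IOPS and latency with direct I/O.
fio --name=4k-randwrite \
    --filename=/dev/vdb \
    --rw=randwrite --bs=4k \
    --ioengine=libaio --direct=1 \
    --iodepth=16 --numjobs=1 \
    --runtime=60 --time_based \
    --group_reporting
```

fio's completion-latency (clat) figures are the numbers to watch when comparing runs before and after a change.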
>
>
> -----Original Message-----
> From: Martin Verges [mailto:martin.ver...@croit.io]
> Sent: vrijdag 10 mei 2019 10:21
> To: Trent Lloyd
> Cc: ceph-users
> Subject: Re: [ceph-users] Poor performance fo
I was recently investigating a performance problem for a reasonably sized
OpenStack deployment having around 220 OSDs (3.5" 7200 RPM SAS HDDs) with
NVMe journals. The primary workload is Windows guests backed by Cinder RBD
volumes.
This specific deployment runs Ceph Jewel (FileStore + SimpleMessenger).
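To confirm which object store and messenger implementation a running OSD is actually using, the OSD's admin socket can be queried on the OSD host. `osd.0` below is a placeholder for any OSD id local to that host:

```shell
# Run on the OSD host; assumes the default admin socket location.
ceph daemon osd.0 config get osd_objectstore  # "filestore" on a deployment like this
ceph daemon osd.0 config get ms_type          # messenger type, e.g. "simple" or "async"
```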
Jens-Christian Fischer writes:
>
> I think we (i.e. Christian) found the problem:
> We created a test VM with 9 mounted RBD volumes (no NFS server). As soon as
> he hit all disks, we started to experience these 120 second timeouts. We
> realized that the QEMU process on the hypervisor is opening a